Prosecution Insights
Last updated: April 19, 2026
Application No. 18/361,641

Generation of Real-World Object from Captured Gameplay Video

Non-Final OA §103 §DP
Filed: Jul 28, 2023
Examiner: TRAN, VI N
Art Unit: 2117
Tech Center: 2100 — Computer Architecture & Software
Assignee: Sony Interactive Entertainment Inc.
OA Round: 3 (Non-Final)
Grant Probability: 46% (Moderate)
OA Rounds: 3-4
To Grant: 4y 1m
With Interview: 83%

Examiner Intelligence

Career Allow Rate: 46% (46 granted / 99 resolved; -8.5% vs TC avg)
Interview Lift: strong, +36.3% in resolved cases with an interview vs. without
Avg Prosecution: 4y 1m typical timeline (39 currently pending)
Total Applications: 138 (across all art units)
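The headline metrics above can be reproduced from the raw counts. A minimal Python sketch; the helper names are mine, and the with/without-interview allow rates are hypothetical values chosen only to illustrate how a +36.3-point lift would be computed (the report does not state the underlying split):

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Percentage-point difference in allow rate: interviewed vs. not."""
    return rate_with - rate_without

# 46 granted out of 99 resolved cases, as reported above.
career = allow_rate(46, 99)
print(f"Career allow rate: {career:.1f}%")  # rounds to 46.5%

# Hypothetical split by interview status, illustrating the lift arithmetic.
lift = interview_lift(rate_with=75.0, rate_without=38.7)
print(f"Interview lift: +{lift:.1f} points")  # +36.3 points
```

Note the dashboard rounds the 46.46% career rate down to 46%; the lift is a percentage-point gap, not a ratio.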

Statute-Specific Performance

§101: 15.5% (-24.5% vs TC avg)
§103: 53.8% (+13.8% vs TC avg)
§102: 13.3% (-26.7% vs TC avg)
§112: 11.2% (-28.8% vs TC avg)
Tech Center averages are estimates • Based on career data from 99 resolved cases
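The per-statute deltas are internally consistent: subtracting each delta from the examiner's rate recovers the Tech Center average estimate, which works out to the same 40.0% for every statute (however the per-statute rates are defined). A quick cross-check in Python; the dictionary names are mine, the figures are the ones in the table above:

```python
# Examiner's per-statute rate and its delta vs. the Tech Center average.
examiner_rate = {"101": 15.5, "103": 53.8, "102": 13.3, "112": 11.2}
delta_vs_avg = {"101": -24.5, "103": 13.8, "102": -26.7, "112": -28.8}

# Implied Tech Center average: rate minus delta, per statute.
tc_avg = {s: round(examiner_rate[s] - delta_vs_avg[s], 1) for s in examiner_rate}
print(tc_avg)  # every statute implies the same 40.0% TC average
```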

Office Action

§103 §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 03/24/2026 has been entered.

Claim Status

Claims 1, 7, and 13 have been amended. Claim 20 has been added. Claims 1-20 remain pending and are ready for examination.

Rejections not based on Prior Art

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission.
For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-3, 5-9, 11-15, and 17-18 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-2, 4, 7-8, 10, 13-14, 16, and 19 of copending Application No. 18/361,632 in view of Forster et al. (US20160314617A1, hereinafter Forster), in view of Kasten et al. (US20240331280A1, hereinafter Kasten), and in view of Sarkis et al. (US20160005211A1, hereinafter Sarkis).

Instant application (18/361,641):

1. A method for generating a physical object, comprising: determining a 3D content model for a virtual object that is depicted in a video game is incomplete; in response to determining the 3D content model for the virtual object is incomplete, generating a notification in a user interface indicating the 3D content model for the virtual object is incomplete; capturing two-dimensional (2D) gameplay video generated from a session of a video game, the 2D gameplay video including a depiction of the virtual object; analyzing the 2D gameplay video to identify the virtual object depicted in the 2D gameplay video; further analyzing the 2D gameplay video to determine 3D geometry of the virtual object; using the 3D geometry of the virtual object to generate a 3D model of the virtual object; storing the 3D model to a user account; and using the 3D model to generate a physical object resembling the virtual object.

5. The method of claim 1, wherein determining 3D geometry of the virtual object includes tracking the virtual object across a sequence of video frames from the 2D gameplay video.

6. The method of claim 1, wherein the virtual object is an avatar of a user of the video game.

Co-pending application (18/361,632):

1. A method for generating a view of an event in a video game, comprising: capturing two-dimensional (2D) gameplay video generated from a session of a video game; analyzing the 2D gameplay video to identify an event occurring in a scene depicted in the 2D gameplay video and identifying one or more elements involved in said event; further analyzing the 2D gameplay video to determine 3D geometry of the scene; using the 3D geometry of the scene to generate a 3D video asset of the event that occurred in the gameplay video; generating a 2D view of the 3D video asset for presentation on a display, wherein generating said 2D view includes determining a field of view (FOV) to apply for the 2D view, the FOV being configured to include the elements involved in the event.

2. The method of claim 1, wherein analyzing the 2D gameplay video to identify the event includes identifying and tracking movements of objects depicted in the 2D gameplay video.

4. The method of claim 1, wherein the elements include one or more avatars, and wherein determining the FOV includes adjusting the FOV to capture a front side of the one or more avatars.

Instant application (18/361,641):

7. A non-transitory computer readable medium having program instructions embodied thereon that, when executed by at least one computing device, cause said at least one computing device to perform a method for generating a physical object, said method including: determining a 3D content model for a virtual object that is depicted in a video game is incomplete; in response to determining the 3D content model for the virtual object is incomplete, generating a notification in a user interface indicating the 3D content model for the virtual object is incomplete; capturing two-dimensional (2D) gameplay video generated from a session of a video game, the 2D gameplay video including a depiction of the virtual object; analyzing the 2D gameplay video to identify the virtual object depicted in the 2D gameplay video; further analyzing the 2D gameplay video to determine 3D geometry of the virtual object; using the 3D geometry of the virtual object to generate a 3D model of the virtual object; storing the 3D model to a user account; and using the 3D model to generate a physical object resembling the virtual object.

11. The non-transitory computer readable medium of claim 9, wherein determining 3D geometry of the virtual object includes tracking the virtual object across a sequence of video frames from the 2D gameplay video.

12. The non-transitory computer readable medium of claim 7, wherein the virtual object is an avatar of a user of the video game.

Co-pending application (18/361,632):

7. A non-transitory computer readable medium having program instructions embodied thereon that, when executed by at least one computing device, cause said at least one computing device to perform a method for generating a view of an event in a video game, said method including: capturing two-dimensional (2D) gameplay video generated from a session of a video game; analyzing the 2D gameplay video to identify an event occurring in a scene depicted in the 2D gameplay video and identifying one or more elements involved in said event; further analyzing the 2D gameplay video to determine 3D geometry of the scene; using the 3D geometry of the scene to generate a 3D video asset of the event that occurred in the gameplay video; generating a 2D view of the 3D video asset for presentation on a display, wherein generating said 2D view includes determining a field of view (FOV) to apply for the 2D view, the FOV being configured to include the elements involved in the event.

8. The non-transitory computer readable medium of claim 7, wherein analyzing the 2D gameplay video to identify the event includes identifying and tracking movements of objects depicted in the 2D gameplay video.

10. The non-transitory computer readable medium of claim 7, wherein the elements include one or more avatars, and wherein determining the FOV includes adjusting the FOV to capture a front side of the one or more avatars.

Instant application (18/361,641):

13. A system comprising at least one computing device, said at least one computing device configured to perform a method for generating a physical object, said method including: determining a 3D content model for a virtual object that is depicted in a video game is incomplete; in response to determining the 3D content model for the virtual object is incomplete, generating a notification in a user interface indicating the 3D content model for the virtual object is incomplete; capturing two-dimensional (2D) gameplay video generated from a session of a video game, the 2D gameplay video including a depiction of the virtual object; analyzing the 2D gameplay video to identify the virtual object depicted in the 2D gameplay video; further analyzing the 2D gameplay video to determine 3D geometry of the virtual object; using the 3D geometry of the virtual object to generate a 3D model of the virtual object; storing the 3D model to a user account; and using the 3D model to generate a physical object resembling the virtual object.

17. The system of claim 13, wherein determining 3D geometry of the virtual object includes tracking the virtual object across a sequence of video frames from the 2D gameplay video.

18. The system of claim 13, wherein the virtual object is an avatar of a user of the video game.

Co-pending application (18/361,632):

13. A system comprising at least one computing device, said at least one computing device configured to perform a method for generating a view of an event in a video game, said method including: capturing two-dimensional (2D) gameplay video generated from a session of a video game; analyzing the 2D gameplay video to identify an event occurring in a scene depicted in the 2D gameplay video and identifying one or more elements involved in said event; further analyzing the 2D gameplay video to determine 3D geometry of the scene; using the 3D geometry of the scene to generate a 3D video asset of the event that occurred in the gameplay video; generating a 2D view of the 3D video asset for presentation on a display, wherein generating said 2D view includes determining a field of view (FOV) to apply for the 2D view, the FOV being configured to include the elements involved in the event.

14. The system of claim 13, wherein analyzing the 2D gameplay video to identify the event includes identifying and tracking movements of objects depicted in the 2D gameplay video.

16. The system of claim 13, wherein the elements include one or more avatars, and wherein determining the FOV includes adjusting the FOV to capture a front side of the one or more avatars.

Regarding claim 1, claim 1 of the co-pending application 18/361,632 teaches all limitations of the instant application; however, claim 1 of the co-pending application 18/361,632 does not teach determining a 3D content model for a virtual object that is depicted in a video game is incomplete; in response to determining the 3D content model for the virtual object is incomplete, generating a notification in a user interface indicating the 3D content model for the virtual object is incomplete; the 2D gameplay video including a depiction of the virtual object; storing the 3D model to a user account; using the 3D model to generate a physical object resembling the virtual object.
Forster from the same or similar field of endeavor teaches:

the 2D gameplay video including a depiction of the virtual object; (see [0188]; Forster: “Alternatively the selection techniques described herein relating to camera viewpoint and area around a nominated character such as the user's avatar may be used for example to select a more complete scene, which could be arduous to manually select using a point-and-click interface.” See [0187]; Forster: “Hence as described above this may involve queueing and reviewing through recorded video of the game or re-rendered scenes of the game to identify a specific point in time comprising a scene or object of particular interest to the user.”) [The object/a nominated character reads on ‘a depiction of the virtual object’]

storing the 3D model to a user account; (see [0167]; Forster: “In the first instance local printer drivers will generate drawing lists that may be sent securely to a central print queue server, together with meta data relating to the postal address of the user… In either of these cases, printing of the model may be contingent upon the payment of a fee, for example via a payment card registered with the entertainment device's network, or similarly may be contingent upon the receipt of a voucher which might be earned for example as a trophy or other in-game award, or as part of the purchase price of a game, entitling the user to the creation of a predetermined number of 3D models from that game.” See [0062]: “The entertainment device network account may be set up to include the user's real name and optionally other personal details, bank details for online payments, an indication of whether the current entertainment device is the primary entertainment device associated with the user account, and the ability to selectively transfer licenses between entertainment devices where the user account is associated.”)

using the 3D model to generate a physical object resembling the virtual object. (see [0137]; Forster: “The model is then sent to a 3D printer driver, which slices the model into layers from the bottom up. These layers are then successively printed by the 3D printer as described previously.”)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the teaching of the co-pending application 18/361,632 to include Forster’s features of the 2D gameplay video including a depiction of the virtual object; storing the 3D model to a user account; using the 3D model to generate a physical object resembling the virtual object. Doing so would capture dynamically generated or animated 3D models, such as those found in videogames, at a particularly memorable or significant point in time during gameplay. (Forster, [0009])

However, it does not explicitly teach: determining a 3D content model for a virtual object that is depicted in a video game is incomplete; in response to determining the 3D content model for the virtual object is incomplete, generating a notification in a user interface indicating the 3D content model for the virtual object is incomplete.

Kasten from the same or similar field of endeavor teaches: determining a 3D content model for a virtual object that is depicted in a video game is incomplete; (see [0024]; Kasten: “An incomplete point cloud 101 of a chair is captured. The incomplete point cloud 101 comprises measurements that include a set of 3D input points P={p1, p2, . . . , pN} and a text description embedding y of the incomplete object.
In an embodiment, P is captured by a depth sensor such as a depth camera or a LiDAR sensor, and internal parameters of the sensor are known.” See [0021]: “During creation of a virtual world or a game, a room may be generated that contains specific items by completing incomplete scans of objects such as furniture, weapons, etc.”)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the teaching of the co-pending application 18/361,632 and Forster to include Kasten’s features of determining a 3D content model for a virtual object that is depicted in a video game is incomplete. Doing so would reconstruct a complete 3D model of an object and maintain consistent performance. (Kasten, [0057])

However, it does not explicitly teach: in response to determining the 3D content model for the virtual object is incomplete, generating a notification in a user interface indicating the 3D content model for the virtual object is incomplete.

Sarkis from the same or similar field of endeavor teaches:

determining a 3D content model for a virtual object is incomplete; (see [0039]; Sarkis: “An anomaly detection unit 116 may analyze the 3D model to determine whether the 3D model includes an anomaly (e.g., a discontinuity of a surface, a missing or incomplete region, etc.).”)

in response to determining the 3D content model for the virtual object is incomplete, generating a notification in a user interface indicating the 3D content model for the virtual object is incomplete; (see [0039]; Sarkis: “If the anomaly detection unit 116 detects an anomaly in the 3D model, the anomaly detection unit 116 may cause the display 104 to display an indicator that identifies a location of the anomaly in the 3D model.”)

the 2D …video including a depiction of the virtual object; (see [0059]; Sarkis: “The method 700 includes, at 702, receiving, at the electronic device, a selection of an object depicted in a two-dimensional (2D) image that is representative of a scene.”)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the teaching of the co-pending application 18/361,632, Forster, and Kasten to include Sarkis’s features of determining a 3D content model for a virtual object is incomplete and, in response to determining the 3D content model for the virtual object is incomplete, generating a notification in a user interface indicating the 3D content model for the virtual object is incomplete. Doing so would generate a more complete or more accurate 3D model in order to achieve higher resolution, improved color mapping, smooth textures, and smooth edges. (Sarkis, [0004] and [0041])

This is a provisional nonstatutory double patenting rejection.

Regarding Claim 2, the combination of the co-pending application 18/361,632, Forster, Kasten, and Sarkis teaches all the limitations of claim 1 above. Forster further teaches wherein generating the physical object includes applying the 3D model to a 3D printing process. (see [0076]; Forster: “In this way the digital 3D model is rebuilt as a physical model by the 3D printer.”) The same motivation to combine the co-pending application 18/361,632 and Forster as set forth for Claim 1 equally applies to Claim 2.

Regarding Claim 3, the combination of the co-pending application 18/361,632, Forster, Kasten, and Sarkis teaches all the limitations of claim 2 above. Forster further teaches wherein generating the physical object includes exporting the 3D geometry of the virtual object to a slice file for the 3D printing process. (see [0076]; Forster: “The printer driver then generates thin slices of the 3D model one voxel thick for each layer in the y direction, and determines the x, z coordinates for each voxel in that layer.”) The same motivation to combine the co-pending application 18/361,632 and Forster as set forth for Claim 1 equally applies to Claim 3.
Regarding Claim 5, the combination of the co-pending application 18/361,632, Forster, Kasten, and Sarkis teaches all the limitations of claim 1 above. Forster further teaches wherein determining 3D geometry of the virtual object includes tracking the virtual object across a sequence of video frames from the 2D gameplay video. (see [0116]-[0117]; Forster: “It will similarly be appreciated that images of the same scene from different viewpoints can be captured by different users at different times on different entertainment devices; providing a user has access to a pooled set of images (for example if they are posted to an online forum, or are stills extracted from a ‘fly-by’ video that moves or changes viewpoints, such as may be included in a trailer video for the videogame) then an equivalent set of two or more complementary viewpoints of the virtual environment may be obtained. Given these images and optionally associated metadata relating to the viewpoint position and direction, an entertainment device can analyse these images to generate 3-D model data”) The same motivation to combine the co-pending application 18/361,632 and Forster as set forth for Claim 1 equally applies to Claim 5.

Regarding Claim 6, the combination of the co-pending application 18/361,632, Forster, Kasten, and Sarkis teaches all the limitations of claim 1 above. Forster further teaches wherein the virtual object is an avatar of a user of the video game. (see [0161]; Forster: “However, optionally the user may specify one or more objects in the environment for 3D printing alone; for example, the user may select to just print their avatar, or their avatar and an opponent.”) The same motivation to combine the co-pending application 18/361,632 and Forster as set forth for Claim 1 equally applies to Claim 6.

Claims 7-9, 11-15, and 17-18 contain similar limitations to those in claims 1-3 and 5-6 and are rejected using the same rationale.
Regarding Claim 19, the combination of the co-pending application 18/361,632, Forster, Kasten, and Sarkis teaches all the limitations of claim 1 above. Forster teaches wherein capturing the 2D gameplay video generated from the session of a video game… (see [0185]; Forster: “the operating system of the host device may provide the user with access to encoded video recordings, either directly via the operating system interface or embedded within a user interface of the game. Because encoded video is much more compact than raw video, the host device may record for example 1, 3, 5, 10, or 15 minutes of displayed video in a rolling loop (depending on prior user settings).”)

Sarkis further teaches wherein capturing the 2D …video generated …is after generating the notification indicating the 3D content model for the virtual object is incomplete. (see [0039]; Sarkis: “An anomaly detection unit 116 may analyze the 3D model to determine whether the 3D model includes an anomaly (e.g., a discontinuity of a surface, a missing or incomplete region, etc.).” See [0040]: “The 3D model optimizer 118 or the anomaly detection unit 116 may cause the display 104 to present one or more selectable options to enable correction of the anomaly… The options may include an option to activate a refiner unit 120 to enable the system 100 to capture additional images in order to correct the anomaly.”)

The same motivation to combine the co-pending application 18/361,632, Forster, Kasten, and Sarkis as set forth for Claim 1 equally applies to Claim 19.

Claims 4, 10, and 16 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over the copending Application No. 18/361,632 in view of Forster, in view of Kasten, in view of Sarkis, and in view of Stevens et al. (US20170015057A1, hereinafter Stevens).
Regarding Claim 4, the combination of the co-pending application 18/361,632, Forster, Kasten, and Sarkis teaches all the limitations of claim 3 above; however, it does not explicitly teach wherein the slice file is an STL file.

Stevens from the same or similar field of endeavor teaches wherein the slice file is an STL file. (see [0079]; Stevens: “A vertical slice through the distance field representation of the mesh 101 following application of step 402 is shown in FIG. 14.” See [0090]: “The new mesh is then saved in the present embodiment in an STL file format at step 1803, after which it is converted to GCode using a utility for such conversion of the known type.”)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the teaching of the combination of the co-pending application 18/361,632, Forster, Kasten, and Sarkis to include Stevens’s features of the slice file being an STL file. Doing so would convert files into a form suitable for 3D printing. (Stevens, [0004])

Claims 10 and 16 contain similar limitations to those in claim 4 and are rejected using the same rationale.

Claim 20 is provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over the copending Application No. 18/361,632 in view of Forster, in view of Kasten, in view of Sarkis, and in view of Newell et al. (US20220353377A1, hereinafter Newell).

Regarding Claim 20, the combination of the co-pending application 18/361,632, Forster, Kasten, and Sarkis teaches all the limitations of claim 1 above; however, it does not explicitly teach wherein a first device generates the 3D model of the virtual object, and a second, different device generates the physical object resembling the virtual object.
Newell from the same or similar field of endeavor teaches wherein a first device generates the 3D model of the virtual object (see [0043]; Newell: “the 3D printable model data has been generated based upon the sampled video image frames of the media content event by embodiments of the 3D model data generation system 100”), and a second, different device generates the physical object resembling the virtual object. (see [0043]; Newell: “The 3D printer 112, 114 may then manufacture a printed 3D object based on the generated 3D printable model data.” See [0002]: “Much like a printer prints a page of a document, a 3D printer “prints” or generates a physical 3D object that is a replica of a real-world physical object.”)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the teaching of the combination of the co-pending application 18/361,632, Forster, Kasten, and Sarkis to include Newell’s features of a first device generating the 3D model of the virtual object and a second, different device generating the physical object resembling the virtual object. Doing so would create accurate and reliable 3D model data on a user-selected physical object of interest. (Newell, [0082])

Claims 1-3, 5-9, 11-15, and 17-19 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 6-7, 12-13, and 18 of copending Application No. 18/361,624 in view of Forster et al. (US20160314617A1, hereinafter Forster), in view of Kasten et al. (US20240331280A1, hereinafter Kasten), and in view of Sarkis et al. (US20160005211A1, hereinafter Sarkis).

Instant application (18/361,641):

1. A method for generating a physical object, comprising: determining a 3D content model for a virtual object that is depicted in a video game is incomplete; in response to determining the 3D content model for the virtual object is incomplete, generating a notification in a user interface indicating the 3D content model for the virtual object is incomplete; capturing two-dimensional (2D) gameplay video generated from a session of a video game, the 2D gameplay video including a depiction of the virtual object; analyzing the 2D gameplay video to identify the virtual object depicted in the 2D gameplay video; further analyzing the 2D gameplay video to determine 3D geometry of the virtual object; using the 3D geometry of the virtual object to generate a 3D model of the virtual object; storing the 3D model to a user account; and using the 3D model to generate a physical object resembling the virtual object.

Co-pending application (18/361,624):

1. A method for generating a three-dimensional (3D) content moment from a video game, comprising: capturing two-dimensional (2D) gameplay video generated from a session of a video game; analyzing the 2D gameplay video to determine 3D geometry of a scene depicted in the 2D gameplay video; and using the 3D geometry of the scene to generate a 3D video asset of a moment that occurred in the gameplay video; and storing the 3D video asset to a user account, wherein analyzing the 2D gameplay video comprises determining a texture, a shading, or a lighting of the scene, and said determined texture, shading, or lighting is incorporated in the 3D video asset.

Instant application (18/361,641):

7. A non-transitory computer readable medium having program instructions embodied thereon that, when executed by at least one computing device, cause said at least one computing device to perform a method for generating a physical object, said method including: determining a 3D content model for a virtual object that is depicted in a video game is incomplete; in response to determining the 3D content model for the virtual object is incomplete, generating a notification in a user interface indicating the 3D content model for the virtual object is incomplete; capturing two-dimensional (2D) gameplay video generated from a session of a video game, the 2D gameplay video including a depiction of the virtual object; analyzing the 2D gameplay video to identify the virtual object depicted in the 2D gameplay video; further analyzing the 2D gameplay video to determine 3D geometry of the virtual object; using the 3D geometry of the virtual object to generate a 3D model of the virtual object; storing the 3D model to a user account; and using the 3D model to generate a physical object resembling the virtual object.

Co-pending application (18/361,624):

7. A non-transitory computer readable medium having program instructions embodied thereon that, when executed by at least one computing device, cause said at least one computing device to perform a method for generating a three-dimensional (3D) content moment from a video game, said method including: capturing two-dimensional (2D) gameplay video generated from a session of a video game; analyzing the 2D gameplay video to determine 3D geometry of a scene depicted in the 2D gameplay video; using the 3D geometry of the scene to generate a 3D video asset of a moment that occurred in the gameplay video; and storing the 3D video asset to a user account, wherein analyzing the 2D gameplay video is further configured to determine a texture, shading or lighting of the scene, and wherein said determined texture, shading, or lighting is incorporated in the 3D video asset.

Instant application (18/361,641):

13. A system comprising at least one computing device, said at least one computing device configured to perform a method for generating a physical object, said method including: determining a 3D content model for a virtual object that is depicted in a video game is incomplete; in response to determining the 3D content model for the virtual object is incomplete, generating a notification in a user interface indicating the 3D content model for the virtual object is incomplete; capturing two-dimensional (2D) gameplay video generated from a session of a video game, the 2D gameplay video including a depiction of the virtual object; analyzing the 2D gameplay video to identify the virtual object depicted in the 2D gameplay video; further analyzing the 2D gameplay video to determine 3D geometry of the virtual object; using the 3D geometry of the virtual object to generate a 3D model of the virtual object; storing the 3D model to a user account; and using the 3D model to generate a physical object resembling the virtual object.

Co-pending application (18/361,624):

13. A system comprising at least one computing device, said at least one computing device configured to perform a method for generating a three-dimensional (3D) content moment from a video game, said method including: capturing two-dimensional (2D) gameplay video generated from a session of a video game; analyzing the 2D gameplay video to determine 3D geometry of a scene depicted in the 2D gameplay video; using the 3D geometry of the scene to generate a 3D video asset of a moment that occurred in the gameplay video; and storing the 3D video asset to a user account, wherein analyzing the 2D gameplay video comprises determining a texture, a shading, or a lighting of the scene, and said determined texture, shading, or lighting is incorporated in the 3D video asset.
Regarding claim 1, claim 1 of the co-pending application 18/361,624 teaches the limitations of instant claim 1; however, it does not teach: determining a 3D content model for a virtual object that is depicted in a video game is incomplete; in response to determining the 3D content model for the virtual object is incomplete, generating a notification in a user interface indicating the 3D content model for the virtual object is incomplete; the 2D gameplay video including a depiction of the virtual object; using the 3D model to generate a physical object resembling the virtual object. Forster from the same or similar field of endeavor teaches: the 2D gameplay video including a depiction of the virtual object; (see [0188]; Forster: “Alternatively the selection techniques described herein relating to camera viewpoint and area around a nominated character such as the user's avatar may be used for example to select a more complete scene, which could be arduous to manually select using a point-and-click interface.” See [0187]; Forster: “Hence as described above this may involve queueing and reviewing through recorded video of the game or re-rendered scenes of the game to identify a specific point in time comprising a scene or object of particular interest to the user.”) [The object/a nominated character reads on ‘a depiction of the virtual object’] using the 3D model to generate a physical object resembling the virtual object. (see [0137]; Forster: “The model is then sent to a 3D printer driver, which slices the model into layers from the bottom up.
These layers are then successively printed by the 3D printer as described previously.”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the teaching of the co-pending application 18/361,624 to include Forster’s features of the 2D gameplay video including a depiction of the virtual object; using the 3D model to generate a physical object resembling the virtual object. Doing so would capture dynamically generated or animated 3D models, such as those found in videogames, at a particularly memorable or significant point in time during gameplay. (Forster, [0009]) However, it does not explicitly teach: determining a 3D content model for a virtual object that is depicted in a video game is incomplete; in response to determining the 3D content model for the virtual object is incomplete, generating a notification in a user interface indicating the 3D content model for the virtual object is incomplete; Kasten from the same or similar field of endeavor teaches: determining a 3D content model for a virtual object that is depicted in a video game is incomplete; (see [0024]; Kasten: “An incomplete point cloud 101 of a chair is captured. The incomplete point cloud 101 comprises measurements that include a set of 3D input points P={p1, p2, . . . , pN} and a text description embedding y of the incomplete object. 
In an embodiment, P is captured by a depth sensor such as a depth camera or a LiDAR sensor, and internal parameters of the sensor are known.” See [0021]: “During creation of a virtual world or a game, a room may be generated that contains specific items by completing incomplete scans of objects such as furniture, weapons, etc.”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the teaching of the co-pending application 18/361,624 and Forster to include Kasten’s features of determining a 3D content model for a virtual object that is depicted in a video game is incomplete. Doing so would reconstruct a complete 3D model of an object and maintain consistent performance. (Kasten, [0057]) However, it does not explicitly teach: in response to determining the 3D content model for the virtual object is incomplete, generating a notification in a user interface indicating the 3D content model for the virtual object is incomplete; Sarkis from the same or similar field of endeavor teaches: in response to determining the 3D content model for the virtual object is incomplete, generating a notification in a user interface indicating the 3D content model for the virtual object is incomplete; (see [0039]; Sarkis: “If the anomaly detection unit 116 detects an anomaly in the 3D model, the anomaly detection unit 116 may cause the display 104 to display an indicator that identifies a location of the anomaly in the 3D model.”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the teaching of the combination of the co-pending application 18/361,624, Forster, and Kasten to include Sarkis’s features of, in response to determining the 3D content model for the virtual object is incomplete, generating a notification in a user interface indicating the 3D content model for the virtual object is incomplete.
Doing so would generate a more complete or more accurate 3D model in order to achieve higher resolution, improved color mapping, smooth textures, and smooth edges. (Sarkis, [0004] and [0041]) This is a provisional nonstatutory double patenting rejection. Regarding Claim 2, the combination of the co-pending application 18/361,624, Forster, Kasten, and Sarkis teaches all the limitations of claim 1 above. Forster further teaches wherein generating the physical object includes applying the 3D model to a 3D printing process. (see [0076]; Forster: “In this way the digital 3D model is rebuilt as a physical model by the 3D printer.”) The same motivation to combine the co-pending application 18/361,624 and Forster as set forth for Claim 1 equally applies to Claim 2. Regarding Claim 3, the combination of the co-pending application 18/361,624, Forster, Kasten, and Sarkis teaches all the limitations of claim 2 above. Forster further teaches wherein generating the physical object includes exporting the 3D geometry of the virtual object to a slice file for the 3D printing process. (see [0076]; Forster: “The printer driver then generates thin slices of the 3D model one voxel thick for each layer in the y direction, and determines the x, z coordinates for each voxel in that layer.”) The same motivation to combine the co-pending application 18/361,624 and Forster as set forth for Claim 1 equally applies to Claim 3. Regarding Claim 5, the combination of the co-pending application 18/361,624, Forster, Kasten, and Sarkis teaches all the limitations of claim 1 above. Forster further teaches wherein determining 3D geometry of the virtual object includes tracking the virtual object across a sequence of video frames from the 2D gameplay video.
(see [0116]-[0117]; Forster: “It will similarly be appreciated that images of the same scene from different viewpoints can be captured by different users at different times on different entertainment devices; providing a user has access to a pooled set of images (for example if they are posted to an online forum, or are stills extracted from a ‘fly-by’ video that moves or changes viewpoints, such as may be included in a trailer video for the videogame) then an equivalent set of two or more complementary viewpoints of the virtual environment may be obtained. Given these images and optionally associated metadata relating to the viewpoint position and direction, an entertainment device can analyse these images to generate 3-D model data”) The same motivation to combine the co-pending application 18/361,624 and Forster as set forth for Claim 1 equally applies to Claim 5. Regarding Claim 6, the combination of the co-pending application 18/361,624, Forster, Kasten, and Sarkis teaches all the limitations of claim 1 above. Forster further teaches wherein the virtual object is an avatar of a user of the video game. (see [0161]; Forster: “However, optionally the user may specify one or more objects in the environment for 3D printing alone; for example, the user may select to just print their avatar, or their avatar and an opponent.”) The same motivation to combine the co-pending application 18/361,624 and Forster as set forth for Claim 1 equally applies to Claim 6. Claims 7-9, 11-15, and 17-18 contain limitations similar to those in claims 1-3 and 5-6 and are rejected using the same rationale.
Regarding Claim 19, the combination of the co-pending application 18/361,624, Forster, Kasten, and Sarkis teaches all the limitations of claim 1 above. Forster teaches wherein capturing the 2D gameplay video generated from the session of a video game… (see [0185]; Forster: “the operating system of the host device may provide the user with access to encoded video recordings, either directly via the operating system interface or embedded within a user interface of the game. Because encoded video is much more compact than raw video, the host device may record for example 1, 3, 5, 10, or 15 minutes of displayed video in a rolling loop (depending on prior user settings).”) Sarkis further teaches wherein capturing the 2D …video generated …is after generating the notification indicating the 3D content model for the virtual object is incomplete. (see [0039]; Sarkis: “An anomaly detection unit 116 may analyze the 3D model to determine whether the 3D model includes an anomaly (e.g., a discontinuity of a surface, a missing or incomplete region, etc.).” See [0040]: “The 3D model optimizer 118 or the anomaly detection unit 116 may cause the display 104 to present one or more selectable options to enable correction of the anomaly… The options may include an option to activate a refiner unit 120 to enable the system 100 to capture additional images in order to correct the anomaly.”) The same motivation to combine the co-pending application 18/361,624, Forster, Kasten, and Sarkis as set forth for Claim 1 equally applies to Claim 19. Claims 4, 10, and 16 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over the copending Application No. 18/361,624 in view of Forster in view of Kasten in view of Sarkis in view of Stevens et al. (US20170015057A1 -hereinafter Stevens).
Regarding Claim 4, the combination of the co-pending application 18/361,624, Forster, Kasten, and Sarkis teaches all the limitations of claim 3 above; however, it does not explicitly teach wherein the slice file is an STL file. Stevens from the same or similar field of endeavor teaches wherein the slice file is an STL file. (see [0079]; Stevens: “A vertical slice through the distance field representation of the mesh 101 following application of step 402 is shown in FIG. 14.” See [0090]: “The new mesh is then saved in the present embodiment in an STL file format at step 1803, after which it is converted to GCode using a utility for such conversion of the known type.”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the teaching of the combination of the co-pending application 18/361,624, Forster, Kasten, and Sarkis to include Stevens’s features of the slice file being an STL file. Doing so would convert files into a form suitable for 3D printing. (Stevens, [0004]) Claims 10 and 16 contain limitations similar to those in claim 4 and are rejected using the same rationale. Claim 20 is provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over the copending Application No. 18/361,624 in view of Forster in view of Kasten in view of Sarkis in view of Newell et al. (US20220353377A1 -hereinafter Newell). Regarding Claim 20, the combination of the copending Application No. 18/361,624, Forster, Kasten, and Sarkis teaches all the limitations of claim 1 above; however, it does not explicitly teach wherein a first device generates the 3D model of the virtual object, and a second, different device generates the physical object resembling the virtual object.
Newell from the same or similar field of endeavor teaches wherein a first device generates the 3D model of the virtual object (see [0043]; Newell: “the 3D printable model data has been generated based upon the sampled video image frames of the media content event by embodiments of the 3D model data generation system 100”), and a second, different device generates the physical object resembling the virtual object. (see [0043]; Newell: “The 3D printer 112, 114 may then manufacture a printed 3D object based on the generated 3D printable model data.” See [0002]: “Much like a printer prints a page of a document, a 3D printer “prints” or generates a physical 3D object that is a replica of a real-world physical object.”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the teaching of the combination of the copending Application No. 18/361,624, Forster, Kasten, and Sarkis to include Newell’s features of a first device generates the 3D model of the virtual object, and a second, different device generates the physical object resembling the virtual object. Doing so would create accurate and reliable 3D model data on a user-selected physical object of interest. (Newell, [0082]) Claims 1-3, 5-9, 11-15, and 17-19 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 6-7, 12-13, and 18 of copending Application No. 18/361,608 in view of Forster et al. (US20160314617A1 -hereinafter Forster) in view of Kasten et al. (US20240331280A1 -hereinafter Kasten) in view of Sarkis et al. (US20160005211A1 -hereinafter Sarkis).
Instant application (18/361,641), claim 1: A method for generating a physical object, comprising: determining a 3D content model for a virtual object that is depicted in a video game is incomplete; in response to determining the 3D content model for the virtual object is incomplete, generating a notification in a user interface indicating the 3D content model for the virtual object is incomplete; capturing two-dimensional (2D) gameplay video generated from a session of a video game, the 2D gameplay video including a depiction of the virtual object; analyzing the 2D gameplay video to identify the virtual object depicted in the 2D gameplay video; further analyzing the 2D gameplay video to determine 3D geometry of the virtual object; using the 3D geometry of the virtual object to generate a 3D model of the virtual object; storing the 3D model to a user account; and using the 3D model to generate a physical object resembling the virtual object.
Co-pending application (18/361,608), claim 1 (Currently Amended): A method for generating a three-dimensional (3D) content from a video game, comprising: capturing two-dimensional (2D) gameplay video generated from a session of a video game and with a first point of view; analyzing the 2D gameplay video to determine 3D geometry of a scene depicted in the 2D gameplay video; using the 3D geometry of the scene to generate a 3D video asset with a second point of view that occurred in the gameplay video; and storing the 3D video asset to a user account.
Co-pending application (18/361,608), claim 6 (Currently Amended): The method of claim 1, wherein analyzing the 2D gameplay video is further configured to determine a texture, shading or lighting of the scene, and wherein said determined texture, shading, or lighting is incorporated in the 3D video asset.
Instant application (18/361,641), claim 7: A non-transitory computer readable medium having program instructions embodied thereon that, when executed by at least one computing device, cause said at least one computing device to perform a method for generating a physical object, said method including: determining a 3D content model for a virtual object that is depicted in a video game is incomplete; in response to determining the 3D content model for the virtual object is incomplete, generating a notification in a user interface indicating the 3D content model for the virtual object is incomplete; capturing two-dimensional (2D) gameplay video generated from a session of a video game, the 2D gameplay video including a depiction of the virtual object; analyzing the 2D gameplay video to identify the virtual object depicted in the 2D gameplay video; further analyzing the 2D gameplay video to determine 3D geometry of the virtual object; using the 3D geometry of the virtual object to generate a 3D model of the virtual object; storing the 3D model to a user account; and using the 3D model to generate a physical object resembling the virtual object.
Co-pending application (18/361,608), claim 7 (Currently Amended): A non-transitory computer readable medium having program instructions embodied thereon that, when executed by at least one computing device, cause said at least one computing device to perform a method for generating a three-dimensional (3D) content from a video game, said method including: capturing two-dimensional (2D) gameplay video generated from a session of a video game and with a first point of view; analyzing the 2D gameplay video to determine 3D geometry of a scene depicted in the 2D gameplay video; using the 3D geometry of the scene to generate a 3D video asset with a second point of view that occurred in the gameplay video; and storing the 3D video asset to a user account.
Co-pending application (18/361,608), claim 12 (Currently Amended): The non-transitory computer readable medium of claim 7, wherein analyzing the 2D gameplay video is further configured to determine a texture, shading or lighting of the scene, and wherein said determined texture, shading, or lighting is incorporated in the 3D video asset.
Instant application (18/361,641), claim 13: A system comprising at least one computing device, said at least one computing device configured to perform a method for generating a physical object, said method including: determining a 3D content model for a virtual object that is depicted in a video game is incomplete; in response to determining the 3D content model for the virtual object is incomplete, generating a notification in a user interface indicating the 3D content model for the virtual object is incomplete; capturing two-dimensional (2D) gameplay video generated from a session of a video game, the 2D gameplay video including a depiction of the virtual object; analyzing the 2D gameplay video to identify the virtual object depicted in the 2D gameplay video; further analyzing the 2D gameplay video to determine 3D geometry of the virtual object; using the 3D geometry of the virtual object to generate a 3D model of the virtual object; storing the 3D model to a user account; and using the 3D model to generate a physical object resembling the virtual object.
Co-pending application (18/361,608), claim 13 (Currently Amended): A system comprising at least one computing device, said at least one computing device configured to perform a method for generating a three-dimensional (3D) content moment from a video game, said method including: capturing two-dimensional (2D) gameplay video generated from a session of a video game and with a first point of view; analyzing the 2D gameplay video to determine 3D geometry of a scene depicted in the 2D gameplay video; using the 3D geometry of the scene to generate a 3D video asset with a second point of view of a moment that occurred in the gameplay video; and storing the 3D video asset to a user account.
Co-pending application (18/361,608), claim 18 (Currently Amended): The system of claim 13, wherein analyzing the 2D gameplay video is further configured to determine a texture, shading or lighting of the scene, and wherein said determined texture, shading, or lighting is incorporated in the 3D video asset.
Regarding claim 1, claim 1 of the co-pending application 18/361,608 teaches the limitations of instant claim 1; however, it does not teach: determining a 3D content model for a virtual object that is depicted in a video game is incomplete; in response to determining the 3D content model for the virtual object is incomplete, generating a notification in a user interface indicating the 3D content model for the virtual object is incomplete; the 2D gameplay video including a depiction of the virtual object; using the 3D model to generate a physical object resembling the virtual object. Forster from the same or similar field of endeavor teaches the 2D gameplay video including a depiction of the virtual object; (see [0188]; Forster: “Alternatively the selection techniques described herein relating to camera viewpoint and area around a nominated character such as the user's avatar may be used for example to select a more complete scene, which could be arduous to manually select using a point-and-click interface.” See [0187]; Forster: “Hence as described above this may involve queueing and reviewing through recorded video of the game or re-rendered scenes of the game to identify a specific point in time comprising a scene or object of particular interest to the user.”) [The object/a nominated character reads on ‘a depiction of the virtual object’] using the 3D model to generate a physical object resembling the virtual object. (see [0137]; Forster: “The model is then sent to a 3D printer driver, which slices the model into layers from the bottom up.
These layers are then successively printed by the 3D printer as described previously.”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the teaching of the co-pending application 18/361,608 to include Forster’s features of using the 3D model to generate a physical object resembling the virtual object. Doing so would capture dynamically generated or animated 3D models, such as those found in videogames, at a particularly memorable or significant point in time during gameplay. (Forster, [0009]) However, it does not explicitly teach: determining a 3D content model for a virtual object that is depicted in a video game is incomplete; in response to determining the 3D content model for the virtual object is incomplete, generating a notification in a user interface indicating the 3D content model for the virtual object is incomplete; Kasten from the same or similar field of endeavor teaches: determining a 3D content model for a virtual object that is depicted in a video game is incomplete; (see [0024]; Kasten: “An incomplete point cloud 101 of a chair is captured. The incomplete point cloud 101 comprises measurements that include a set of 3D input points P={p1, p2, . . . , pN} and a text description embedding y of the incomplete object. In an embodiment, P is captured by a depth sensor such as a depth camera or a LiDAR sensor, and internal parameters of the sensor are known.” See [0021]: “During creation of a virtual world or a game, a room may be generated that contains specific items by completing incomplete scans of objects such as furniture, weapons, etc.”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the teaching of co-pending application 18/361,608 and Forster to include Kasten’s features of determining a 3D content model for a virtual object that is depicted in a video game is incomplete. 
Doing so would reconstruct a complete 3D model of an object and maintain consistent performance. (Kasten, [0057]) However, it does not explicitly teach: in response to determining the 3D content model for the virtual object is incomplete, generating a notification in a user interface indicating the 3D content model for the virtual object is incomplete; Sarkis from the same or similar field of endeavor teaches: in response to determining the 3D content model for the virtual object is incomplete, generating a notification in a user interface indicating the 3D content model for the virtual object is incomplete; (see [0039]; Sarkis: “If the anomaly detection unit 116 detects an anomaly in the 3D model, the anomaly detection unit 116 may cause the display 104 to display an indicator that identifies a location of the anomaly in the 3D model.”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the teaching of the combination of the co-pending application 18/361,608, Forster, and Kasten to include Sarkis’s features of, in response to determining the 3D content model for the virtual object is incomplete, generating a notification in a user interface indicating the 3D content model for the virtual object is incomplete. Doing so would generate a more complete or more accurate 3D model in order to achieve higher resolution, improved color mapping, smooth textures, and smooth edges. (Sarkis, [0004] and [0041]) This is a provisional nonstatutory double patenting rejection. Regarding Claim 2, the combination of the co-pending application 18/361,608, Forster, Kasten, and Sarkis teaches all the limitations of claim 1 above. Forster further teaches wherein generating the physical object includes applying the 3D model to a 3D printing process.
(see [0076]; Forster: “In this way the digital 3D model is rebuilt as a physical model by the 3D printer.”) The same motivation to combine the co-pending application 18/361,608 and Forster as set forth for Claim 1 equally applies to Claim 2. Regarding Claim 3, the combination of the co-pending application 18/361,608, Forster, Kasten, and Sarkis teaches all the limitations of claim 2 above. Forster further teaches wherein generating the physical object includes exporting the 3D geometry of the virtual object to a slice file for the 3D printing process. (see [0076]; Forster: “The printer driver then generates thin slices of the 3D model one voxel thick for each layer in the y direction, and determines the x, z coordinates for each voxel in that layer.”) The same motivation to combine the co-pending application 18/361,608 and Forster as set forth for Claim 1 equally applies to Claim 3. Regarding Claim 5, the combination of the co-pending application 18/361,608, Forster, Kasten, and Sarkis teaches all the limitations of claim 1 above. Forster further teaches wherein determining 3D geometry of the virtual object includes tracking the virtual object across a sequence of video frames from the 2D gameplay video. (see [0116]-[0117]; Forster: “It will similarly be appreciated that images of the same scene from different viewpoints can be captured by different users at different times on different entertainment devices; providing a user has access to a pooled set of images (for example if they are posted to an online forum, or are stills extracted from a ‘fly-by’ video that moves or changes viewpoints, such as may be included in a trailer video for the videogame) then an equivalent set of two or more complementary viewpoints of the virtual environment may be obtained.
Given these images and optionally associated metadata relating to the viewpoint position and direction, an entertainment device can analyse these images to generate 3-D model data”) The same motivation to combine the co-pending application 18/361,608 and Forster as set forth for Claim 1 equally applies to Claim 5. Regarding Claim 6, the combination of the co-pending application 18/361,608, Forster, Kasten, and Sarkis teaches all the limitations of claim 1 above. Forster further teaches wherein the virtual object is an avatar of a user of the video game. (see [0161]; Forster: “However, optionally the user may specify one or more objects in the environment for 3D printing alone; for example, the user may select to just print their avatar, or their avatar and an opponent.”) The same motivation to combine the co-pending application 18/361,608 and Forster as set forth for Claim 1 equally applies to Claim 6. Claims 7-9, 11-15, and 17-18 contain limitations similar to those in claims 1-3 and 5-6 and are rejected using the same rationale. Regarding Claim 19, the combination of the co-pending application 18/361,608, Forster, Kasten, and Sarkis teaches all the limitations of claim 1 above. Forster teaches wherein capturing the 2D gameplay video generated from the session of a video game… (see [0185]; Forster: “the operating system of the host device may provide the user with access to encoded video recordings, either directly via the operating system interface or embedded within a user interface of the game. Because encoded video is much more compact than raw video, the host device may record for example 1, 3, 5, 10, or 15 minutes of displayed video in a rolling loop (depending on prior user settings).”) Sarkis further teaches wherein capturing the 2D …video generated …is after generating the notification indicating the 3D content model for the virtual object is incomplete.
(see [0039]; Sarkis: “An anomaly detection unit 116 may analyze the 3D model to determine whether the 3D model includes an anomaly (e.g., a discontinuity of a surface, a missing or incomplete region, etc.).” See [0040]: “The 3D model optimizer 118 or the anomaly detection unit 116 may cause the display 104 to present one or more selectable options to enable correction of the anomaly… The options may include an option to activate a refiner unit 120 to enable the system 100 to capture additional images in order to correct the anomaly.”) The same motivation to combine the co-pending application 18/361,608, Forster, Kasten, and Sarkis as set forth for Claim 1 equally applies to Claim 19. Claims 4, 10, and 16 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over the copending Application No. 18/361,608 in view of Forster in view of Kasten in view of Sarkis in view of Stevens et al. (US20170015057A1 -hereinafter Stevens). Regarding Claim 4, the combination of the co-pending application 18/361,608, Forster, Kasten, and Sarkis teaches all the limitations of claim 3 above; however, it does not explicitly teach wherein the slice file is an STL file. Stevens from the same or similar field of endeavor teaches wherein the slice file is an STL file. (see [0079]; Stevens: “A vertical slice through the distance field representation of the mesh 101 following application of step 402 is shown in FIG. 14.” See [0090]: “The new mesh is then saved in the present embodiment in an STL file format at step 1803, after which it is converted to GCode using a utility for such conversion of the known type.”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the teaching of the combination of the co-pending application 18/361,608, Forster, Kasten, and Sarkis to include Stevens’s features of the slice file being an STL file.
Doing so would convert files into a form suitable for 3D printing. (Stevens, [0004]) Claims 10 and 16 contain similar limitations to those in claim 4 and are rejected using the same rationale. Claim 20 is provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over the copending Application No. 18/361,608 in view of Forster in view of Kasten in view of Sarkis in view of Newell et al. (US20220353377A1 -hereinafter Newell). Regarding Claim 20, the combination of the copending Application No. 18/361,608, Forster, Kasten, and Sarkis teaches all the limitations of claim 1 above; however, it does not explicitly teach wherein a first device generates the 3D model of the virtual object, and a second, different device generates the physical object resembling the virtual object. Newell from the same or similar field of endeavor teaches wherein a first device generates the 3D model of the virtual object (see [0043]; Newell: “the 3D printable model data has been generated based upon the sampled video image frames of the media content event by embodiments of the 3D model data generation system 100”), and a second, different device generates the physical object resembling the virtual object. (see [0043]; Newell: “The 3D printer 112, 114 may then manufacture a printed 3D object based on the generated 3D printable model data.” See [0002]: “Much like a printer prints a page of a document, a 3D printer “prints” or generates a physical 3D object that is a replica of a real-world physical object.”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the teaching of the combination of the copending Application No. 18/361,608, Forster, Kasten, and Sarkis to include Newell’s features of a first device generating the 3D model of the virtual object and a second, different device generating the physical object resembling the virtual object.
Doing so would create accurate and reliable 3D model data of a user-selected physical object of interest. (Newell, [0082]) Rejections based on Prior Art Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 1-3, 5-9, 11-15, and 17-19 are rejected under 35 U.S.C. 103 as being unpatentable over Forster et al. (US20160314617A1 -hereinafter Forster) in view of Kasten et al. (US20240331280A1 -hereinafter Kasten) in view of Sarkis et al. (US20160005211A1 -hereinafter Sarkis).
Regarding Claim 1, Forster teaches a method for generating a physical object, comprising: capturing two-dimensional (2D) gameplay video generated from a session of the video game (see [0187]; Forster: “Hence as described above this may involve queueing and reviewing through recorded video of the game or re-rendered scenes of the game to identify a specific point in time comprising a scene or object of particular interest to the user.”), the 2D gameplay video including a depiction of the virtual object; (see [0188]; Forster: “Alternatively the selection techniques described herein relating to camera viewpoint and area around a nominated character such as the user's avatar may be used for example to select a more complete scene, which could be arduous to manually select using a point-and-click interface.” See [0187]; Forster: “Hence as described above this may involve queueing and reviewing through recorded video of the game or re-rendered scenes of the game to identify a specific point in time comprising a scene or object of particular interest to the user.”) [The object/a nominated character reads on ‘a depiction of the virtual object’] analyzing the 2D gameplay video to identify the virtual object depicted in the 2D gameplay video; (see [0086]; Forster: “objects may exist in a predetermined relationship to each other without having physical connections (as exemplified by the character ‘Blobman’ in FIG. 3A, whose hands and feet are not physically attached to his torso in this figure), whilst other objects may be defined only in two dimensions within the three-dimensional environment, such as curtains, capes and in many cases environmental components such as walls.”) further analyzing the 2D gameplay video to determine 3D geometry of the virtual object; (see [0090]; Forster: “the supplementary printer geometry is defined by the game developer in a similar manner to conventional game geometry. 
For example, supplementary printer geometry for the character ‘Blobman’ may comprise rods 222A,B,C,D to connect the legs and arms to the character's torso, making the modified 3D character 220′ a contiguous body suitable for 3D printing”. See [0094]: “Similarly, supplementary printer geometry for 2D elements of the environment may be defined that has the thickness needed to provide adequate structural support for the environmental feature when 3D printed.”) using the 3D geometry of the virtual object to generate a 3D model of the virtual object; (see [0102]; Forster: “a 3D model is constructed for 3D printing using these rendered images in preference to the potentially disparate in-game representations of the virtual environment geometry.” See [0192]: “Finally, a seventh step s870 comprises generating, responsive to the retrieved respective values, a model of the selected at least part of the rendered virtual environment that is configured for 3D printing.”) storing the 3D model to a user account (see [0167]; Forster: “In the first instance local printer drivers will generate drawing lists that may be sent securely to a central print queue server, together with meta data relating to the postal address of the user… In either of these cases, printing of the model may be contingent upon the payment of a fee, for example via a payment card registered with the entertainment device's network, or similarly may be contingent upon the receipt of a voucher which might be earned for example as a trophy or other in-game award, or as part of the purchase price of a game, entitling the user to the creation of a predetermined number of 3D models from that game.” See [0062]: “The entertainment device network account may be set up to include the user's real name and optionally other personal details, bank details for online payments, an indication of whether the current entertainment device is the primary entertainment device associated with the user account, and the ability to 
selectively transfer licenses between entertainment devices where the user account is associated.”); and using the 3D model to generate a physical object resembling the virtual object. (see [0137]; Forster: “The model is then sent to a 3D printer driver, which slices the model into layers from the bottom up. These layers are then successively printed by the 3D printer as described previously.”) However, Forster does not explicitly teach: determining a 3D content model for a virtual object that is depicted in a video game is incomplete; in response to determining the 3D content model for the virtual object is incomplete, generating a notification in a user interface indicating the 3D content model for the virtual object is incomplete; Kasten from the same or similar field of endeavor teaches: determining a 3D content model for a virtual object that is depicted in a video game is incomplete; (see [0024]; Kasten: “An incomplete point cloud 101 of a chair is captured. The incomplete point cloud 101 comprises measurements that include a set of 3D input points P={p1, p2, . . . , pN} and a text description embedding y of the incomplete object. In an embodiment, P is captured by a depth sensor such as a depth camera or a LiDAR sensor, and internal parameters of the sensor are known.” See [0021]: “During creation of a virtual world or a game, a room may be generated that contains specific items by completing incomplete scans of objects such as furniture, weapons, etc.”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the teaching of Forster to include Kasten’s features of determining that a 3D content model for a virtual object that is depicted in a video game is incomplete. Doing so would reconstruct a complete 3D model of an object and maintain consistent performance.
(Kasten, [0057]) However, it does not explicitly teach: in response to determining the 3D content model for the virtual object is incomplete, generating a notification in a user interface indicating the 3D content model for the virtual object is incomplete; Sarkis from the same or similar field of endeavor teaches: in response to determining the 3D content model for the virtual object is incomplete, generating a notification in a user interface indicating the 3D content model for the virtual object is incomplete; (see [0039]; Sarkis: “If the anomaly detection unit 116 detects an anomaly in the 3D model, the anomaly detection unit 116 may cause the display 104 to display an indicator that identifies a location of the anomaly in the 3D model.”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the teaching of Forster and Kasten to include Sarkis’s features of generating, in response to determining the 3D content model for the virtual object is incomplete, a notification in a user interface indicating the 3D content model for the virtual object is incomplete. Doing so would generate a more complete or more accurate 3D model in order to achieve higher resolution, improved color mapping, smooth textures, and smooth edges. (Sarkis, [0004] and [0041]) Regarding Claim 2, the combination of Forster, Kasten, and Sarkis teaches all the limitations of claim 1 above; Forster further teaches wherein generating the physical object includes applying the 3D model to a 3D printing process. (see [0076]; Forster: “In this way the digital 3D model is rebuilt as a physical model by the 3D printer.”) Regarding Claim 3, the combination of Forster, Kasten, and Sarkis teaches all the limitations of claim 2 above; Forster further teaches wherein generating the physical object includes exporting the 3D geometry of the virtual object to a slice file for the 3D printing process.
(see [0076]; Forster: “The printer driver then generates thin slices of the 3D model one voxel thick for each layer in the y direction, and determines the x, z coordinates for each voxel in that layer.”) Regarding Claim 5, the combination of Forster, Kasten, and Sarkis teaches all the limitations of claim 1 above; Forster further teaches wherein determining 3D geometry of the virtual object includes tracking the virtual object across a sequence of video frames from the 2D gameplay video. (see [0116]-[0117]; Forster: “It will similarly be appreciated that images of the same scene from different viewpoints can be captured by different users at different times on different entertainment devices; providing a user has access to a pooled set of images (for example if they are posted to an online forum, or are stills extracted from a ‘fly-by’ video that moves or changes viewpoints, such as may be included in a trailer video for the videogame) then an equivalent set of two or more complementary viewpoints of the virtual environment may be obtained. Given these images and optionally associated metadata relating to the viewpoint position and direction, an entertainment device can analyse these images to generate 3-D model data”) Regarding Claim 6, the combination of Forster, Kasten, and Sarkis teaches all the limitations of claim 1 above; Forster further teaches wherein the virtual object is an avatar of a user of the video game. (see [0161]; Forster: “However, optionally the user may specify one or more objects in the environment for 3D printing alone; for example, the user may select to just print their avatar, or their avatar and an opponent.”) Regarding Claim 7, the limitations in this claim are taught by the combination of Forster, Kasten, and Sarkis as discussed in connection with claim 1. Regarding Claim 8, the limitations in this claim are taught by the combination of Forster, Kasten, and Sarkis as discussed in connection with claim 2.
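The slicing step Forster describes in [0076] and [0137] — cutting a voxelized model into one-voxel-thick layers along the y axis and collecting the x, z coordinates for each layer — can be illustrated with a minimal sketch. The function name and the toy voxel cube below are hypothetical, not taken from any cited reference:

```python
# Minimal sketch of the layer-slicing step described in Forster [0076]:
# group a voxelized model's filled cells into one-voxel-thick layers
# along the y axis, recording the (x, z) coordinates in each layer.
from collections import defaultdict

def slice_voxels(voxels):
    """voxels: iterable of (x, y, z) integer cells. Returns {y: [(x, z), ...]}."""
    layers = defaultdict(list)
    for x, y, z in voxels:
        layers[y].append((x, z))
    # Return layers bottom-up, the order a 3D printer deposits them.
    return dict(sorted(layers.items()))

# A tiny 2x2x2 cube of voxels:
cube = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
print(slice_voxels(cube))
```

Each key is one printable layer; a real printer driver would additionally rasterize each layer into tool paths.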
Regarding Claim 9, the limitations in this claim are taught by the combination of Forster, Kasten, and Sarkis as discussed in connection with claim 3. Regarding Claim 11, the limitations in this claim are taught by the combination of Forster, Kasten, and Sarkis as discussed in connection with claim 5. Regarding Claim 12, the limitations in this claim are taught by the combination of Forster, Kasten, and Sarkis as discussed in connection with claim 6. Regarding Claim 13, the limitations in this claim are taught by the combination of Forster, Kasten, and Sarkis as discussed in connection with claim 1. Regarding Claim 14, the limitations in this claim are taught by the combination of Forster, Kasten, and Sarkis as discussed in connection with claim 2. Regarding Claim 15, the limitations in this claim are taught by the combination of Forster, Kasten, and Sarkis as discussed in connection with claim 3. Regarding Claim 17, the limitations in this claim are taught by the combination of Forster, Kasten, and Sarkis as discussed in connection with claim 5. Regarding Claim 18, the limitations in this claim are taught by the combination of Forster, Kasten, and Sarkis as discussed in connection with claim 6. Regarding Claim 19, the combination of Forster, Kasten, and Sarkis teaches all the limitations of claim 1 above; Forster teaches wherein capturing the 2D gameplay video generated from the session of a video game… (see [0185]; Forster: “the operating system of the host device may provide the user with access to encoded video recordings, either directly via the operating system interface or embedded within a user interface of the game. Because encoded video is much more compact than raw video, the host device may record for example 1, 3, 5, 10, or 15 minutes of displayed video in a rolling loop (depending on prior user settings).”) Sarkis further teaches wherein capturing the 2D …video generated …is after generating the notification indicating the 3D content model for the virtual object is incomplete.
(see [0039]; Sarkis: “An anomaly detection unit 116 may analyze the 3D model to determine whether the 3D model includes an anomaly (e.g., a discontinuity of a surface, a missing or incomplete region, etc.).” See [0040]: “The 3D model optimizer 118 or the anomaly detection unit 116 may cause the display 104 to present one or more selectable options to enable correction of the anomaly… The options may include an option to activate a refiner unit 120 to enable the system 100 to capture additional images in order to correct the anomaly.”) The same motivation to combine Forster, Kasten, and Sarkis as set forth for Claim 1 equally applies to Claim 19. Claims 4, 10, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Forster in view of Kasten in view of Sarkis in view of Stevens et al. (US20170015057A1 -hereinafter Stevens). Regarding Claim 4, the combination of Forster, Kasten, and Sarkis teaches all the limitations of claim 3 above; however, it does not explicitly teach wherein the slice file is an STL file. Stevens from the same or similar field of endeavor teaches wherein the slice file is an STL file. (see [0079]; Stevens: “A vertical slice through the distance field representation of the mesh 101 following application of step 402 is shown in FIG. 14.” See [0090]: “The new mesh is then saved in the present embodiment in an STL file format at step 1803, after which it is converted to GCode using a utility for such conversion of the known type.”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the teaching of the combination of Forster, Kasten, and Sarkis to include Stevens’s features of the slice file being an STL file. Doing so would convert files into a form suitable for 3D printing. (Stevens, [0004]) Regarding Claim 10, the limitations in this claim are taught by Forster, Kasten, Sarkis, and Stevens as discussed in connection with claim 4.
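The STL format at issue in claim 4 (and in the Stevens mesh-to-STL-to-GCode flow quoted above) is simple enough to sketch: an ASCII STL file is a list of triangular facets, each with a normal and three vertices. The writer below is illustrative only; the function name and single-triangle mesh are hypothetical:

```python
# Minimal ASCII STL writer: each facet is a normal plus three vertices.
# A pipeline like the one Stevens describes would hand the resulting
# file to a slicer for conversion to GCode.

def write_ascii_stl(name, triangles):
    """triangles: list of ((nx, ny, nz), (v1, v2, v3)) with 3-float vertices."""
    lines = [f"solid {name}"]
    for normal, verts in triangles:
        lines.append("  facet normal %g %g %g" % normal)
        lines.append("    outer loop")
        for v in verts:
            lines.append("      vertex %g %g %g" % v)
        lines.append("    endloop")
        lines.append("  endfacet")
    lines.append(f"endsolid {name}")
    return "\n".join(lines)

# One triangle in the z=0 plane, normal pointing up:
tri = [((0.0, 0.0, 1.0),
        ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)))]
print(write_ascii_stl("demo", tri))
```

Binary STL (an 80-byte header plus packed 50-byte facet records) is more common in practice, but the ASCII form makes the facet structure visible.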
Regarding Claim 16, the limitations in this claim are taught by Forster, Kasten, Sarkis, and Stevens as discussed in connection with claim 4. Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Forster in view of Kasten in view of Sarkis in view of Newell et al. (US20220353377A1 -hereinafter Newell). Regarding Claim 20, the combination of Forster, Kasten, and Sarkis teaches all the limitations of claim 1 above; however, it does not explicitly teach wherein a first device generates the 3D model of the virtual object, and a second, different device generates the physical object resembling the virtual object. Newell from the same or similar field of endeavor teaches wherein a first device generates the 3D model of the virtual object (see [0043]; Newell: “the 3D printable model data has been generated based upon the sampled video image frames of the media content event by embodiments of the 3D model data generation system 100”), and a second, different device generates the physical object resembling the virtual object. (see [0043]; Newell: “The 3D printer 112, 114 may then manufacture a printed 3D object based on the generated 3D printable model data.” See [0002]: “Much like a printer prints a page of a document, a 3D printer “prints” or generates a physical 3D object that is a replica of a real-world physical object.”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the teaching of the combination of Forster, Kasten, and Sarkis to include Newell’s features of a first device generating the 3D model of the virtual object and a second, different device generating the physical object resembling the virtual object. Doing so would create accurate and reliable 3D model data of a user-selected physical object of interest.
(Newell, [0082]) Response to Arguments Applicant’s arguments with respect to the claim rejection(s) of the independent claim(s) have been fully considered and are persuasive because of the amendments. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Mackowiak (US20160132275A1) discloses a game system having the abilities to link and interact with a 3D printer, as well as to manipulate and edit digital aspects of the game in relation with 3D printed objects. Pleiman (US12417262B2) discloses receiving a request from the user to 3D print the limited-edition virtual object. Wang (US9773302B2) discloses generating a model of a 3D object from a 2D image. Any inquiry concerning this communication or earlier communications from the examiner should be directed to VI N TRAN whose telephone number is (571)272-1108. The examiner can normally be reached Mon-Fri 9:00-5:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ROBERT FENNEMA can be reached at (571) 272-2748. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. 
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /V.N.T./Examiner, Art Unit 2117 /ROBERT E FENNEMA/Supervisory Patent Examiner, Art Unit 2117

Prosecution Timeline

Jul 28, 2023
Application Filed
Sep 12, 2025
Non-Final Rejection — §103, §DP
Dec 11, 2025
Response Filed
Dec 29, 2025
Final Rejection — §103, §DP
Jan 27, 2026
Interview Requested
Feb 05, 2026
Applicant Interview (Telephonic)
Feb 05, 2026
Examiner Interview Summary
Mar 02, 2026
Response after Non-Final Action
Mar 24, 2026
Request for Continued Examination
Mar 26, 2026
Response after Non-Final Action
Mar 28, 2026
Non-Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12528200
LIGHT FOR TEACH PENDANT AND/OR ROBOT
2y 5m to grant Granted Jan 20, 2026
Patent 12523972
Event Engine for Building Management System Using Distributed Devices and Blockchain Ledger
2y 5m to grant Granted Jan 13, 2026
Patent 12525808
TIME-SHIFTING OPTIMIZATIONS FOR RESOURCE GENERATION AND DISPATCH
2y 5m to grant Granted Jan 13, 2026
Patent 12494653
CONTROLLING A HYBRID POWER PLANT
2y 5m to grant Granted Dec 09, 2025
Patent 12467818
DETECTING GAS LEAKS FROM IMAGE DATA AND LEAK DETECTION MODELS
2y 5m to grant Granted Nov 11, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

3-4
Expected OA Rounds
46%
Grant Probability
83%
With Interview (+36.3%)
4y 1m
Median Time to Grant
High
PTA Risk
Based on 99 resolved cases by this examiner. Grant probability derived from career allow rate.
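The panel's headline figures appear to follow from simple arithmetic on the stated career data (46 granted of 99 resolved, plus the +36.3-point interview lift). A minimal sketch, assuming the with-interview probability is just the base rate plus the stated lift (an assumption about this tool's methodology, not a documented formula):

```python
# Sketch of how the projection panel's figures appear to be derived
# (assumed methodology: base rate = career allow rate, with-interview
# rate = base rate + stated interview lift in percentage points).
granted, resolved = 46, 99
interview_lift = 36.3  # percentage points, as shown in the panel

base = 100 * granted / resolved
with_interview = base + interview_lift
print(round(base), round(with_interview))  # 46 83
```

This reproduces the 46% grant probability and the 83% with-interview figure shown above; the OA-round and time-to-grant projections presumably come from the examiner's timeline data rather than this arithmetic.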
