Prosecution Insights
Last updated: April 19, 2026
Application No. 18/739,852

SYSTEMS AND METHODS FOR USE IN FILMING

Non-Final OA — §102, §103
Filed: Jun 11, 2024
Examiner: LI, JAI WEI TOMMY
Art Unit: 2613
Tech Center: 2600 — Communications
Assignee: FD IP & Licensing LLC
OA Round: 1 (Non-Final)
Grant Probability: Favorable
OA Rounds: 1-2
Time to Grant: 2y 9m

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 0 resolved; -62.0% vs Tech Center average)
Interview Lift: +0.0% (minimal; allow rate with vs. without an interview, across resolved cases with an interview)
Avg Prosecution: 2y 9m typical timeline (9 currently pending)
Career History: 9 total applications across all art units
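
These cards reduce to simple ratios over the examiner's resolved cases. As a minimal sketch of how such metrics are commonly computed (the record shape and the lift definition below are our assumptions, not this dashboard's published methodology):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ResolvedCase:      # hypothetical record shape
    granted: bool
    had_interview: bool

def allow_rate(cases: list[ResolvedCase]) -> Optional[float]:
    """Career allow rate = granted / resolved. With zero resolved cases the
    rate is undefined (None); a dashboard showing 0% here is a display
    convention, not a measurement."""
    if not cases:
        return None
    return sum(c.granted for c in cases) / len(cases)

def interview_lift(cases: list[ResolvedCase]) -> Optional[float]:
    """Assumed definition: allow rate among resolved cases with an examiner
    interview minus allow rate among those without."""
    with_iv = [c for c in cases if c.had_interview]
    without_iv = [c for c in cases if not c.had_interview]
    a, b = allow_rate(with_iv), allow_rate(without_iv)
    return None if a is None or b is None else a - b
```

Since this examiner has 0 resolved cases, the 0% allow rate and the -62.0% delta above are best read as placeholders rather than measurements.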

Statute-Specific Performance

§103: 46.2% (+6.2% vs Tech Center average)
§102: 53.9% (+13.9% vs Tech Center average)
(The chart's black line marks the Tech Center average estimate; figures are based on career data from 0 resolved cases.)
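
Each delta is the examiner's statute-specific rate minus the Tech Center average estimate, in percentage points; working backwards, both figures shown above imply the same estimated TC average of 40.0%. A one-function sketch (the 40.0 baseline is inferred from the displayed deltas, not published by the dashboard):

```python
def vs_tc_average(examiner_rate_pct: float, tc_average_pct: float) -> str:
    """Format the statute-specific delta in percentage points."""
    return f"{examiner_rate_pct - tc_average_pct:+.1f}% vs TC avg"

print(vs_tc_average(46.2, 40.0))  # §103 -> +6.2% vs TC avg
print(vs_tc_average(53.9, 40.0))  # §102 -> +13.9% vs TC avg
```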

Office Action

§102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Drawings

The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they do not include the following reference sign(s) mentioned in the specification:
- Paragraph 57 describes a network 253 as shown in FIG. 2; however, no network 253 label appears in FIG. 2.
- Paragraphs 77-78 describe even and odd video frame cache locations 346-348; however, no label 347 appears.
- Paragraph 83 describes even and odd video cache locations 338-340; however, no label 339 appears.
- Paragraph 137 describes the digital engine 450 as shown in FIG. 5; however, no digital engine 450 label appears in either FIG. 4 or FIG. 5.
- Paragraph 153 describes the digital asset 208 as shown in FIG. 1; however, no digital asset 208 label appears in either FIG. 1 or FIG. 2.
- Paragraph 165 describes a remote memory storage device 1354; however, no label 1354 appears.
- Paragraph 168 describes the devices 1404-1408; however, no labels 1403, 1405, and 1407 appear.

Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either "Replacement Sheet" or "New Sheet" pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

The drawings are also objected to as failing to comply with 37 CFR 1.84(p)(4) because reference characters are used inconsistently:
- Paragraph 162 mentions a hard disk drive interface 1326, a magnetic disk drive interface 1328, and an optical drive interface 1330, which have been used to designate drive interface 1326, drive interface 1328, and drive interface 1330.
- Paragraph 168 mentions cloud computing nodes 1402 and a computing nodes 1402, which have been used to designate Node 1402.

Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required, as set out in the preceding paragraph; this objection to the drawings will likewise not be held in abeyance.

Specification

The disclosure is objected to because of the following typographic errors; appropriate correction is required for each:
- Paragraph 41: "prop device 1204" should be "prop device 104".
- Paragraph 46: "a computing system 1300" should be "a computer 1300"; "scene related information frames 214" should be "scene related frames 214".
- Paragraphs 48-49 and 57-58: "scene related information frames 214" should be "scene related frames 214".
- Paragraph 51: "scene related information frames 214" should be "scene related frames 214"; "selected scene related information frame 228, as shown in FIG. 1" should be "selected scene related frame 228, as shown in FIG. 2".
- Paragraphs 54-55: "selected scene related information frame 228" should be "selected scene related frame 228".
- Paragraphs 64-65, 70, 77-80, 83, 85, 89, 91, and 101: "GPU 318" should be "graphics processing unit (GPU) 318".
- Paragraphs 66, 106, and 110: "video camera 406" should be "camera 406".
- Paragraphs 67 and 111: "scene related information frame 334" should be "scene related frame 334"; "scene related information frames 214" should be "scene related frames 214".
- Paragraphs 68, 82, 86, 92-94, 96, and 102: "scene related information frame 334" should be "scene related frame 334".
- Paragraph 69: "GPU 318" should be "graphics processing unit (GPU) 318"; "memory 339" should be "system memory 339"; "system bus 330" should be "bus 330"; "scene related information frame 334" should be "scene related frame 334".
- Paragraphs 71, 87-88, 90, and 95: "scene related information frame 334" should be "scene related frame 334"; "GPU 318" should be "graphics processing unit (GPU) 318".
- Paragraph 84: "GPU 318" should be "graphics processing unit (GPU) 318"; "memory 339" should be "system memory 339"; "scene related information frame 334" should be "scene related frame 334".
- Paragraph 105: "rig tracking data 414" should be "rig data 414"; "video camera 406" should be "camera 406".
- Paragraph 112: "rig tracking data 414" should be "rig data 414"; "the network 353, as shown in FIG. 2" should be "the network 353, as shown in FIG. 3"; "video camera 406" should be "camera 406".
- Paragraph 113: "rig tracking data 414" should be "rig data 414".
- Paragraph 115: "3D model 442" should be "recreated 3D model 442".
- Paragraph 132 (objected to twice in the original action): "texture map component 616" should be "texture map 616"; "light and shader component 618" should be "light and shader 618".
- Paragraph 133: "light and shader component 618" and "the shader component 618" should be "light and shader 618".
- Paragraph 151: "the compositing engine 200, as shown in FIG. 1" should be "the compositing engine 200, as shown in FIG. 2".
- Paragraph 153: "scene related information frames 214, as shown on FIG. 2" should be "scene related frames 214, as shown on FIG. 2"; "the digital asset 208, as shown on FIG. 1" should be "scene related frames 214, as shown on FIG. 2".
- Paragraph 156: "the diorama pipeline 500" should be "the diorama pipeline 600".
- Paragraph 160: "a computing system 1300" should be "a computer 1300".
- Paragraph 161: "a processing unit 1302" should be "a processor 1302"; "a memory 1304" should be "system memory 1304"; "a system bus 1306" should be "bus 1306"; "a basic input/output system (BIOS) 1314" should be "BIOS 1314"; "a computing system 1300" should be "a computer 1300".
- Paragraph 162: "a computing system 1300" should be "a computer 1300"; "GPU 318" should be "graphics processing unit (GPU) 318".
- Paragraph 163: "the program data 1332" should be "operating system 1332"; "GPU 318" should be "graphics processing unit (GPU) 318".
- Paragraph 164: "processing unit 1302" should be "a processor 1302"; "port interface 1342" should be "an interface/bus 1342"; "output device 1344" should be "output 1344"; "system bus 1306" should be "bus 1306"; "interface 1346" should be "interface/adapter 1346".
- Paragraph 165: "a computing system 1300" should be "a computer 1300".

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless - (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-6 and 8-10 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Thurston, III et al. (U.S. Pub. No. 2023/0186550).
Regarding claim 1, Thurston discloses a computer-implemented method comprising (paragraph 11, lines 1-2: "computer-implemented method"): providing a scene model that is a virtual representation of a scene based on depth and color data captured for the scene (paragraph 9, lines 9-11: "One general aspect includes a computer-implemented method of generating a virtual scene rendering usable in a captured scene"; also, paragraph 9, lines 11-14: "a camera position of a camera, light sensor, image capture device, etc., in a stage environment"; also, paragraph 9, lines 23-24: "determining a depth value for a given virtual scene element"; also, paragraph 12, lines 40-42: "In some implementations, the first calibration image includes shapes of varying sizes, line weights, or colors"); creating a miniaturized version of the scene model corresponding to a diorama of the scene (paragraph 9, lines 19-21: "the virtual scene to be presented on the virtual scene display while the camera captures imagery of the stage environment"); setting a virtual camera with respect to the scene model to provide a perspective view of the diorama (paragraph 13, lines 1-6: "a camera position of a camera in a stage environment that is to be used to capture the captured scene; determining a display position of a virtual scene display in the stage environment"); and causing the diorama of the scene to be outputted at the perspective view on an output device (paragraph 114, lines 1-11: "During or following the capture of a live action scene, live action capture system 1202 might output live action footage to a live action footage storage 1220. A live action processing system 1222 might process live action footage to generate data about that live action footage and store that data into a live action metadata storage 1224. Live action processing system 1222 might include computer processing capabilities, image processing capabilities, one or more processors, program code storage for storing program instructions executable by the one or more processors, as well as user input devices and user output devices").

Regarding claim 2, Thurston discloses the computer-implemented method of claim 1, wherein said causing comprises generating augmented video data comprising one or more composited video frames with the diorama (paragraph 37, lines 6-14: "This image may be entirely or partially computer generated and/or animated, may be captured earlier from a live action scene, or a combination thereof. The precursor image may be a single image displayed on the display wall or may be a sequence of images, such as frames of a video or animation. The precursor image may include precursor metadata for computer generated imagery").

Regarding claim 3, Thurston discloses the computer-implemented method of claim 2, wherein said generating augmented video data comprises compositing one or more video frames provided by a video camera and the diorama to provide the augmented video data (paragraph 37, lines 6-14, as quoted for claim 2).

Regarding claim 4, Thurston discloses the computer-implemented method of claim 2, wherein the virtual camera is further set based on camera viewpoint data for the video camera to specify the perspective view of the diorama (paragraph 64, lines 1-6: "A virtual scene (e.g., virtual scene 106 of FIG. 1) is a scene described by computer-readable data structures that may include virtual scene elements (e.g., virtual objects 180 and 190 of FIG. 1), lighting information, one or more virtual camera viewpoints, and one or more virtual cameras view frame").

Regarding claim 5, Thurston discloses the computer-implemented method of claim 4, wherein the perspective view includes a top down view, an oblique view, or a slanted view (paragraph 6, lines 6-9: "stereoscopic imaging may be used to capture scenes as they would be viewed from different angles, and therefore add depth and 3D elements to the captured images and video").

Regarding claim 6, Thurston discloses the computer-implemented method of claim 4, wherein the camera is a first video camera, the output device is a first output device, the virtual camera is a first virtual camera, and the perspective view is a first perspective view, and the method further comprising: setting a second virtual camera in the virtual environment with respect to the scene model to provide a second perspective view of the diorama based on camera viewpoint data for a second video camera (paragraph 71, lines 3-10: "a virtual scene 106, in accordance with at least one implementation of the present disclosure. Visible are first physical object 160, second physical object 170, first virtual object 180, second virtual object 190, first camera 120a, second camera 120b, first viewing frustum 130a, second viewing frustum 130b, first virtual viewing frustum 140a, and second virtual viewing frustum 140b").

Regarding claim 8, Thurston discloses the computer-implemented method of claim 2, further comprising inserting one or more digital assets into the scene model representative of digital assets to be used in the scene (paragraph 3, lines 12-21: "objects that may be placed in a background scene and/or with a live action scene can comprise many individual objects, which may have their own lighting effects, colors, and/or interactions with live actors. For example, a scene involving an explosion or other intense light may have features that cause colors to be projected onto live actors. Background scenes may also involve stage elements and/or creatures that interact with live actors, such as by acting as an environment and/or engaging with live actors").

Regarding claim 9, Thurston discloses the computer-implemented method of claim 8, wherein the one or more digital assets is a first digital asset, the method further comprising inserting a second digital asset representative of an actor into the scene model, wherein movements of the second digital asset in the scene model are synced to movements of the actor (paragraph 3, lines 12-21, and paragraph 71, lines 3-10, both as quoted above).

Regarding claim 10, Thurston discloses the computer-implemented method of claim 1, wherein the diorama is animated to provide a visual representation of the scene (paragraph 12, lines 29-53: "In some implementations, the first calibration image includes at least one of lines, circles, polygons, and/or photographic images. In some implementations, the first calibration image includes shapes of varying sizes, line weights, or colors. In some implementations, the first calibration image is two-dimensional. In some implementations, the first calibration image includes two-dimensional elements at different depths or plane orientations. In some implementations, the first calibration image is three-dimensional. In some implementations, the first calibration image is positioned in the virtual scene at a depth of a surface of the virtual scene display. In some implementations, the first calibration image is positioned in the virtual scene at a depth different than a surface of the virtual scene display. In some implementations, the first calibration image is animated.").

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 7 and 11-20 are rejected under 35 U.S.C. 103 as being unpatentable over Thurston, III et al. (U.S. Pub. No. 2023/0186550) in view of Major et al. (U.S. Pub. No. 2021/0350634).
Regarding claim 7, Thurston discloses the computer-implemented method of claim 6, and further discloses the first and second output devices (paragraph 71, lines 3-10, as quoted for claim 6). Thurston does not disclose the first and second output devices being mobile devices. However, in a similar field of endeavor, Major discloses that the first and second output devices are mobile devices by indicating that the cameras could be located on mobile devices (paragraph 76, lines 16-20: "output devices 630 may include various output subsystems, such as one or more displays, speakers, and/or the like. Other components may be similarly coupled to and/or otherwise implemented in computer system"; also, paragraph 75, lines 4-6: "the viewable model generator 160, may be implemented using one or more instances of the computer system"; also, paragraph 18, lines 8-9: "the camera 110 could be located on a mobile device"; also, paragraph 18, lines 14-15: "viewable model generator 160 may both be executed on a mobile device that includes the camera"). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified Thurston's first and second output devices with Major's feature of cameras located on a mobile device. As demonstrated by Major, one could specify the use of cameras that are located on a mobile device.

Regarding claim 11, Thurston discloses a computer-implemented method (paragraph 11, lines 1-2: "computer-implemented method") comprising: receiving waypoint instructions identifying virtual points for a digital asset for use in a scene; updating a scene model to include the virtual points at locations in the scene model corresponding to locations in the scene; providing augmented video data comprising one or more composited video frames with the virtual points in the scene (paragraph 64, lines 1-6, as quoted for claim 4); and causing the augmented video data to be rendered on an output device (paragraph 114, lines 1-11, as quoted for claim 1). Thurston does not disclose receiving waypoint instructions identifying virtual points for a digital asset for use in a scene, the one or more composited video frames with the virtual points, or the scene-based waypoint scene mode. However, in a similar field of endeavor, Major discloses receiving waypoint instructions identifying virtual points for a digital asset for use in a scene (paragraph 41, lines 1-4: "Some embodiments may allow visual or interactive enhancements to be added to the images 116. For example, some embodiments may include “hotspots” or “sprites” that may act as controls when presented in a 2D interface"; also, paragraph 41, lines 4-7: "FIG. 3 illustrates two hotspots that have been added as part of the virtual object 300. Hotspot 350 may be placed on a front surface of the virtual object 300, while hotspot 352 may be placed on a top surface of the virtual object 300"). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified Thurston's computer-implemented method with Major's features of receiving waypoint instructions identifying virtual points for a digital asset for use in a scene and compositing video frames with those virtual points in a scene-based waypoint scene mode. As demonstrated by Major, one could add support for waypoint-based instructions that identify virtual points for a virtual object, include the virtual object containing those virtual points in one or more composited video frames, and display the virtual points from the augmented video data in a scene-based waypoint scene mode.

Regarding claim 12, Thurston as modified by Major discloses the computer-implemented method of claim 11, further comprising receiving animation instructions identifying an animation of a digital asset between neighboring virtual points of the virtual points, wherein the waypoint scene model is provided with data specifying the animation of the digital asset between the neighboring virtual points (Major: paragraph 29, lines 1-13: "Other scripts may perform a sequence of movements or animations of the virtual object 210 that extend beyond simple rotations and movements. These scripts may move individual components of the virtual object 210. Examples of movements that can be captured in a set of images to form a viewable model include: opening and closing a door of a virtual vehicle, creating an exploded view showing different parts of a virtual object (e.g., to show how the real object is assembled), manipulating a virtual chair between folded and unfolded states, moving a virtual train along a physical train track or physical model of a train track, and rotating a virtual object about one or more axes of rotation."; also, paragraph 46, lines 1-15: "FIG. 4 illustrates an example of a virtual object 400 being manipulated, according to certain embodiments. In the example of FIG. 4, the virtual object includes a part 410 that is manipulated to gradually slide out of an opening 408 in the virtual object 400. The movement depicted in FIG. 4 can be captured as a set of images (e.g., the images 116) to form a viewable model that shows the part 410 sliding out of the opening 408. For example, the viewable model may be displayed as an interactive presentation (e.g., an animation controlled in a similar manner to a slide image) or a non-interactive video. FIG. 4 is a simple example. In practice, a virtual object can include numerous parts that can be manipulated in different ways. For instance, a virtual object may include parts that freely rotate, parts that swivel or pivot about a fixed point, parts that interlock, and so on").

Regarding claim 13, Thurston as modified by Major discloses the computer-implemented method of claim 12, wherein said providing comprises generating the augmented video data with composited frames with the virtual points in the scene and the digital asset between the neighboring virtual points (Thurston: paragraph 64, lines 1-6, as quoted for claim 4). Thurston does not disclose the composited frames with the virtual points in the scene and the digital asset between the neighboring virtual points. However, in a similar field of endeavor, Major discloses composited frames with the virtual points in the scene and the digital asset between the neighboring virtual points (paragraph 41, lines 1-4, as quoted for claim 11; also, paragraph 45, lines 1-24: "As images are rendered using the method described above, these images may include a list of coordinates or regions in the 2D images associated with the hotspots 350, 352. For example, one of the 2D images depicting the virtual object 300 may include coordinate locations or regions that include hotspots 350 and 352. These coordinate locations or regions may be stored in a table with corresponding actions. For example, the table may include coordinates or regions for hotspot 350, along with a URL to be displayed in a browser, text to be displayed in a pop-up window, functions to be called in a function library, and/or any other link or description of one or more of the actions described above. When the rendered 2D images 116 are transmitted as part of the viewable model 118, the viewable model 118 may include coordinate locations of the hotspots 350, 352. When a corresponding spin image is displayed for a user, the hotspots 350, 352 may be visible as part of the rendered 2D images that are displayed sequentially as the spin image is rotated. When the user hovers over, clicks on, or otherwise selects a region in the spin image that includes one of the hotspots 350, 352, the server may determine that the user selection falls within the coordinates or region corresponding to that hotspot. The server may then execute the corresponding action associated with that hotspot."). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified Thurston's generation of the augmented video data with Major's composited frames containing the virtual points in the scene and the digital asset between the neighboring virtual points. As demonstrated by Major, one could add support for composited frames in a scene with the virtual points and a virtual object between the neighboring virtual points.

Regarding claim 14, Thurston as modified by Major discloses the computer-implemented method of claim 13, wherein said causing comprises causing the augmented video to be rendered on the output device to provide a visual animation of the digital asset at and/or between the neighboring points based on the waypoint scene model (Major: paragraph 6, lines 2-3: "a viewable model generated according to the AR techniques described herein can be output as any form of animation or as a still image"; also, paragraph 43, lines 1-2: "As the virtual object 300 is rendered as part of the AR scene"; also, paragraph 44, lines 10-12: "user hovers over or clicks on the rendered image of the hotspot 350 in the AR scene that is displayed on the user device"; also, paragraph 46, lines 1-15, as quoted for claim 12).

Regarding claim 15, Thurston as modified by Major discloses the computer-implemented method of claim 11, further comprising: creating a miniaturized version of the waypoint scene model corresponding to a diorama of the scene (Thurston: paragraph 9, lines 19-21, as quoted for claim 1); and setting a virtual camera with respect to the scene model to provide a perspective view of the diorama (Thurston: paragraph 13, lines 1-6, as quoted for claim 1).

Regarding claim 16, Thurston as modified by Major discloses the computer-implemented method of claim 15, wherein said augmented video data is provided with the one or more composited video frames with the diorama (Thurston: paragraph 37, lines 6-14, as quoted for claim 2).

Regarding claim 17, Thurston as modified by Major discloses the computer-implemented method of claim 16, wherein said providing augmented video data comprises compositing one or more video frames provided by a video camera and the diorama to provide the augmented video data (Thurston: paragraph 37, lines 6-14, as quoted for claim 2).

Regarding claim 18, Thurston as modified by Major discloses the computer-implemented method of claim 17, wherein the virtual camera is further set based on camera viewpoint data for the video camera to specify the perspective view of the diorama (Thurston: paragraph 64, lines 1-6, as quoted for claim 4).

Regarding claim 19, Thurston as modified by Major discloses the computer-implemented method of claim 18, wherein the output device is a portable device and is one of a mobile phone, a tablet, a television (TV) device, and a laptop computer (Major: paragraph 16, lines 6-7: "The system 100 may include a handheld computing device, such as a tablet computer, a smart phone, and/or the like"; also, paragraph 76, lines 17-18: "output devices 630 may include various output subsystems, such as one or more displays, speakers, and/or the like."; also, paragraph 74, lines 1-4: "the viewable model may be output on a display. As discussed above, viewable models may be integrated into any number of viewing platforms for viewing by an end-user").

Regarding claim 20, Thurston as modified by Major discloses the computer-implemented method of claim 19, wherein the diorama is animated to provide a visual representation of the scene (Thurston: paragraph 12, lines 29-53, as quoted for claim 10).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAI WEI TOMMY LI, whose telephone number is (571) 272-1170. The examiner can normally be reached 6:00 AM-4:00 PM EST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Xiao Wu, can be reached at (571) 272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JAI W LI/ Junior Examiner, Art Unit 2613
/XIAO M WU/ Supervisory Patent Examiner, Art Unit 2613
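
For orientation only, here is a toy sketch of the two claimed method families the rejections map against: claim 1's scene-model/diorama pipeline and claims 11-12's waypoint animation. This is our illustrative reading of the claim language, not the applicant's implementation and not anything disclosed by Thurston or Major; every name, shape, and parameter below is hypothetical.

```python
import numpy as np

# -- Claim 1's recited steps (toy stand-ins) --------------------------------

def build_scene_model(depth: np.ndarray, color: np.ndarray) -> np.ndarray:
    """Fuse captured depth and color into a point-cloud 'scene model':
    one row per pixel, columns = (x, y, z, r, g, b)."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    xyz = np.stack([xs.ravel(), ys.ravel(), depth.ravel()], axis=1).astype(float)
    return np.hstack([xyz, color.reshape(-1, 3).astype(float)])

def miniaturize(scene_model: np.ndarray, scale: float = 0.01) -> np.ndarray:
    """Scale the model down to a 'diorama' of the scene."""
    diorama = scene_model.copy()
    diorama[:, :3] *= scale
    return diorama

def oblique_view(diorama: np.ndarray, elev_deg: float = 45.0) -> np.ndarray:
    """Set a 'virtual camera': rotate the diorama about the x-axis and keep
    (x, y), yielding an oblique 2D view; a real renderer would rasterize
    this to the output device."""
    t = np.radians(elev_deg)
    rot = np.array([[1.0, 0.0, 0.0],
                    [0.0, np.cos(t), -np.sin(t)],
                    [0.0, np.sin(t), np.cos(t)]])
    return (diorama[:, :3] @ rot.T)[:, :2]

# -- Claims 11-12's waypoint animation (toy stand-in) ------------------------

def animate_between_waypoints(waypoints: np.ndarray, steps: int = 10) -> np.ndarray:
    """Linearly interpolate a digital asset's position between neighboring
    waypoints: one plausible reading of 'animation of the digital asset
    between the neighboring virtual points'."""
    frames = []
    for a, b in zip(waypoints[:-1], waypoints[1:]):
        for s in range(steps):
            frames.append(a + (b - a) * (s / steps))
    frames.append(waypoints[-1])
    return np.array(frames)

# Tiny smoke test with random captured data (hypothetical shapes):
depth = np.random.rand(4, 4)
color = np.random.rand(4, 4, 3)
view = oblique_view(miniaturize(build_scene_model(depth, color)))
path = animate_between_waypoints(np.array([[0.0, 0, 0], [1, 1, 0], [1, 2, 3]]))
print(view.shape, path.shape)  # (16, 2) (21, 3)
```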

Prosecution Timeline

Jun 11, 2024 — Application Filed
Jan 29, 2026 — Non-Final Rejection — §102, §103 (current)

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: Favorable
Median Time to Grant: 2y 9m
PTA Risk: Low

Based on 0 resolved cases by this examiner; grant probability derived from career allow rate.
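
How a 1-2 round projection can fall out of an allow rate: one common simplification (assumed here, since the dashboard's model is not published) treats each Office action round as an independent shot at allowance, making the round count geometric. A sketch, with a hypothetical 60% per-round allowance chance chosen only to reproduce the 1-2 round ballpark:

```python
def expected_oa_rounds(p_allow_per_round: float, max_rounds: int = 10) -> float:
    """Mean of a geometric distribution truncated at max_rounds: the expected
    number of OA rounds before grant if each round independently ends in
    allowance with probability p_allow_per_round."""
    expectation, survive = 0.0, 1.0
    for k in range(1, max_rounds + 1):
        expectation += k * survive * p_allow_per_round
        survive *= 1.0 - p_allow_per_round
    return expectation + max_rounds * survive  # mass beyond the truncation

print(round(expected_oa_rounds(0.60), 2))  # ~1.67 rounds
```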
