Prosecution Insights
Last updated: April 19, 2026
Application No. 18/793,212

DISPLAYING A SCENE TO A SUBJECT WHILE CAPTURING THE SUBJECTS ACTING PERFORMANCE USING MULTIPLE SENSORS

Non-Final OA · §103 · §DP
Filed: Aug 02, 2024
Examiner: BEUTEL, WILLIAM A
Art Unit: 2616
Tech Center: 2600 — Communications
Assignee: Netflix Inc.
OA Round: 1 (Non-Final)
Grant Probability: 70% (Favorable)
OA Rounds: 1-2
To Grant: 2y 7m
With Interview: 90%

Examiner Intelligence

Career Allow Rate: 70% (328 granted / 469 resolved), +7.9% vs TC avg (above average)
Interview Lift: +20.4% among resolved cases with an interview (a strong lift)
Typical Timeline: 2y 7m average prosecution; 28 applications currently pending
Career History: 497 total applications across all art units
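
The allowance figures above are simple ratios over the examiner's resolved cases. Below is a minimal Python sketch of how such figures are typically derived; the granted/resolved totals come from this report, while the with/without-interview split is a hypothetical placeholder used only to illustrate how an interview lift is computed (the report states the resulting lift, not the underlying split).

```python
# Career allowance rate and interview lift, as typically derived.
# granted/resolved come from the report above; the interview split below is a
# hypothetical placeholder, since only the resulting lift is reported.

granted, resolved = 328, 469
career_allow_rate = granted / resolved
print(f"Career allow rate: {career_allow_rate:.1%}")  # ~69.9%, displayed as 70%

# Hypothetical split of the same 469 resolved cases by whether an interview was held.
with_interview = {"granted": 102, "resolved": 120}
without_interview = {"granted": 226, "resolved": 349}

lift = (with_interview["granted"] / with_interview["resolved"]
        - without_interview["granted"] / without_interview["resolved"])
print(f"Interview lift: {lift:+.1%}")  # ~+20.2% with these placeholder counts
```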

Statute-Specific Performance

§101: 9.9% (-30.1% vs TC avg)
§103: 49.8% (+9.8% vs TC avg)
§102: 10.7% (-29.3% vs TC avg)
§112: 22.0% (-18.0% vs TC avg)
Tech Center average values are estimates. Based on career data from 469 resolved cases.
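
Each statute-specific rate above is reported alongside its delta against the Tech Center average estimate, so that baseline can be recovered by subtracting the delta from the examiner's rate. A short sketch using the values copied from the table above:

```python
# Recover the implied Tech Center average estimate behind each delta
# (rates and deltas copied from the Statute-Specific Performance table).
examiner_rate = {"§101": 9.9, "§103": 49.8, "§102": 10.7, "§112": 22.0}   # percent
delta_vs_tc = {"§101": -30.1, "§103": 9.8, "§102": -29.3, "§112": -18.0}  # points

for statute, rate in examiner_rate.items():
    tc_avg_estimate = rate - delta_vs_tc[statute]
    print(f"{statute}: examiner {rate:.1f}% vs TC average estimate {tc_avg_estimate:.1f}%")
```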

Office Action

§103 §DP
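
Before the Office Action text itself, it may help to see the claimed control loop in code form: the independent claims reproduced below recite a controller that receives data from sensors arranged around a performance area, determines the subject's relative motion, generates repositioning instructions, and transmits them to repositioning systems. The sketch below is a hypothetical illustration of that sequence only; every name and the toy motion/pan-tilt math are placeholders, not taken from the application or from any reference cited in the rejection.

```python
from dataclasses import dataclass

# Hypothetical sketch of the control loop recited in the independent claims:
# receive multi-sensor data -> determine relative motion -> generate
# repositioning instructions -> transmit them to repositioning systems.
# All names and the toy math are placeholders, not from the application.

@dataclass
class SensorReading:
    sensor_id: int
    subject_xyz: tuple  # per-sensor estimate of the subject's position (x, y, z)

@dataclass
class RepositionInstruction:
    sensor_id: int
    pan_deg: float
    tilt_deg: float

class Controller:
    def __init__(self, repositioning_systems):
        # repositioning_systems: mapping of sensor_id -> object with an apply(instruction) method
        self.repositioning_systems = repositioning_systems
        self.last_position = None

    def determine_relative_motion(self, readings):
        # Fuse per-sensor estimates (simple average) and difference against the previous frame.
        n = len(readings)
        position = tuple(sum(r.subject_xyz[i] for r in readings) / n for i in range(3))
        motion = (None if self.last_position is None
                  else tuple(p - q for p, q in zip(position, self.last_position)))
        self.last_position = position
        return motion

    def generate_instructions(self, motion, readings):
        if motion is None:
            return []
        dx, dy, _ = motion
        # Toy mapping from planar motion to per-sensor pan/tilt adjustments.
        return [RepositionInstruction(r.sensor_id, pan_deg=5.0 * dx, tilt_deg=5.0 * dy)
                for r in readings]

    def step(self, readings):
        motion = self.determine_relative_motion(readings)
        for instruction in self.generate_instructions(motion, readings):
            self.repositioning_systems[instruction.sensor_id].apply(instruction)  # transmit
```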
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-20 are rejected on the grounds of nonstatutory double patenting. Claims 1-3, 8-10, 13-16 and 20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 5 and 10 of U.S. Patent No. 11,810,254.
Although the claims at issue are not identical, they are not patentably distinct from each other because the claims in the present application are anticipated by the claims contained in U.S. Patent No. 11,810,254.

The following table illustrates the conflicting claim pairs:

Present App:        1   2   3   8   9   10   13   14   15   16   20
US Pat 11,810,254:  1   5   10  1   1   1    1    1    5    10   1

The following table illustrates the limitations of claim 1 of the present application when compared against the limitations of claim 1 of U.S. Patent No. 11,810,254:

Present application: 1. A system comprising:
US Pat. No. 11,810,254: 1. A system comprising:

US Pat. No. 11,810,254: a first set of display panels surrounding an area;

Present application: a plurality of sensors positioned at multiple different angles around an area, each sensor being configured to capture sensor data of a subject positioned within the area;
US Pat. No. 11,810,254: a set of sensors positioned at a set of angles, each angle different from other angles of the set, each sensor of the set configured to capture sensor data of a subject positioned within the area;

Present application: one or more repositioning systems coupled to one or more of the sensors, the repositioning systems being configured to reposition the sensors in response to receiving an instruction; and
US Pat. No. 11,810,254: one or more repositioning systems coupled to at least a subset of the sensors, a repositioning system configured to reposition a sensor in response to receiving an instruction; and

Present application: a controller coupled to the plurality of sensors, the controller being configured to:
US Pat. No. 11,810,254: a controller coupled to the first set of display panels and to the set of sensors, the controller configured to:

US Pat. No. 11,810,254: transmit content to the first set of display panels for display, the content displayed by the first set of display panels comprising a multidimensional scene;

Present application: receive sensor data of the subject within the area captured by the plurality of sensors;
US Pat. No. 11,810,254: receive sensor data of the subject within the area captured by a plurality of sensors of the set of sensors while the multidimensional scene is displayed by the first set of display panels;

Present application: determine, based on the sensor data, a relative motion of the subject with respect to one or more objects within the area;
US Pat. No. 11,810,254: determine, based on the sensor data, a relative motion of the subject with respect to the first set of display panels;

Present application: generate instructions for repositioning one or more of the sensors based on the determined relative motion of the subject within the area; and
US Pat. No. 11,810,254: generate instructions for repositioning one or more of the sensors based on the determined location relative motion of the subject within the area; and

Present application: transmit the instructions to at least one of the repositioning systems to reposition the sensors according to the generated instructions.
US Pat. No. 11,810,254: transmit the instructions to at least a set of the one or more repositioning systems to reposition at least the subset of sensors according to the generated instructions.

The following table illustrates the limitations of claim 14 of the present application when compared against the limitations of claim 1 of U.S. Patent No. 11,810,254:

Present application: 14. A video capture device comprising:
US Pat. No. 11,810,254: 1. A system comprising:

US Pat. No. 11,810,254: a first set of display panels surrounding an area;

Present application: a plurality of sensors positioned at multiple different angles around an area, each sensor being configured to capture sensor data of a subject positioned within the area;
US Pat. No. 11,810,254: a set of sensors positioned at a set of angles, each angle different from other angles of the set, each sensor of the set configured to capture sensor data of a subject positioned within the area;

Present application: one or more repositioning systems coupled to one or more of the sensors, the repositioning systems being configured to reposition the sensors in response to receiving an instruction; and
US Pat. No. 11,810,254: one or more repositioning systems coupled to at least a subset of the sensors, a repositioning system configured to reposition a sensor in response to receiving an instruction; and

Present application: a controller coupled to the plurality of sensors, the controller being configured to:
US Pat. No. 11,810,254: a controller coupled to the first set of display panels and to the set of sensors, the controller configured to:

US Pat. No. 11,810,254: transmit content to the first set of display panels for display, the content displayed by the first set of display panels comprising a multidimensional scene;

Present application: receive sensor data of the subject within the area captured by the plurality of sensors;
US Pat. No. 11,810,254: receive sensor data of the subject within the area captured by a plurality of sensors of the set of sensors while the multidimensional scene is displayed by the first set of display panels;

Present application: determine, based on the sensor data, a relative motion of the subject with respect to one or more objects within the area;
US Pat. No. 11,810,254: determine, based on the sensor data, a relative motion of the subject with respect to the first set of display panels;

Present application: generate instructions for repositioning one or more of the sensors based on the determined relative motion of the subject within the area; and
US Pat. No. 11,810,254: generate instructions for repositioning one or more of the sensors based on the determined location relative motion of the subject within the area; and

Present application: transmit the instructions to at least one of the repositioning systems to reposition the sensors according to the generated instructions.
US Pat. No. 11,810,254: transmit the instructions to at least a set of the one or more repositioning systems to reposition at least the subset of sensors according to the generated instructions.

The following table illustrates the limitations of claim 20 of the present application when compared against the limitations of claim 1 of U.S. Patent No. 11,810,254:

Present application: 20. An apparatus comprising:
US Pat. No. 11,810,254: 1. A system comprising:

US Pat. No. 11,810,254: a first set of display panels surrounding an area;

Present application: a plurality of sensors positioned at multiple different angles around an area, each sensor being configured to capture sensor data of a subject positioned within the area;
US Pat. No. 11,810,254: a set of sensors positioned at a set of angles, each angle different from other angles of the set, each sensor of the set configured to capture sensor data of a subject positioned within the area;

Present application: one or more repositioning systems coupled to one or more of the sensors, the repositioning systems being configured to reposition the sensors in response to receiving an instruction; and
US Pat. No. 11,810,254: one or more repositioning systems coupled to at least a subset of the sensors, a repositioning system configured to reposition a sensor in response to receiving an instruction; and

Present application: a controller coupled to the plurality of sensors, the controller being configured to:
US Pat. No. 11,810,254: a controller coupled to the first set of display panels and to the set of sensors, the controller configured to:

US Pat. No. 11,810,254: transmit content to the first set of display panels for display, the content displayed by the first set of display panels comprising a multidimensional scene;

Present application: receive sensor data of the subject within the area captured by the plurality of sensors;
US Pat. No. 11,810,254: receive sensor data of the subject within the area captured by a plurality of sensors of the set of sensors while the multidimensional scene is displayed by the first set of display panels;

Present application: determine, based on the sensor data, a relative motion of the subject with respect to one or more objects within the area;
US Pat. No. 11,810,254: determine, based on the sensor data, a relative motion of the subject with respect to the first set of display panels;

Present application: generate instructions for repositioning one or more of the sensors based on the determined relative motion of the subject within the area; and
US Pat. No. 11,810,254: generate instructions for repositioning one or more of the sensors based on the determined location relative motion of the subject within the area; and

Present application: transmit the instructions to at least one of the repositioning systems to reposition the sensors according to the generated instructions.
US Pat. No. 11,810,254: transmit the instructions to at least a set of the one or more repositioning systems to reposition at least the subset of sensors according to the generated instructions.

Claims 4 and 17 are rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent No. 11,810,254 in view of Koch et al. (US 2015/0294492 A1). Regarding claim 4, the limitations included from claim 2 are rejected under double patenting based on the same rationale as claim 2 set forth above. Further regarding claim 4, Koch discloses: wherein the controller is further configured to render video data that depicts an acting performance of the subject in a video scene using the generated three-dimensional representation of the subject.
(Koch, ¶3: A geometry and texture of the 3-D representation may be generated based on the plurality of 2D video sequences, and motion of the 3-D representation in the virtual 3-D space may be based on motion of the subject in the real 3-D space; ¶30-31 discloses using actor for motion sequence and generating new synthetic views of motion sequence, such as fight scenes, etc.; ¶45: generating a 3-D representation of the subject in a virtual 3-D space (104), where the texture and geometry of the 3-D representation can be generated based on the texture and geometries captured in the plurality of 2-D video sequences from the cameras, such that the motion of the 3-D representation in the virtual 3-D space can match the motion of the subject in the real 3-D space; ¶47: generating 2D video sequence using new camera view in virtual 3D space) US Pat. No. 11,810,254 and Koch are directed to camera systems for capturing moving objects for recording video. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success to modify the system and method for moving a camera based on a recorded subject’s movement as provided by US Pat. No. 11,810,254, by using the captured video of a subject to generate a 3D model for providing a textured animated representation for generating video as provided by Koch, using known electronic interfacing and programming techniques. The modification results in an improved video processing system for generating actor-based film or video by allowing easier implementation of additional special effects, and providing easier implementation for more dynamic and artistic content. Regarding claim 17, the system of claim 4 comprises substantially the same components as claim 17 and as such claim 17 is rejected based on the same rationale as claim 4 set forth above. Claims 5-6 and 18-19 are rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent No. 11,810,254 in view of Koch et al (US 2015/0294492 A1) and Cordes et al. (US 2020/0145644 A1). Regarding claim 5, the limitations included from claim 4 are rejected under double patenting for the same reasons as set forth above for claim 4. Further regarding claim 5, Cordes discloses: wherein the video scene comprises at least one of a two-dimensional video scene or a three-dimensional video scene. (Cordes, Fig. 2 and ¶81: scenery images presented on displays 104 to generate immersive environment; ¶86 discloses dynamically changing scenery images, i.e. clouds moving or trees blowing in wind; ¶114 discloses video display screens; ¶125: the processing unit or another component of system 1400 can include and/or operate a real-time gaming engine or other similar real-time rendering engine. Such an engine can render two-dimensional (2D) images from 3D data at interactive frame rates (e.g., 24, 48, 72, 96, or more frames per second). In one aspect, the real-time gaming engine can load the virtual environment for display on the displays surrounding the performance area) US Pat. No. 11,810,254 and Koch are directed to camera systems for capturing moving objects for recording video. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success to modify the system and method for moving a camera based on a recorded subject’s movement as provided by US Pat. No. 
11,810,254, using the captured video of a subject to generate a 3D model for providing a textured animated representation for generating video as provided by Koch, by further including the scene data type as provided by Cordes, using known electronic interfacing and programming techniques. The modification results in an improved video processing system for generating actor-based film or video by allowing for different types of common scene data for use, providing improved images presentation and allowing for more versatile and creator-designed use. Regarding claim 18, the system of claim 5 comprises substantially the same components as claim 18 and as such claim 18 is rejected based on the same rationale as claim 5 set forth above. Regarding claim 6, Cordes further discloses: wherein the rendered video data includes the acting performance of the subject in the video scene in combination with background environment content that is rendered from the perspective of the subject (Cordes, ¶8: Based on the orientation and movement (and other attributes such as lens aperture and focal length) of the taking camera, the content production system can adjust a portion of the virtual environment displayed by the immersive cave or walls in real-time or at interactive frame rates to correspond to orientation and position of the camera. In this way, images of the virtual environment can be perspective-correct (from the tracked position and perspective of the taking camera) over a performance of the performer; ¶13: the images of the virtual environment can be updated over a performance such that the perspective of the virtual environment displayed compensates for corresponding changes to the positioning and orientation of the taking camera, where a particular portion of the LED or LCD walls can display images of the global view render or images of the perspective-correct render depending on the position and orientation of the taking camera at a given point during a performance; ¶15: The performer is at least partially surrounded by one or more displays presenting images of a virtual environment; ¶85: taking camera 112 can move during a performance as performer 210 moves – i.e. coordinating movement results in content rendered from perspective of subject, as when performer moves, camera moves and perspective of virtual environment is updated to change in positioning) US Pat. No. 11,810,254 and Koch are directed to camera systems for capturing moving objects for recording video. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success to modify the system and method for moving a camera based on a recorded subject’s movement as provided by US Pat. No. 11,810,254, using the captured video of a subject to generate a 3D model for providing a textured animated representation for generating video as provided by Koch, by further including the scene data type as provided by Cordes, using known electronic interfacing and programming techniques. The modification results in an improved video processing system for generating actor-based film or video by allowing for different types of common scene data for use, providing improved images presentation and allowing for more versatile and creator-designed use, and better accommodating the aspects of the performance for improved coherence between video data. 
Regarding claim 19, the system of claim 6 comprises substantially the same components as claim 19 and as such claim 19 is rejected based on the same rationale as claim 6 set forth above.

Claim 7 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent No. 11,810,254 in view of Calabrese et al. (Calabrese et al., “DHP19: Dynamic Vision Sensor 3D Human Pose Dataset,” 2019; 10 pages – reference provided in 8/28/2025 IDS). Regarding claim 7, the limitations included from claim 1 are rejected based on the same rationale as the double patenting rejection for claim 1 set forth above. Further regarding claim 7, Calabrese discloses: wherein the controller is further configured to filter one or more other objects in the area that are within a field of view of the plurality of sensors. (Calabrese, p. 5, left column, “DVS events preprocessing”: The raw event streams are preprocessed using a set of filters to clean them from the unwanted signal, including removing background activity, hot pixels, and masking out spots where events are generated due to infrared light emitted from BMC cameras) US Pat. No. 11,810,254 and Calabrese are directed to camera systems for capturing moving objects for recording video. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success to modify the system and method for moving a camera based on a recorded subject’s movement as provided by US Pat. No. 11,810,254, by further utilizing the filtering of image data to remove unwanted objects/features/noise as provided by Calabrese, using known electronic interfacing and programming techniques. The modification results in an improved image processing of videos of actors by cleaning up the images to remove unwanted or undesired features of the recorded video for clearer and improved visual results.

Claim 11 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent No. 11,810,254 in view of Khalid et al. (US 2017/0287220 A1). Regarding claim 11, the limitations included from claim 8 are rejected based on the same rationale as the double patenting rejection of claim 8 set forth above. Further regarding claim 11, Khalid discloses: wherein the video content comprises video data captured using a 360-degree camera (In particular, Khalid discloses a technique for capturing surround video for presentation in a fully immersive environment using a 360 degree camera; in particular, Khalid, ¶27: 360 degree video; Fig. 4 and ¶51: system operates with 360-degree camera 402 to capture and generate 360 degree image of real-world scenery corresponding to camera; ¶52: camera 402 incorporated with content creator system to capture and process representative 360 degree images of real-world scenery and transmit the data to the system to process, where After preparing and/or processing the data representative of the 360-degree images to generate an immersive virtual reality world based on the 360-degree images, system 100 may provide overall data representative of the immersive virtual reality world to media player devices 206; Figs. 10A-10B and ¶¶85-86 discuss perspective of view for user with content sectors 1002; Also see Fig. 13 and ¶97). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success to modify the system and method for moving a camera based on a recorded subject’s movement which presents images on displays as provided by US Pat. No. 11,810,254, by obtaining the surround video images for presenting in a content creator system from a 360 degree camera as provided by Khalid, using known electronic interfacing and programming techniques. The modification merely substitutes one known type of image data for surround scenery images and video for another, yielding predictable results of obtaining video data for an immersive display from a 360-degree video camera, as opposed to other types of capture techniques or other types of video footage. Moreover, the modification results in an improved immersive content creation system by allowing for additional capture of scenery for use with the content generation system, allowing more creative input by a user/director for generating the immersive content.

Claim 12 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent No. 11,810,254 in view of Khalid et al. (US 2017/0287220 A1), Cordes et al. (US 2020/0145644 A1) and Sanders et al. (US 2015/0348326 A1). Regarding claim 12, the limitations included from claim 11 are rejected based on the same rationale as claim 11 set forth above. Further regarding claim 12, Khalid further discloses: wherein the 360-degree camera captures a target scene (In particular, Khalid discloses a technique for capturing surround video for presentation in a fully immersive environment using a 360 degree camera; in particular, Khalid, ¶27: 360 degree video; Fig. 4 and ¶51: system operates with 360-degree camera 402 to capture and generate 360 degree image of real-world scenery corresponding to camera; ¶52: camera 402 incorporated with content creator system to capture and process representative 360 degree images of real-world scenery and transmit the data to the system to process, where After preparing and/or processing the data representative of the 360-degree images to generate an immersive virtual reality world based on the 360-degree images, system 100 may provide overall data representative of the immersive virtual reality world to media player devices 206; Figs. 10A-10B and ¶¶85-86 discuss perspective of view for user with content sectors 1002; Also see Fig. 13 and ¶97). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success to modify the system and method for moving a camera based on a recorded subject’s movement which presents images on displays as provided by US Pat. No. 11,810,254, by obtaining the surround video images for presenting in a content creator system from a 360 degree camera as provided by Khalid, using known electronic interfacing and programming techniques. The modification merely substitutes one known type of image data for surround scenery images and video for another, yielding predictable results of obtaining video data for an immersive display from a 360-degree video camera, as opposed to other types of capture techniques or other types of video footage.
Moreover, the modification results in an improved immersive content creation system by allowing for additional capture of scenery for use with the content generation system, allowing more creative input by a user/director for generating the immersive content. Cordes discloses: wherein the sensor data captures an acting performance of the subject using the plurality of sensors, (US Patent claim 1 states: capturing sensor data of an acting performance of the subject within the area using a set of sensors positioned at a set of angle; Cordes, ¶¶7-8: images presented in immersive environment in which a performance area is completely surrounded by display screens on which the immersive content is presented, and performance of performer as well as virtual environment displayed on image is captured by a camera; ¶81: Scenery images 214 of the virtual environment can be presented on the displays 104 to generate the immersive environment in which performer 210 can conduct his or her performance; ¶93: And, as immersive content is presented and updated on the displays, taking camera 112 can film the performance at the frame rate generating video of one or more performers and/or props on the stage with the immersive content generated in block 506 and displayed per block 508 in the background) and a target scene for the subject's acting performance (Cordes, ¶29: capturing plurality of images of performer performing in a performance area using a camera, and generating content based on the plurality of captured images; ¶109: At the end of the filming session, content captured by the taking cameras can then be used or further processed using various post processing techniques and systems to generate content, such as movies, television programming, online or streamed videos, etc.) It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success to modify the system and method for moving a camera based on a recorded subject’s movement which presents images on displays s as provided by US Pat. No. 11,810,254, by obtaining the surround video images for presenting in a content creator system from a 360 degree camera as provided by Khalid, by further including the scene data type as provided by Cordes, using known electronic interfacing and programming techniques. The modification results in an improved video processing system for generating actor-based film or video by allowing for different types of common scene data for use, providing improved images presentation and allowing for more versatile and creator-designed use, and better accommodating the aspects of the performance for improved coherence between video data. Sanders discloses: a target scene into which the subject's acting performance is to be inserted (Sanders, ¶59: subject 802 can be captured by camera 806 in a motion capture scenario; ¶60: The performance of subject 802 can be received by a computer system that uses the motion capture information to influence the movement of a virtual character 808, and the 3-D virtual scene and the movement of the virtual character 808 can be rendered in real time and presented in the second virtual-reality environment 812; ¶62: recorded subject performance combined in single virtual reality environment – see Fig. 
8) It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success to modify the system and method for moving a camera based on a recorded subject’s movement which presents images on displays s as provided by US Pat. No. 11,810,254, including obtaining the surround video images for presenting in a content creator system from a 360 degree camera as provided by Khalid, and including the scene data type as provided by Cordes, by further inserting an actor’s performance into a separately obtained scene as provided by Sanders, using known electronic interfacing and programming techniques. The modification results in an improved system and technique for generating video based on an actor’s performance by allowing for more creative license and more easily allowing for additional effects without requiring all production to be local in space and time; also reducing costs. Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claim(s) 1, 2, 8-10, 13-15, and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Cordes et al. (US 2020/0145644 A1) in view of Aman et al. (US 2007/0279494 A1). Regarding claim 1, Cordes discloses: A system (Cordes, Abstract and ¶68: immersive content production system; ¶44: an immersive content presentation system can include one or more processors; and one or more memory devices comprising instructions that, when executed by the one or more processors, cause the one or more processors to perform operations; Fig. 14 and ¶122: computer system) comprising: a plurality of sensors positioned at multiple different angles around an area, each sensor being configured to capture sensor data of a subject positioned within the area (Cordes, Fig. 1 and ¶73: camera attached to rig aimed at performance area to capture performance of a performer, and other cameras (e.g., motion capture cameras 122 discussed below) can be directed at the taking camera configured to capture the performance and one or more markers can be attached to the taking camera; Also Fig. 
7 and ¶101: Some embodiments of the invention include multiple taking cameras; ¶102: The taking cameras 112a, 112b can be pointed in different directions and have different fields of views); one or more repositioning systems coupled to one or more of the sensors, the repositioning systems being configured to reposition the sensors in response to receiving an instruction (Cordes, ¶73: sensors can be used to determine the position and orientation of the taking camera during a performance, where the taking camera is moved and/or oriented during the performance); and a controller coupled to the plurality of sensors, (Cordes, Fig. 14 and ¶122: computer system including a processing unit 1404 connected to cameras 1434; ¶125 discloses the processor executing programs and processes for system) the controller being configured to: receive sensor data of the subject within the area captured by the plurality of sensors (Cordes, ¶77: In some embodiments, content production system 100 can further include one or more depth sensors 120 and/or one or more motion capture cameras 122. During a performance performed within the performance area 102, content production system 100 can detect the motion and/or positioning of one or more performers within the performance area. Such detection can be based on markers or sensors worn by a performer as well as by depth and/or other motion detection sensors 120 and/or by motion capture cameras 122. For example, an array of depth sensors 120 can be positioned in proximity to and directed at the performance area 102. For instance, the depth sensors 120 can surround the perimeter of the performance area. In some embodiments, the depth sensors 120 measure the depth of different parts of a performer in performance area 102 over the duration of a performance. The depth information can then be stored and used by the content production system to determine the positioning of the performer over the course of the performance.); determine, based on the sensor data, a relative motion of the subject with respect to one or more objects within the area (Cordes, ¶77: the depth sensors 120 measure the depth of different parts of a performer in performance area 102 over the duration of a performance, and the depth information can then be stored and used by the content production system to determine the positioning of the performer over the course of the performance; ¶78: the one or more depth sensors 120 can receive emitted infrared radiation to generate 3-D depth models of a performer, along with the floor, walls, and/or ceiling of the first performance area 102; ¶79: Motion cameras 122 can be part of a motion capture system that can track the movement of performers or objects within system 100); generate instructions for repositioning one or more of the sensors based on the determined relative motion of the subject within the area (Cordes, ¶85: taking camera 112 can move during a performance as performer 210 moves or to capture the performer from a different angle); and transmit the instructions to at least one of the repositioning systems to reposition the sensors according to the generated instructions. Cordes does not explicitly disclose the particular manner in which instructions are transmitted to the repositioning system, as claimed.
Aman discloses: determine, based on the sensor data, a relative motion of the subject with respect to one or more objects within the area (Aman, ¶55: collecting overhead film used for tracking; ¶57: analyzing the video stream to determine both the location and orientation of the participants and game objects; ¶376: automatic game filming system dynamically determines what game actions to follow on tracking surface; ¶377: system 100 is able to determine the location, such as (rx, cx) of the center of the player's helmet sticker 9a, that serves as an acceptable approximation of the current location of the player 10, and filming station can be dedicated to follow key players); generate instructions for repositioning one or more of the sensors based on the determined relative motion of the subject within the area (Aman, ¶112: automatically tracking participant and game object movement using a multiplicity of substantially overhead viewing cameras; ¶114: collecting video from one or more perspective view cameras that are automatically directed to follow the game action based upon the determined participant and game object movement); and transmit the instructions to at least one of the repositioning systems to reposition the sensors according to the generated instructions (Aman, ¶83: the present inventors prefer the use of automated perspective filming cameras whose pan and tilt angles as well as zoom depths are automatically controlled based upon information derived in real-time from the overhead tracking system; ¶378: once system 200 has processed tracking data from system 100 and determined its desired centers-of-views 201, it will then automatically transmit these directives to the appropriate filming stations, such as 40c, located throughout the playing venue, where “processing element 45a, of station 40c, receives directives from system 200 and controls the automatic functioning of pan motor 45b, tilt motor 45c and zoom motor 45d. Motors 45b, 45c and 45d effectively control the center of view of camera 45f-cv.” Also ¶379 discussing camera 45f is controllably panned, tilted, and zoomed to follow desired game action images of players). Both Cordes and Aman are directed to camera systems for capturing moving objects for recording video. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success to modify the system and method for moving a camera based on a recorded subject’s movement as provided by Cordes, by incorporating the technique of data communication between devices for controlling the automatic control of camera orientation/position to follow target objects for recording as provided by Aman, using known electronic interfacing and programming techniques. The modification results in an improved camera recording system that tracks movement of a subject for recording using a distributed and automated process for easier control over the camera without requiring human intervention and further allowing for a more efficient communication between components for improved coordinated control of the system components.

Regarding claim 14, Cordes discloses: A video capture device comprising components.
(Cordes, Abstract and ¶68: immersive content production system; ¶44: an immersive content presentation system can include one or more processors; and one or more memory devices comprising instructions that, when executed by the one or more processors, cause the one or more processors to perform operations; Fig. 14 and ¶122: computer system) Further regarding claim 14, the device contains the same components performing the same operations as claim 1, and as such claim 14 is further rejected based on the same rationale as claim 1 set forth above. Regarding claim 20, Cordes discloses: An apparatus comprising components. (Cordes, Abstract and ¶68: immersive content production system; ¶44: an immersive content presentation system can include one or more processors; and one or more memory devices comprising instructions that, when executed by the one or more processors, cause the one or more processors to perform operations; Fig. 14 and ¶122: computer system) Further regarding claim 20, the apparatus contains the same components performing the same operations as claim 1, and as such claim 20 is further rejected based on the same rationale as claim 1 set forth above. Regarding claim 2, Cordes modified by Aman further discloses: wherein the controller is further configured to generate a three-dimensional representation of the subject from the sensor data of the subject captured by the plurality of sensors (Cordes, ¶78: Software in the depth sensors 120 can process the IR information received from the depth sensor 120 and use an artificial intelligence machine-learning algorithm to map the visual data and create three-dimensional (3-D) depth models of solid objects in the first performance area 102. For example, the one or more depth sensors 120 can receive emitted infrared radiation to generate 3-D depth models of a performer, along with the floor, walls, and/or ceiling of the first performance area 102) Aman further discloses: wherein the controller is further configured to generate a three-dimensional representation of the subject from the sensor data of the subject captured by the plurality of sensors (Aman, ¶87: employing the information collected by the overhead cameras to create a topological three-dimensional profile of any and all participants who may happen to be in the same field-of-view of the current image) Both Cordes and Aman are directed to camera systems for capturing moving objects for recording video. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success to modify the system and method for moving a camera based on a recorded subject’s movement as provided by Cordes, by using a the 3D profile generation of the subject for tracking as provided by Aman, using known electronic interfacing and programming techniques. The modification results in an improved video recording of moving subjects by better identifying and isolating the relevant data for tracking, for improved location data and providing improved image data for reconstruction of video (see e.g. Aman, ¶88) Regarding claim 15, the system of claim 2 comprises substantially the same components as claim 15 and as such claim 15 is rejected based on the same rationale as claim 2 set forth above. Regarding claim 8, Cordes further discloses: further comprising a plurality of display panels configured to display video content within the area (Cordes, Fig. 
1 and ¶¶69-71: image displays 104 for displaying virtual environment content; ¶81: Scenery images 214 of the virtual environment can be presented on the displays 104 to generate the immersive environment in which performer 210 can conduct his or her performance). Regarding claim 9, Cordes further discloses: wherein the controller is further configured to transmit video content to the plurality of display panels for display, the video content displayed by the plurality of display panels comprising a multidimensional scene (Cordes, Fig. 2 and ¶81: Scenery images 214 of the virtual environment can be presented on the displays 104 to generate the immersive environment in which performer 210 can conduct his or her performance). Regarding claim 10, Cordes further discloses: wherein the sensor data is captured while the multidimensional scene is displayed by the plurality of display panels (Cordes, Fig. 2 and ¶81: Scenery images 214 of the virtual environment can be presented on the displays 104 to generate the immersive environment in which performer 210 can conduct his or her performance (e.g., act out a scene in a movie being produced); ¶82: Scenery images 214 can also provide background for the video content captured by a taking camera 112 (e.g., a visible light camera)). Regarding claim 13, Cordes further discloses: wherein the relative motion of the subject is determined relative to the plurality of display panels (Cordes, ¶8: Based on the orientation and movement (and other attributes such as lens aperture and focal length) of the taking camera, the content production system can adjust a portion of the virtual environment displayed by the immersive cave or walls in real-time or at interactive frame rates to correspond to orientation and position of the camera. In this way, images of the virtual environment can be perspective-correct (from the tracked position and perspective of the taking camera) over a performance of the performer; ¶13: the images of the virtual environment can be updated over a performance such that the perspective of the virtual environment displayed compensates for corresponding changes to the positioning and orientation of the taking camera, where a particular portion of the LED or LCD walls can display images of the global view render or images of the perspective-correct render depending on the position and orientation of the taking camera at a given point during a performance; ¶15: The performer is at least partially surrounded by one or more displays presenting images of a virtual environment; ¶85: embodiments of the invention can render the portion 326 of the displays 104 that corresponds to frustum 318 as perspective-correct images that can update based on movement of the taking camera 112, where taking camera 112 can move during a performance as performer 210 moves – i.e. coordinating movement results in content rendered from perspective of subject, as when performer moves, camera moves and perspective of virtual environment is updated to change in positioning).

Claim(s) 3-6 and 16-19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Cordes et al. (US 2020/0145644 A1) in view of Aman et al. (US 2007/0279494 A1) and in further view of Koch et al. (US 2015/0294492 A1). Regarding claim 3, the limitations included from claim 2 are rejected based on the same rationale as claim 2 set forth above.
Further regarding claim 3, Koch discloses: generate a three-dimensional representation of the subject from the sensor data of the subject captured by the plurality of sensors (Koch, ¶3: The method may include receiving a plurality of 2-D video sequences of a subject in a real 3-D space. Each 2-D video sequence in the plurality of 2-D video sequences may depict the subject from a different perspective. The method may also include generating a 3-D representation of the subject in a virtual 3-D space; Also ¶30) wherein the controller is further configured to generate texture data for applying texture to the three-dimensional representation of the subject from the sensor data of the subject captured by the plurality of sensors. (Koch, ¶3: A geometry and texture of the 3-D representation may be generated based on the plurality of 2D video sequences, and motion of the 3-D representation in the virtual 3-D space may be based on motion of the subject in the real 3-D space; ¶30: surface properties and textural properties of the actor can be used to generate a 3-D geometry with textures and colors that match the actual actor at a level of detail that can capture every fold in the actor's clothing; ¶45: generating a 3-D representation of the subject in a virtual 3-D space (104), where the texture and geometry of the 3-D representation can be generated based on the texture and geometries captured in the plurality of 2-D video sequences from the cameras, such that the motion of the 3-D representation in the virtual 3-D space can match the motion of the subject in the real 3-D space; ¶46: generate 3D representation of geometry and textures can then be applied to the 3D representation) Cordes, Aman and Koch are directed to camera systems for capturing moving objects for recording video. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success to modify the system and method for moving a camera based on a recorded subject’s movement as provided by Cordes, by incorporating the technique of data communication between devices for controlling the automatic control of camera orientation/position to follow target objects for recording and using a the 3D profile generation of the subject for tracking as provided by Aman, by further using the captured video of a subject to generate a 3D model for providing a textured animated representation for generating video as provided by Koch, using known electronic interfacing and programming techniques. The modification results in an improved video processing system for generating actor-based film or video by allowing easier implementation of additional special effects, and providing easier implementation for more dynamic and artistic content. Regarding claim 16, the system of claim 3 comprises substantially the same components as claim 16 and as such claim 16 is rejected based on the same rationale as claim 3 set forth above. Regarding claim 4, the limitations included from clam 2 are rejected based on the same rationale as claim 2 set forth above. Further regarding claim 4, Koch discloses: wherein the controller is further configured to render video data that depicts an acting performance of the subject in a video scene using the generated three-dimensional representation of the subject. 
(Koch, ¶3: A geometry and texture of the 3-D representation may be generated based on the plurality of 2D video sequences, and motion of the 3-D representation in the virtual 3-D space may be based on motion of the subject in the real 3-D space; ¶30-31 discloses using actor for motion sequence and generating new synthetic views of motion sequence, such as fight scenes, etc.; ¶45: generating a 3-D representation of the subject in a virtual 3-D space (104), where the texture and geometry of the 3-D representation can be generated based on the texture and geometries captured in the plurality of 2-D video sequences from the cameras, such that the motion of the 3-D representation in the virtual 3-D space can match the motion of the subject in the real 3-D space; ¶47: generating 2D video sequence using new camera view in virtual 3D space) Cordes, Aman and Koch are directed to camera systems for capturing moving objects for recording video. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success to modify the system and method for moving a camera based on a recorded subject’s movement as provided by Cordes, by incorporating the technique of data communication between devices for controlling the automatic control of camera orientation/position to follow target objects for recording and using a the 3D profile generation of the subject for tracking as provided by Aman, by further using the captured video of a subject to generate a 3D model for providing a textured animated representation for generating video as provided by Koch, using known electronic interfacing and programming techniques. The modification results in an improved video processing system for generating actor-based film or video by allowing easier implementation of additional special effects, and providing easier implementation for more dynamic and artistic content. Regarding claim 17, the system of claim 4 comprises substantially the same components as claim 17 and as such claim 17 is rejected based on the same rationale as claim 4 set forth above. Regarding claim 5, Cordes further discloses: wherein the video scene comprises at least one of a two-dimensional video scene or a three-dimensional video scene. (Cordes, Fig. 2 and ¶81: scenery images presented on displays 104 to generate immersive environment; ¶86 discloses dynamically changing scenery images, i.e. clouds moving or trees blowing in wind; ¶114 discloses video display screens; ¶125: the processing unit or another component of system 1400 can include and/or operate a real-time gaming engine or other similar real-time rendering engine. Such an engine can render two-dimensional (2D) images from 3D data at interactive frame rates (e.g., 24, 48, 72, 96, or more frames per second). In one aspect, the real-time gaming engine can load the virtual environment for display on the displays surrounding the performance area) Regarding claim 18, the system of claim 5 comprises substantially the same components as claim 18 and as such claim 18 is rejected based on the same rationale as claim 5 set forth above. 
Regarding claim 6, Cordes further discloses: wherein the rendered video data includes the acting performance of the subject in the video scene in combination with background environment content that is rendered from the perspective of the subject (Cordes, ¶8: Based on the orientation and movement (and other attributes such as lens aperture and focal length) of the taking camera, the content production system can adjust a portion of the virtual environment displayed by the immersive cave or walls in real-time or at interactive frame rates to correspond to orientation and position of the camera. In this way, images of the virtual environment can be perspective-correct (from the tracked position and perspective of the taking camera) over a performance of the performer; ¶13: the images of the virtual environment can be updated over a performance such that the perspective of the virtual environment displayed compensates for corresponding changes to the positioning and orientation of the taking camera, where a particular portion of the LED or LCD walls can display images of the global view render or images of the perspective-correct render depending on the position and orientation of the taking camera at a given point during a performance; ¶15: The performer is at least partially surrounded by one or more displays presenting images of a virtual environment; ¶85: taking camera 112 can move during a performance as performer 210 moves – i.e. coordinating movement results in content rendered from perspective of subject, as when performer moves, camera moves and perspective of virtual environment is updated to change in positioning) Regarding claim 19, the system of claim 6 comprises substantially the same components as claim 19 and as such claim 19 is rejected based on the same rationale as claim 6 set forth above. Claim(s) 7 is/are rejected under 35 U.S.C. 103 as being unpatentable over Cordes et al. (US 2020/0145644 A1) in view of Aman et al. (US 2007/0279494 A1), and in further view of Calabrese et al. (Calabrese et al., “DHP19: Dynamic Vision Sensor 3D Human Pose Dataset, 2019; 10 pages – reference provided in 8/28/2025 IDS) Regarding claim 7, the limitations included from claim 1 are rejected based on the same rationale as claim 1 set forth above. Further regarding claim 7, Calabrese discloses: wherein the controller is further configured to filter one or more other objects in the area that are within a field of view of the plurality of sensors. (Calabrese, p. 5, left column, “DVS events preprocessing”: The raw event streams are preprocessed using a set of filters to clean them from the unwanted signal, including removing background activity, hot pixels, and mask out spots where events are generated due to infrared light emitted from BMC cameras) Cordes, Aman and Calabrese are directed to camera systems for capturing moving objects for recording video. 
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success to modify the system and method for moving a camera based on a recorded subject’s movement as provided by Cordes, incorporating the technique of data communication between devices for controlling the automatic control of camera orientation/position to follow target objects for recording as provided by Aman, by further utilizing the filtering of image data to remove unwanted objects/features/noise as provided by Calabrese, using known electronic interfacing and programming techniques. The modification results in an improved image processing of videos of actors by cleaning up the images to remove unwanted or undesired features of the recorded video for clearer and improved visual results.

Claim(s) 11 is/are rejected under 35 U.S.C. 103 as being unpatentable over Cordes et al. (US 2020/0145644 A1) in view of Aman et al. (US 2007/0279494 A1) and Khalid et al. (US 2017/0287220 A1). Regarding claim 11, the limitations included from claim 8 are rejected based on the same rationale as claim 8 set forth above. Further regarding claim 11, Cordes discloses presenting a fully immersive display of video content or scenery images surrounding a performer (see Cordes Fig. 2 and ¶¶80-81). The only limitation not explicitly taught is how the video content is captured. Cordes discusses use of 360-degree video, but is not explicitly clear as to the use (see Cordes, ¶¶138-140). Khalid discloses: wherein the video content comprises video data captured using a 360-degree camera (In particular, Khalid discloses a technique for capturing surround video for presentation in a fully immersive environment using a 360 degree camera; in particular, Khalid, ¶27: 360 degree video; Fig. 4 and ¶51: system operates with 360-degree camera 402 to capture and generate 360 degree image of real-world scenery corresponding to camera; ¶52: camera 402 incorporated with content creator system to capture and process representative 360 degree images of real-world scenery and transmit the data to the system to process, where After preparing and/or processing the data representative of the 360-degree images to generate an immersive virtual reality world based on the 360-degree images, system 100 may provide overall data representative of the immersive virtual reality world to media player devices 206; Figs. 10A-10B and ¶¶85-86 discuss perspective of view for user with content sectors 1002; Also see Fig. 13 and ¶97). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success to modify the system and method for moving a camera based on a recorded subject’s movement which presents images on displays as provided by Cordes, incorporating the technique of data communication between devices for controlling the automatic control of camera orientation/position to follow target objects for recording as provided by Aman, by obtaining the surround video images for presenting in a content creator system from a 360 degree camera as provided by Khalid, using known electronic interfacing and programming techniques.
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the system and method for moving a camera based on a recorded subject's movement, which presents images on displays, as provided by Cordes, incorporating the technique of data communication between devices for automatic control of camera orientation/position to follow target objects for recording as provided by Aman, by obtaining the surround video images presented in a content creator system from a 360-degree camera as provided by Khalid, using known electronic interfacing and programming techniques. The modification merely substitutes one known type of image data for surround scenery images and video for another, yielding the predictable result of obtaining video data for an immersive display from a 360-degree video camera rather than from other capture techniques or other types of video footage. Moreover, the modification results in an improved immersive content creation system by allowing additional capture of scenery for use with the content generation system, allowing more creative input by a user/director when generating the immersive content.

Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Cordes et al. (US 2020/0145644 A1) in view of Aman et al. (US 2007/0279494 A1) and Khalid et al. (US 2017/0287220 A1), and further in view of Sanders et al. (US 2015/0348326 A1).

Regarding claim 12, the limitations carried over from claim 11 are rejected based on the same rationale as claim 11 set forth above. Further regarding claim 12, Cordes discloses wherein the sensor data captures an acting performance of the subject using the plurality of sensors (Cordes, ¶¶7-8: images are presented in an immersive environment in which a performance area is completely surrounded by display screens on which the immersive content is presented, and the performance of the performer, as well as the displayed virtual environment, is captured by a camera; ¶81: scenery images 214 of the virtual environment can be presented on the displays 104 to generate the immersive environment in which performer 210 can conduct his or her performance; ¶93: as immersive content is presented and updated on the displays, taking camera 112 can film the performance at the frame rate, generating video of one or more performers and/or props on the stage with the immersive content generated in block 506 and displayed per block 508 in the background), and a target scene for the subject's acting performance (Cordes, ¶29: capturing a plurality of images of a performer performing in a performance area using a camera, and generating content based on the plurality of captured images; ¶109: at the end of the filming session, content captured by the taking cameras can then be used or further processed using various post-processing techniques and systems to generate content such as movies, television programming, and online or streamed videos). Cordes as modified by Aman and Khalid further discloses wherein the 360-degree camera captures a target scene (see the Khalid disclosures cited above for claim 11, including ¶27, Fig. 4 and ¶¶51-52, Figs. 10A-10B and ¶¶85-86, and Fig. 13 and ¶97).
The combination of Cordes, Aman and Khalid applies to claim 12 for the same reasons set forth above with respect to claim 11. The only limitation not explicitly taught is that an actor's performance is inserted into a target scene, as opposed to the scene being recorded together with the performance itself.

Sanders discloses a target scene into which the subject's acting performance is to be inserted (Sanders, ¶59: subject 802 can be captured by camera 806 in a motion-capture scenario; ¶60: the performance of subject 802 can be received by a computer system that uses the motion-capture information to influence the movement of a virtual character 808, and the 3-D virtual scene and the movement of the virtual character 808 can be rendered in real time and presented in the second virtual-reality environment 812; ¶62: the recorded subject performance is combined into a single virtual-reality environment – see Fig. 8).

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the system and method for moving a camera based on a recorded subject's movement, which presents images on displays, as provided by Cordes, incorporating the technique of data communication between devices for automatic control of camera orientation/position to follow target objects for recording as provided by Aman, and obtaining the surround video images presented in a content creator system from a 360-degree camera as provided by Khalid, by further inserting an actor's performance into a separately obtained scene as provided by Sanders, using known electronic interfacing and programming techniques. The modification results in an improved system and technique for generating video based on an actor's performance by allowing more creative license and more easily allowing additional effects, without requiring all production to be local in space and time, while also reducing costs.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Crothers et al. (US 4,796,990) discloses technology from 1983 dealing with insertion of an object or actor within a separately created scene (see Crothers, Abstract and Figs. 4-6).
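For illustration only, the "insert an actor into a separately created scene" idea that Sanders and Crothers are cited for reduces, in its simplest digital form, to alpha compositing a keyed performer layer over a separately rendered background; the array shapes and values below are assumptions for the sketch, not details from either reference.

    import numpy as np

    def composite_performance(background, performer, alpha):
        """Alpha-composite a captured performer layer over a separately rendered
        background frame. Inputs are float arrays in [0, 1]; shapes are assumed
        to be (H, W, 3) for the images and (H, W, 1) for the matte."""
        return alpha * performer + (1.0 - alpha) * background

    # Toy example: a 4x4 frame where the performer occupies only the center pixels.
    h, w = 4, 4
    background = np.zeros((h, w, 3))    # e.g., a separately rendered target scene
    performer = np.ones((h, w, 3))      # e.g., the keyed acting performance
    alpha = np.zeros((h, w, 1))
    alpha[1:3, 1:3, 0] = 1.0            # matte: performer visible only in the middle
    frame = composite_performance(background, performer, alpha)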
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WILLIAM A BEUTEL, whose telephone number is (571) 272-3132. The examiner can normally be reached Monday-Friday, 9:00 AM - 5:00 PM (EST). Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, DANIEL HAJNIK, can be reached at 571-272-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/WILLIAM A BEUTEL/
Primary Examiner, Art Unit 2616

Prosecution Timeline

Aug 02, 2024
Application Filed
Feb 19, 2026
Non-Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12581262
AUGMENTED REALITY INTERACTION METHOD AND ELECTRONIC DEVICE
2y 5m to grant • Granted Mar 17, 2026
Patent 12572258
APPARATUS AND METHOD WITH IMAGE PROCESSING USER INTERFACE
2y 5m to grant • Granted Mar 10, 2026
Patent 12566531
CONFIGURING A 3D MODEL WITHIN A VIRTUAL CONFERENCING SYSTEM
2y 5m to grant • Granted Mar 03, 2026
Patent 12561927
MEDIA RESOURCE DISPLAY METHOD AND APPARATUS, DEVICE, AND STORAGE MEDIUM
2y 5m to grant • Granted Feb 24, 2026
Patent 12554384
SYSTEMS AND METHODS FOR IMPROVED CONTENT EDITING AT A COMPUTING DEVICE
2y 5m to grant • Granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

1-2
Expected OA Rounds
70%
Grant Probability
90%
With Interview (+20.4%)
2y 7m
Median Time to Grant
Low
PTA Risk
Based on 469 resolved cases by this examiner. Grant probability derived from career allow rate.
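If it helps to sanity-check these figures, the with-interview number appears consistent with treating the interview lift as an additive percentage-point adjustment to the base grant probability; that additive reading is an assumption about how the tool combines the values, inferred only from the numbers shown on this page.

    # Assumed relationship: with-interview probability = base probability + interview lift,
    # capped at 100%. Rounding to whole percentage points matches the displayed values.
    base_grant_probability = 70.0   # % (career-derived grant probability shown above)
    interview_lift = 20.4           # percentage points (interview lift shown above)
    with_interview = min(base_grant_probability + interview_lift, 100.0)
    print(round(with_interview))    # -> 90, matching the "With Interview" figure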
