Prosecution Insights
Last updated: April 19, 2026
Application No. 18/782,381

MODULAR OMNIDIRECTIONAL ACTUATED FLOORS PROVIDING A RECONFIGURABLE VIDEO PRODUCTION ENVIRONMENT

Non-Final OA (§103)
Filed: Jul 24, 2024
Examiner: DANG, HUNG Q
Art Unit: 2484
Tech Center: 2400 — Computer Networks
Assignee: Disney Enterprises Inc.
OA Round: 3 (Non-Final)
Grant Probability: 68% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 1m
With Interview: 87%

Examiner Intelligence

Career Allow Rate: 68% (above average; 1257 granted / 1841 resolved; +10.3% vs TC avg)
Interview Lift: +18.3% (strong), across resolved cases with an interview
Typical Timeline: 3y 1m avg prosecution; 95 applications currently pending
Career History: 1936 total applications across all art units
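As a sanity check, the headline figures above can be reproduced from the raw career counts. This is a minimal sketch assuming the "with interview" number is the career allow rate plus the interview lift in percentage points (an additive model is an assumption; the tool does not state how it combines the two):

```python
# Reproduce the examiner stats shown above from the raw counts.
# Assumption: the with-interview figure is the career allow rate plus
# the interview lift in percentage points (simple additive model).

GRANTED = 1257         # career grants
RESOLVED = 1841        # career resolved cases
INTERVIEW_LIFT = 18.3  # percentage-point lift observed with an interview

allow_rate = 100 * GRANTED / RESOLVED         # career allow rate, percent
with_interview = allow_rate + INTERVIEW_LIFT  # assumed additive adjustment

print(f"Career allow rate: {allow_rate:.0f}%")      # 68%
print(f"With interview:    {with_interview:.0f}%")  # 87%
```

Under that assumption the numbers round to exactly the 68% and 87% shown on the card.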

Statute-Specific Performance

§101: 4.2% (-35.8% vs TC avg)
§103: 54.1% (+14.1% vs TC avg)
§102: 23.6% (-16.4% vs TC avg)
§112: 11.6% (-28.4% vs TC avg)

Tech Center averages are estimates. Based on career data from 1841 resolved cases.
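The four "vs TC avg" deltas above are mutually consistent: assuming each delta is a simple difference between the examiner's rate and a single Tech Center baseline (an assumption about how the tool computes these figures), every statute implies the same ~40% baseline:

```python
# Check that each statute's delta is consistent with one Tech Center baseline,
# assuming delta = examiner_rate - tc_avg. All figures are from the table above.

rates  = {"§101": 4.2,   "§103": 54.1, "§102": 23.6,  "§112": 11.6}
deltas = {"§101": -35.8, "§103": 14.1, "§102": -16.4, "§112": -28.4}

# Solving delta = rate - tc_avg for tc_avg gives the implied baseline per statute.
implied_baseline = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(implied_baseline)  # every statute implies a 40.0% baseline
```

That the four implied baselines agree suggests the "black line" is one shared estimate rather than a per-statute average.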

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 10/29/2025 has been entered.

Response to Arguments

Applicant's arguments filed 10/29/2025 have been fully considered but they are not persuasive. On page 7, Applicant argues that:

Regarding Shreve, the Office Action cites camera 122, which is used to track "a current location and orientation of: robotic device 168, user 102, other users in room 110; wall panel 112; floor panel 116; and any mobile components in room 110 (such as chair 164)." Shreve, ¶ [0057]. However, there is no disclosure of modifying the floor panel 116 to move at least one of the camera 122 or the user to position the user within the field of view of the camera 122. Rather, the floor panel 116 is operated to configure physical components (e.g., table 162, chair 164) within the room 110 based on instructions received from the user 102 outside the room 110 or space.

In response, Examiner respectfully disagrees. At least in [0075]-[0076], Shreve teaches:

[0075] Once the system has configured a scene for the user, the user may decide to re-configure the scene, which results in the system performing a scene transition.
Recall that the navigation and tracking module can track the current location and orientation of the physical components (including the pre-configured props and the mobile components), the mobile robotic device, the user, and any other users in the physical space. Thus, the system can communicate and coordinate with the user(s) to suggest where the user should go during the scene transition, e.g., a message that instructs or notifies the user to sit in a particular chair for an estimated period of time. The system can calculate the estimated period of time based on the order and trajectories determined by the layout scheduler and the motion planner, and include that estimated period of time in the notification to the user. The system can also render, in the VR space in an area corresponding to the physical components which are being re-configured, visual cues (such as smoke, fog, a colored indicator, a mist-like shadow, or a cloud) to indicate to the user the area which is to be avoided by the user during this time, i.e., the area in which the physical components are being moved in response to instructions from the user to re-configure the room and the physical components in the room.

[0076] In some embodiments, instead of laying out VR objects to be instantiated in the real world and subsequently rendered in VR, the user can dynamically re-configure the room by physically moving a physical component, and the system can dynamically render, in real-time, the physical component as it is being moved by the user. For example, given a physical space with several chairs and a table (as in FIGS. 4A and 4B), rendered in VR as outdoor patio furniture, the user can be sitting on a chair. The user can slide or shuffle his chair closer to the table. Using the navigation and tracking module, the system can track the movement of all the physical components and users in the room, including the chair and the user.
The system can dynamically, in real-time, render VR imagery which corresponds to the chair being moved by the user. (emphasis added)

Clearly, Shreve teaches that a processor of a system can modify the configuration of the modular floor to move at least the user within the field of view of the camera by: 1) moving physical components and prompting the user to move to a specified location, or 2) moving a physical component which the user is sitting on (a chair, as emphasized in [0076] of Shreve above) to a new location. Examiner further submits that the new locations in either (1) or (2) above are still within the field of view of the tracking camera, i.e. camera 122 as shown in Fig. 1.

On page 7, Applicant also argues that:

In addition, the Office Action cites the screen of the VR device 204. Office Action, p. 8. Unlike claim 1, the cited screen renders VR/AR scene 230 for viewing when the VR device 204 is worn by the user. Thus, the processor of Shreve cannot modify either the floor panel 116 or the floor of Smoot to move at least one of the camera 122 or the user to position the user, relative to the VR device screen, within the field of view of the camera 122.

In response, Examiner respectfully submits that the display as recited would correspond to the display taught by Smoot in [0013], i.e. huge vistas or the display on walls of the physical environment. When combined with the teachings of Smoot, the user is therefore moved relative to the display. As such, Applicant’s arguments are not persuasive.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Smoot et al. (US 2018/0217662 A1 – hereinafter Smoot) and Shreve et al. (US 2020/0193150 A1 – hereinafter Shreve).

Regarding claim 1, Smoot discloses a video production system comprising: a video production environment defining a physical environment and a digital environment both viewable by a plurality of users positioned in the physical environment, wherein the physical environment comprises a display for displaying digital content of the digital environment ([0013] – a video production environment comprising a physical environment in which a plurality of VR participants can physically walk as further shown in Fig. 9, and a digital environment displayed on huge vistas or on walls of the physical environment as VR scenes showing the environment in which each of the VR participants walks, i.e. the physical environment and the digital environment); a modular floor comprising a plurality of tiles configured to move independently to induce or respond to a motion of at least one user in contact with the modular floor ([0046]; [0055]; [0079] – a modular floor comprising a plurality of tiles configured to move independently to induce or respond to a motion for a user or users in contact with the modular floor, e.g.
to move objects in a direction opposite to the one they are attempting to travel to avoid a collision); and a processor configured to modify a configuration of one of the video production environment or the modular floor based on a change of the other of the video production environment or the modular floor ([0013] – a processor configured to modify a configuration of the video production environment by displaying a distant object getting closer to the walking VR participant so that the VR participant has a sensation of walking significant distances in the VR environment without collision with walls or other objects on the modular floor).

However, Smoot does not disclose a camera having a field of view, wherein the processor is configured to modify the configuration of the modular floor to move at least one of the camera or the at least one user to position the at least one user, relative to the display, within the field of view of the camera.

Shreve discloses a camera having a field of view (Fig. 1A; [0057] – camera 122), wherein a processor is configured to modify a configuration of a modular floor to move at least one of the camera or at least one user to position the at least one user within the field of view of the camera ([0075]; [0076] – at least a processor to modify a configuration of a modular floor comprising a plurality of physical components to move at least a user to a new position within the field of view of the camera by either moving physical components and prompting the user to move to a specified location, or moving a physical component which the user is sitting on (a chair as emphasized in [0076]) to the new location).
One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to incorporate the teachings of Shreve into the video production system taught by Smoot to make the environment more immersive by allowing the user to instantiate and to configure the objects in the VR environment. Further, one of ordinary skill in the art would have recognized that the display as recited would correspond to the display taught by Smoot in [0013], i.e. huge vistas or the display on walls of the physical environment. Thus, combined with the teachings of Smoot, the user is therefore moved relative to the display.

Regarding claim 2, Smoot also discloses the video production system of claim 1, further comprising a sensor configured to detect an orientation, position, or movement of the at least one user in contact with the modular floor ([0055] – detecting orientation, position, or movement of the VR participant and other VR participants to independently move the VR participants in opposite directions in order to avoid collisions), wherein the processor is configured to modify the configuration of the video production environment based on the detected orientation, position, or movement of the at least one user ([0013] – a processor configured to modify a configuration of the video production environment by displaying a distant object getting closer to the walking VR participant so that the VR participant has a sensation of walking significant distances in the VR environment without collision with walls or other objects on the modular floor).
Regarding claim 3, Smoot also discloses the display is fixed in the physical environment to display the digital content to the plurality of users, wherein the processor is configured to adjust the digital content based on the detected orientation, position, or movement of the user or users in contact with the modular floor ([0013] – the display on the walls is fixed, and the processor is configured to display a distant object getting closer to the walking VR participant so that the VR participant has a sensation of walking significant distances in the VR environment without collision with walls or other objects on the modular floor).

Regarding claim 4, Smoot also discloses the video production system of claim 1, wherein the video production environment comprises a stage, and wherein the modular floor defines at least a portion of the stage (Figs. 8-9).

Regarding claim 5, Smoot also discloses the video production system of claim 4, wherein the modular floor defines an infinitely adjustable path for the at least one user in contact with the stage ([0013]; [0055] – defining an infinitely adjustable path for the user or users because the user or users can walk indefinitely without collisions).

Regarding claim 6, see the teachings of Smoot and Shreve as discussed in claim 1 above. Smoot also discloses the video production system of claim 1, wherein: the physical environment comprises a digital background ([0013] – any background of the scene displayed in the screen on the VR space walls or in the VR headset); and the processor is configured to modify the configuration of the physical environment to align the at least one user to the digital background based on the detected orientation, position, or movement of the user or users ([0013]; [0046] – based on the detected orientation, position, or movement of the user or users, the processor aligns the user or users to the digital background so that an object within the scene appears closer when the user walks toward the object).
Shreve also discloses the physical environment comprises one or more props ([0073]-[0078]); and a processor is configured to modify the configuration of the modular floor to move the one or more props in alignment with a digital background within the field of view of the camera and based on detected orientation, position, or movement of at least one user ([0073]-[0078] – aligning, by moving, the props within the digital background based on the user’s movements). The motivation for incorporating the teachings of Shreve has been discussed in claim 1 above. Further, in view of the display taught by Smoot displaying the digital background, the combined system would have resulted in the one or more props being moved in alignment with the display.

Regarding claim 7, see the teachings of Smoot and Shreve as discussed in claim 1 above, in which Shreve in view of Smoot also discloses the processor is configured to modify the configuration of the modular floor based on the digital content displayed on a display ([0035]; [0036]; [0040]; Figs. 2A-2B – modifying a modular floor based on the digital background to have steps or stairs, in view of Smoot disclosing the display). One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to incorporate the further teachings of Shreve into the video production system taught by Smoot to make the environment more immersive by allowing the user to use steps or stairs in the VR environment.

Regarding claim 8, Smoot also discloses the video production system of claim 1, wherein the modular floor is selectively actuated to adjust a walking surface for the at least one user ([0088] – adjusting a walking surface for the user by raising a segment or portion of the surface).
Regarding claim 9, Shreve discloses a system comprising: a video production environment comprising a camera having a field of view, the camera is viewable by a plurality of users positioned in the video production environment (Figs. 1A-1C – camera 122), a stage (Fig. 1A-1C – room 110 and room 160), and a display (Fig. 2A-2B – a screen to display the VR/AR scene 230); a modular floor (Figs. 1A-1C – a modular floor having a floor panel 116 that can be raised as shown in Figs. 2A-2B); a sensor configured to detect a characteristic of a user in contact with the modular floor (Figs. 1A-1C; [0057] – a sensor 120 configured to monitor movement of any physical component and a user in a physical space); and a processor in communication with the sensor, wherein the processor is configured to modify a configuration of one of the video production environment or the modular floor based on a change of the other of the video production environment or the modular floor to define or maintain a scene for filming by the camera ([0073]-[0078] – a processor in communication with the sensor to move objects within the digital background based on user’s movements, defining a scene for filming by the camera 122, at least for tracking purpose), and wherein a processor is configured to modify a configuration of a modular floor to move at least one of the camera or at least one user to position the at least one user within the field of view of the camera ([0075]; [0076] – at least a processor to modify a configuration of a modular floor comprising a plurality of physical components to move at least a user to a new position within the field of view of the camera by either moving physical components and prompting the user to move to a specified location, or moving a physical component which the user is sitting on (a chair as emphasized in [0076]) to the new location). 
However, Shreve does not disclose the display is all viewable by a plurality of users positioned in the video production environment; and the modular floor comprising a plurality of tiles configured to move independently to induce or respond to a motion for the user in contact with the modular floor.

Smoot discloses a display all viewable by a plurality of users positioned in the video production environment (Fig. 2A-2B; [0013] – a screen on the VR space walls to display the VR/AR scene 230); and a modular floor comprising a plurality of tiles configured to move independently to induce or respond to a motion for at least one user in contact with the modular floor ([0046]; [0055]; [0079] – a modular floor comprising a plurality of tiles configured to move independently to induce or respond to a motion for a user or users in contact with the modular floor, e.g. to move objects in a direction opposite to the one they are attempting to travel to avoid a collision).

One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to incorporate the teachings of Smoot into the system taught by Shreve to allow the users to walk in the environment without collisions with other objects or users. Thus, in view of Smoot teaching the display as walls of the physical environment or huge vistas, etc. in [0013], the user in Shreve is therefore moved relative to the display.

Regarding claim 10, Shreve in view of Smoot also discloses the system of claim 9, wherein the detected characteristic of the user comprises at least one of an orientation, a position, or a movement of the user in contact with the modular floor (Figs.
1A-1C; [0057] – a sensor 120 configured to monitor movement of any physical component and a user in a physical space), and wherein the processor is configured to modify at least one of the camera or the display based on the detected characteristic of the at least one user ([0057]; [0073]-[0078] – a processor configured to modify the display to display the corresponding VR environment).

Regarding claim 11, Shreve in view of Smoot also discloses the system of claim 9, wherein the modular floor defines at least a portion of a stage (Figs. 1A-1C), and wherein the processor is configured to modify the stage based on an output of the display ([0035]; [0063]-[0066]; Figs. 3A-3C – configuring and instantiating objects in VR).

Regarding claim 12, Shreve in view of Smoot also discloses the system of claim 9, wherein the video production environment defines a physical environment and a digital environment, wherein the physical environment is a same physical space for the plurality of users, and wherein the processor is configured to modify the video production environment to align the physical environment and the digital environment for video production (Figs. 3A-3B – in view of Smoot disclosing the plurality of users in [0013]).
Regarding claim 13, see the teachings of Shreve and Smoot as discussed in claim 12 above, in which Shreve in view of Smoot also discloses the modular floor is configured to move the camera ([0012]; [0057] – in at least one embodiment, the camera is a moveable camera, thus being moved to track the props, the user, etc.), one or more props, and the user, relative to the display, to align the one or more props, the user, and the display within the field of view of the camera ([0075]; [0076] – at least a processor to modify a configuration of a modular floor comprising a plurality of physical components to move at least a user to a new position within the field of view of the camera by either moving physical components and prompting the user to move to a specified location, or moving a physical component which the user is sitting on (a chair as emphasized in [0076]) to the new location). Further, one of ordinary skill in the art would have recognized that the display as recited would correspond to the display taught by Smoot in [0013], i.e. huge vistas or the display on walls of the physical environment. Thus, combined with the teachings of Smoot, the movements of the camera, the user, and one or more props are relative to the display.

Regarding claim 14, Shreve also discloses the system of claim 9, wherein the sensor is configured to track the at least one user relative to the camera ([0057] – sensors worn by the user track the user relative to the camera).

Regarding claim 15, Smoot discloses a video production system comprising: a video production environment ([0013] – a VR environment portraying huge vistas of VR scenes within which a VR participant walks) comprising a stage (Fig.
8), and a display, the display configured to provide a digital background for simultaneous viewing by a plurality of users positioned in the video production environment ([0013] – a screen displaying a digital background of a scene on a VR space so that a plurality of users can view it simultaneously); a modular floor defining at least a portion of the stage (Fig. 8), the modular floor comprising a plurality of tiles configured to move independently to induce or respond to a motion of at least one user in contact with the modular floor ([0046]; [0055]; [0079] – a modular floor comprising a plurality of tiles configured to move independently to induce or respond to a motion for a user or users in contact with the modular floor, e.g. to move objects in a direction opposite to the one they are attempting to travel to avoid a collision); a sensor configured to detect an orientation, a position, or a movement of the at least one user on the modular floor ([0051]; [0055] – a sensor detecting orientation, position, or movement of the VR participant and other VR participants to independently move the VR participants in opposite directions in order to avoid collisions); and a processor configured to modify a configuration of the video production environment based on the detected orientation, position, or movement of the at least one user to align the at least one user to the digital background for video production ([0013]; [0046] – based on the detected orientation, position, or movement of the user or users, the processor aligns the user or users to the digital background so that an object within the scene appears closer when the user walks toward the object).
However, Smoot does not disclose the video production environment comprising a camera having a field of view; and the processor configured to modify a configuration of the video production environment based on the detected orientation, position, or movement of the at least one user to align the camera to the digital background for video production; and modify the configuration of the modular floor to move at least one of the camera or the at least one user to position the at least one user, relative to the display, within the field of view of the camera.

Shreve discloses a video production environment comprising a camera having a field of view (Fig. 1A; [0012]; [0057]; [0073] – camera 122 or movable cameras having a field of view to track the objects and the users), wherein a processor is configured to modify the configuration of the video production environment to align a camera to a digital background based on detected orientation, position, or movement of at least one user (Fig. 1A; [0012]; [0057]; [0073] – at least aligning the field of view of the camera to the user and the physical objects, thus to the corresponding digital background, which is a VR scene where images of the user and the objects appear – since the cameras are used to continuously track the orientation, position, or movement of the users, the current alignment of the cameras is automatically based on the previously detected orientation, position, or movement of the users); and modify a configuration of a modular floor to move at least one of a camera or at least one user to position the at least one user within the field of view of the camera ([0075]; [0076] – at least a processor to modify a configuration of a modular floor comprising a plurality of physical components to move at least a user to a new position within the field of view of the camera by either moving physical components and prompting the user to move to a specified location, or moving a physical component which the user is
sitting on (a chair as emphasized in [0076]) to the new location).

One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to incorporate the teachings of Shreve into the video production system taught by Smoot to make the environment more immersive by allowing the user to instantiate and to configure the objects in the VR environment. Further, one of ordinary skill in the art would have recognized that the display as recited would correspond to the display taught by Smoot in [0013], i.e. huge vistas or the display on walls of the physical environment. Thus, combined with the teachings of Smoot, the user is therefore moved relative to the display taught by Smoot and within the field of view of the camera taught by Shreve.

Regarding claim 16, Smoot in view of Shreve also discloses the video production system of claim 15, wherein the video production environment defines a physical environment and a digital environment, wherein the digital environment comprises the digital background, and wherein the processor is configured to modify the configuration of the video production environment to align the physical environment and the digital environment ([0013] – displaying a distant object closer to a user when the user walks toward the object by walking in the physical environment).

Regarding claim 17, see the teachings of Smoot and Shreve as discussed in claim 16 above. Further, Shreve in view of Smoot also discloses the modular floor is configured to move the camera and the at least one user independently on the stage to align the camera and the at least one user relative to the display ([0046] – moving multiple objects in multiple directions and moving the movable cameras independently, relative to the display taught by Smoot in [0013]).
Regarding claim 18, Smoot in view of Shreve also discloses the video production system of claim 15, wherein the modular floor is configured to position at least one user and one or more objects on the stage ([0046]; Fig. 8). Shreve discloses that the objects are one or more props ([0073]-[0078]). One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to incorporate the further teachings of Shreve into the video production system proposed in claim 15 to make the environment more immersive by allowing the user to use props in a manner that makes the stage more physically realistic.

Regarding claim 19, Smoot in view of Shreve also discloses the video production system of claim 15, wherein the processor is configured to adjust the digital background based on the detected orientation, position, or movement of the at least one user ([0013] – displaying a distant object closer as the user walks toward the object).

Regarding claim 20, Smoot in view of Shreve also discloses the video production system of claim 15, wherein the sensor comprises at least one of a light detection and ranging (LIDAR) system, a second camera, or a wearable motion capture device ([0077]-[0078] – ultrasonic transducers worn by the user).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HUNG Q DANG whose telephone number is (571)270-1116. The examiner can normally be reached on IFT. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Thai Q Tran, can be reached at 571-272-7382.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/HUNG Q DANG/
Primary Examiner, Art Unit 2484

Prosecution Timeline

Jul 24, 2024
Application Filed
Jul 07, 2025
Non-Final Rejection — §103
Jul 17, 2025
Interview Requested
Jul 24, 2025
Examiner Interview Summary
Jul 24, 2025
Applicant Interview (Telephonic)
Jul 29, 2025
Response Filed
Sep 07, 2025
Final Rejection — §103
Oct 29, 2025
Request for Continued Examination
Nov 06, 2025
Response after Non-Final Action
Mar 22, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594460
MANAGING BLOBS FOR TRACKING OF SPORTS PROJECTILES
2y 5m to grant Granted Apr 07, 2026
Patent 12588818
DETECTION OF A MOVABLE OBJECT WHEN 3D SCANNING A RIGID OBJECT
2y 5m to grant Granted Mar 31, 2026
Patent 12592258
METHOD AND APPARATUS FOR INTERACTIVE VIDEO EDITING PLATFORM TO CREATE OVERLAY VIDEOS TO ENHANCE ENTERTAINMENT VIDEO GAMES WITH EDUCATIONAL CONTENT
2y 5m to grant Granted Mar 31, 2026
Patent 12587693
ARTIFICIALLY INTELLIGENT AD-BREAK PREDICTION
2y 5m to grant Granted Mar 24, 2026
Patent 12574649
ENCODING AND DECODING METHOD, ELECTRONIC DEVICE, COMMUNICATION SYSTEM, AND STORAGE MEDIUM
2y 5m to grant Granted Mar 10, 2026
These precedents show what changed to get past this examiner; based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 68%
With Interview: 87% (+18.3%)
Median Time to Grant: 3y 1m
PTA Risk: High

Based on 1841 resolved cases by this examiner. Grant probability derived from career allow rate.
