Prosecution Insights
Last updated: April 19, 2026
Application No. 18/906,007

MIXED REALITY ROBOTIC TRAINING SYSTEM AND METHODS

Status: Non-Final OA (§102, §103)
Filed: Oct 03, 2024
Examiner: OSTROW, ALAN LINDSAY
Art Unit: 3657
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: ABB Schweiz AG
OA Round: 1 (Non-Final)
Grant Probability: 74% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 7m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 74% (26 granted / 35 resolved; +22.3% vs TC avg), above average
Interview Lift: +37.7% across resolved cases with interview, a strong lift
Typical Timeline: 2y 7m average prosecution; 30 applications currently pending
Career History: 65 total applications across all art units
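Both headline figures are simple ratios over the examiner's 35 resolved cases. Below is a minimal sketch of that arithmetic; the per-case interview split is invented for illustration (the page does not publish per-case data), so it lands near, not exactly on, the reported +37.7% lift.

```python
# Hypothetical per-case records (granted?, interviewed?) for 35 resolved cases.
# The 26-granted total matches the page; the interview split is assumed.
resolved = ([(True, True)] * 20 + [(False, True)] * 3
            + [(True, False)] * 6 + [(False, False)] * 6)

allow_rate = sum(g for g, _ in resolved) / len(resolved)  # 26 / 35

with_iv = [g for g, iv in resolved if iv]
without_iv = [g for g, iv in resolved if not iv]
lift = sum(with_iv) / len(with_iv) - sum(without_iv) / len(without_iv)

print(f"career allow rate: {allow_rate:.1%}")  # 74.3%, shown as 74%
print(f"interview lift:    {lift:+.1%}")       # ~+37.0 points with this split
```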

Statute-Specific Performance

§101: 14.0% (-26.0% vs TC avg)
§102: 15.8% (-24.2% vs TC avg)
§103: 57.7% (+17.7% vs TC avg)
§112: 10.4% (-29.6% vs TC avg)

TC averages are estimates; based on career data from 35 resolved cases.
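Each delta above is simply the examiner's rate minus a Tech Center baseline. Back-solving every row yields the same 40.0% baseline (e.g., 14.0% + 26.0% = 40.0%), which suggests a single TC-wide estimate is applied to all four statutes; that inference, and the sketch below, are assumptions rather than anything the page states.

```python
# Examiner-level rates per statute as shown above, compared against the
# TC baseline implied by the page's deltas (back-solved, not sourced).
examiner_rate = {"101": 0.140, "102": 0.158, "103": 0.577, "112": 0.104}
tc_estimate = 0.400  # every row back-solves to 40.0%

for statute, rate in examiner_rate.items():
    delta = rate - tc_estimate
    print(f"§{statute}: {rate:.1%} ({delta:+.1%} vs TC avg)")
```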

Office Action (Non-Final): §102, §103

DETAILED ACTION

Status of Claims

Claims 1-20 are currently pending and have been examined in this application. This Non-Final communication is the first action on the merits. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-2, 4-7, 11, and 13-14 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Fattey (US 20200306974 A1).

Claim 1: Fattey teaches the following limitations:

A method of operating a mixed reality robotic training system, the mixed reality robotic training system comprising (Fattey – [0024] The virtual representation 110 can include an optional motion base and 3D virtual reality similar to that found in modern gaming systems. The displayed virtual representation 110 includes of virtual objects of the robot machine 102 and the surrounding environment with full physical feature descriptions, such as trees, rocks, roads, people, and landscapes. As with modern mixed reality systems, the virtual representation 110 combines live images with simulated 3D objects in a seamless manner….)

a first computing device and a mixed reality device, the method comprising: obtaining the first computing device, wherein the first computing device comprises a processor, a non-transitory computer-readable medium having stored thereon instructions executable by the processor, (Fattey – [0004] … at least one robot processor operable to maintain a robot simulation model of the robot machine and the environment surrounding the robot machine, and the robot processor operable to communicate with the at least one sensor and to maneuver the machine body in the environment surrounding the robot machine as a function of the sensed data from the at least one sensor. The system further includes at least one user processor operable to maintain a user simulation model of the robot machine and the environment surrounding the robot machine, the at least one user processor being remote from the robot machine, and the at least one user processor operable to communicate with the robot machine. …)

a first program, a first display, and a user input interface; (Fattey – [0004] … The system still further includes at least one user interface including a user control operable to receive user commands and to transmit the user commands to the user simulation model, a display operable to display a virtual representation of the user simulation model, wherein the user simulation model receives user commands from the user interface and outputs virtual representation updates to the user interface based on the received user commands, and wherein the user simulation model receives sensed data from the at least one sensor and outputs robot virtual representation updates to the user interface based on the received sensed data. …)

obtaining the mixed reality device, wherein the mixed reality device comprises a camera and a second display; (Fattey – [0024] … As with modern mixed reality systems, the virtual representation 110 combines live images with simulated 3D objects in a seamless manner. Embodiments of virtual representation 110 can be displayed in one or more television displays or in an immersive, wrap-around screens would be a set of virtual reality googles with head and eye tracking.)

receiving, by the first computing device, a first input on the user input interface to control an operation of the mixed reality robot; determining, by the first computing device, a first instruction to control the mixed reality robot based on the first input; (Fattey – [0023] … The user inputs at the user interface 106 are transmitted to both server 104 and robot machine 102. The user inputs in combination with the simulation model 112 and the robot machine model 114 determine movement of the robot machine 102. The transmitted user inputs will travel to robot machine 102 through communication link 108. The time it takes the user inputs to travel to robot machine 102 through communication link 108 will be different (e.g., greater than) the time that a user will perceive the maneuvers on the virtual robot machine in the virtual representation 110 and the time to server 104.)

transmitting, by the first computing device, the first instruction to the mixed reality device; and displaying, by the mixed reality device, the operation of the mixed reality robot based on the first instruction. (Fattey – See Figure 1; [0024] The virtual representation 110 can include an optional motion base and 3D virtual reality similar to that found in modern gaming systems. The displayed virtual representation 110 includes of virtual objects of the robot machine 102 and the surrounding environment with full physical feature descriptions, such as trees, rocks, roads, people, and landscapes. As with modern mixed reality systems, the virtual representation 110 combines live images with simulated 3D objects in a seamless manner. Embodiments of virtual representation 110 can be displayed in one or more television displays or in an immersive, wrap-around screens would be a set of virtual reality googles with head and eye tracking.)

Examiner Note (see Figures 1 and 2):
- Virtual Representation 110 corresponds to the Mixed Reality Device
- Robot Machine Model 114 corresponds to the Mixed Reality Robot
- Simulation Model 112 corresponds to the Simulated Robot
- Robot Machine 102 corresponds to the Industrial Robot

Claim 2: Fattey teaches the following limitations:

The method of claim 1, the method further comprises: displaying, by the first computing device, a first image identifying the first computing device on the first display; capturing, by the mixed reality device, the first image with the camera; and connecting the mixed reality device to the first computing device. (Fattey – [0025] The user interface 106 includes at least one processor, at least one memory operable to store computer program instructions. User interface 106 can also include user controls 122, a display 120, and a haptic feedback device 124. The user controls 122 and the haptic feedback device 124 can be any combination of joysticks, pedals, or application-specific interfaces (including finger and hand interfaces as in surgical robots). Embodiments of user interface 106 are operable to allow a user to manipulate the virtual representation 110 of the robot machine 102 and also is operable to provide haptic feedback to the user through haptic feedback device 124 based on movement of the virtual representation 110 of robot machine 102. User inputs from user interface 106 are transmitted as user command data 140 to server 104 and through communications link 108 to robot machine 102.)

Claim 4: Fattey teaches the following limitations:

The method of claim 1, wherein determining a first instruction to control the mixed reality robot based on the first input further comprises: determining, by the first computing device, a first instruction to control the operation of the simulated robot; (Fattey – [0028] The sensor feedback data from the plurality of sensors 116 is transmitted from robot machine 102 to the server 104 via communication link 108. The data transfer through communication link 108 results in a time delay between when the data is transmitted by robot machine 102 and when it is received by server 104. After passing through communications link 108, the sensor feedback data interacts with the simulation model 112. …; [0029] User command data 140, sensor feedback data 142, and state estimation data 144 drive the simulation model 112 including the simulation model 112 robot and the environment surrounding the robot in the simulation model 112 reference frame.)

determining, by the first computing device, a second instruction to control the operation of the mixed reality robot based on the first instruction. (Fattey – [0038] … The user command is also sent to server 104 via communication line 150. The robot machine 102 combines the robot simulation model 114, sensor feedback data 142 from the plurality of sensors 116 having information regarding the status of the robot machine 102 and the surrounding environment to determine an updated robot machine model 114. The robot machine 102 with the at least one robot processor 128 and at least one robot memory 130 will determine whether there is a difference between the status of the robot machine 102 and the determined updated robot machine model 114. If there is a difference, the at least one robot processor 128 will cause the robot machine 102 to move such that the robot machine 102 is configured the same as the updated robot machine model 114 and no difference exists.)
Claim 5: Fattey teaches the following limitations:

The method of claim 4, wherein the robotic training system further comprises a second computing device, wherein the second computing device comprises a processor, a non-transitory computer-readable medium having stored thereon instructions executable by the processor, a second program, and wherein the method further comprises: obtaining the second computing device, connecting the first computing device to the second computing device; receiving, by the second computing device, the first instruction from the first computing device; and (Fattey – See Figures 1 and 2; [0045] Computer program instructions 211, 260, 233 are assumed to include program instruction that, when executed by the associated processor 207, 256, 228 enable each device to operate in accordance with embodiments of the present disclosure, as detailed above. In these regards, embodiments of this disclosure may be implemented at least in part by computer software stored on computer-readable memories 258, 209, 230 which is executable by processors 207, 256, 228, or by hardware, or by a combination of tangibly stored software and hardware (and tangibly stored firmware); [0046] Various embodiments of the computer readable memory 209, 258, 230 include any data storage technology type which is suitable to the local technical environment, … Various embodiments of the processors 207, 256, 228 include but are not limited to general purpose computers, special purpose computers, microprocessors, digital signal processors (DSPs) and multi-core processors.)

displaying, by the second computing device, the operation of the simulated robot in the second program based on the first instruction. (Fattey – [0007] … the user simulation model as a virtual representation of a robot machine and an environment surrounding the robot machine, the virtual representation based on the simulation model. …; [0023] The virtual representation 110 reference frame is driven by the simulation model 112 in the simulation model 112 reference frame. The virtual representation 110 provides a display of a "virtual" robot machine and its "virtual" surrounding environment. A user interacts with the virtual representation 110 with user interface 106 through user inputs to the user interface 106. …; see also Figure 1)

Claim 6: Fattey teaches the following limitations:

The method of claim 5, wherein the robotic training system further comprises an industrial robot, and the method further comprises: connecting the industrial robot to the second computing device; controlling, by the second computing device, the operation of the industrial robot based on the first instruction from the first computing device. (Fattey – [0020] Referring to FIG. 1, shown is a signaling diagram of an exemplary system suitable for use in practicing exemplary embodiments of this disclosure. Illustrated in FIG. 1 is teleoperation system 100 including robot machine 102, server 104, user interface 106, and communication link 108 …; [0021] The robot machine model 114 reference frame describes the physical robot machine 102 as well as the environment surrounding the robot machine and any physical objects surrounding the robot machine 102. The robot machine is operable to perform user-controlled functions and contains a plurality of sensors 116 operable to sense the orientation and location of the robot machine and the physical environment surrounding the robot machine 102. Exemplary sensors 116 include those found in self-driving cars and the typical construction site geosystems.)

Claim 7: Fattey teaches the following limitations:

The method of claim 5, wherein the method further comprises: receiving, by the second computing device, a second input at the second computing device; determining, by the second computing device, a second instruction to control the operation of the simulated robot; (Fattey – [0028] The sensor feedback data from the plurality of sensors 116 is transmitted from robot machine 102 to the server 104 via communication link 108. The data transfer through communication link 108 results in a time delay between when the data is transmitted by robot machine 102 and when it is received by server 104. After passing through communications link 108, the sensor feedback data interacts with the simulation model 112. …; [0029] User command data 140, sensor feedback data 142, and state estimation data 144 drive the simulation model 112 including the simulation model 112 robot and the environment surrounding the robot in the simulation model 112 reference frame.)

transmitting, by the second computing device, the second instruction to the first computing device to control the operation of the simulated robot and the mixed reality robot; and (Fattey – [0038] … The user command is also sent to server 104 via communication line 150. The robot machine 102 combines the robot simulation model 114, sensor feedback data 142 from the plurality of sensors 116 having information regarding the status of the robot machine 102 and the surrounding environment to determine an updated robot machine model 114. The robot machine 102 with the at least one robot processor 128 and at least one robot memory 130 will determine whether there is a difference between the status of the robot machine 102 and the determined updated robot machine model 114. If there is a difference, the at least one robot processor 128 will cause the robot machine 102 to move such that the robot machine 102 is configured the same as the updated robot machine model 114 and no difference exists.)

controlling, by the first computing device, the operation of the simulated robot and the mixed reality robot based on the second instruction. (Fattey – [0020] Referring to FIG. 1, shown is a signaling diagram of an exemplary system suitable for use in practicing exemplary embodiments of this disclosure. Illustrated in FIG. 1 is teleoperation system 100 including robot machine 102, server 104, user interface 106, and communication link 108. Teleoperation system 100 includes three reference frames—a virtual representation 110, the simulation model 112, and the robot machine model 114.; [0022] The simulation model 112 reference frame describes a mathematical model of the robot machine 102 and its surrounding environment. These mathematical models maybe be based on both physics and other phenomenological effects. An optimal state estimator 118 accompanies the simulation models 112 to provide real-time system state estimation and to mitigate drift between the simulation model 112 and the robot machine model 114. …)

Claim 11: Fattey teaches the following limitations:

A method for remote operation of a mixed reality robot training system, the robotic training system comprising a first computing device and a mixed reality device, (Fattey – [0024] The virtual representation 110 can include an optional motion base and 3D virtual reality similar to that found in modern gaming systems. The displayed virtual representation 110 includes of virtual objects of the robot machine 102 and the surrounding environment with full physical feature descriptions, such as trees, rocks, roads, people, and landscapes. As with modern mixed reality systems, the virtual representation 110 combines live images with simulated 3D objects in a seamless manner….)

the first computing device comprising a processor, a non-transitory computer readable medium having stored thereon instructions executable by the processor, a first program, and a first display, and (Fattey – [0004] … at least one robot processor operable to maintain a robot simulation model of the robot machine and the environment surrounding the robot machine, and the robot processor operable to communicate with the at least one sensor and to maneuver the machine body in the environment surrounding the robot machine as a function of the sensed data from the at least one sensor. The system further includes at least one user processor operable to maintain a user simulation model of the robot machine and the environment surrounding the robot machine, the at least one user processor being remote from the robot machine, and the at least one user processor operable to communicate with the robot machine. …)

wherein the mixed reality device comprises camera and a second display, the first computing device and the mixed reality device at a first location, the method comprising: (Fattey – [0024] … As with modern mixed reality systems, the virtual representation 110 combines live images with simulated 3D objects in a seamless manner. Embodiments of virtual representation 110 can be displayed in one or more television displays or in an immersive, wrap-around screens would be a set of virtual reality googles with head and eye tracking.)

receiving, by the first computing device, an input to command a simulated robot of the first program; determining, by the first computing device, a first instruction to control the operation of the simulated robot and (Fattey – [0007] … the user simulation model as a virtual representation of a robot machine and an environment surrounding the robot machine, the virtual representation based on the simulation model. …; [0023] The virtual representation 110 reference frame is driven by the simulation model 112 in the simulation model 112 reference frame. The virtual representation 110 provides a display of a "virtual" robot machine and its "virtual" surrounding environment. A user interacts with the virtual representation 110 with user interface 106 through user inputs to the user interface 106. …; see also Figure 1)

a second instruction to control the operation of the mixed reality robot based on the input; controlling, by the first computing device, the operation of the simulated robot based on the first instruction and the operation of the mixed reality robot based on the second instruction; (Fattey – [0023] … The user inputs at the user interface 106 are transmitted to both server 104 and robot machine 102. The user inputs in combination with the simulation model 112 and the robot machine model 114 determine movement of the robot machine 102. The transmitted user inputs will travel to robot machine 102 through communication link 108. The time it takes the user inputs to travel to robot machine 102 through communication link 108 will be different (e.g., greater than) the time that a user will perceive the maneuvers on the virtual robot machine in the virtual representation 110 and the time to server 104.)

transmitting, by the first computing device, the second instructions to the mixed reality device to display the operation of the mixed reality robot in a scene of the second display. (Fattey – See Figure 1; [0024] The virtual representation 110 can include an optional motion base and 3D virtual reality similar to that found in modern gaming systems. The displayed virtual representation 110 includes of virtual objects of the robot machine 102 and the surrounding environment with full physical feature descriptions, such as trees, rocks, roads, people, and landscapes. As with modern mixed reality systems, the virtual representation 110 combines live images with simulated 3D objects in a seamless manner. Embodiments of virtual representation 110 can be displayed in one or more television displays or in an immersive, wrap-around screens would be a set of virtual reality googles with head and eye tracking.)

Examiner Note (see Figures 1 and 2):
- Virtual Representation 110 corresponds to the Mixed Reality Device
- Robot Machine Model 114 corresponds to the Mixed Reality Robot
- Simulation Model 112 corresponds to the Simulated Robot
- Robot Machine 102 corresponds to the Industrial Robot

Claim 13: Fattey teaches the following limitations:

The method of claim 11, wherein the robotic training system further comprises a second computing device, wherein the second computing device comprises a processor, a non-transitory computer readable medium having stored thereon instructions executable by the processor, a second program, and (Fattey – See Figures 1 and 2; [0045] Computer program instructions 211, 260, 233 are assumed to include program instruction that, when executed by the associated processor 207, 256, 228 enable each device to operate in accordance with embodiments of the present disclosure, as detailed above. In these regards, embodiments of this disclosure may be implemented at least in part by computer software stored on computer-readable memories 258, 209, 230 which is executable by processors 207, 256, 228, or by hardware, or by a combination of tangibly stored software and hardware (and tangibly stored firmware); [0046] Various embodiments of the computer readable memory 209, 258, 230 include any data storage technology type which is suitable to the local technical environment, … Various embodiments of the processors 207, 256, 228 include but are not limited to general purpose computers, special purpose computers, microprocessors, digital signal processors (DSPs) and multi-core processors.)

a third display, and wherein the method further comprises: (Fattey – [0043] … User interface 206 includes a display 220 operable for displaying a virtual representation and user controls 222 for interacting with and manipulating the virtual representation. …)

connecting the first computing device to the second computing device; receiving, by the second computing device, a second input to control the simulated robot and the mixed reality robot; determining, by the second computing device, a third instruction to control the operation of the simulated robot based on the input and a fourth instruction to control the operation of the mixed reality robot; transmitting, by the second computing device, the third instruction and fourth instruction to the first computing device; (Fattey – [0040] Upon receipt of the robot machine model 114 and sensor feedback data 142, server 104 will, in combination with the OSE 118, combine the simulation model 112, the sensor feedback data 142, and user commands to determine an updated simulation model 112. The updated simulation model 112 will be transmitted to the robot machine 102 through communications link 108 via communication line 154. The simulation model 112 is also sent to the user interface 106 via communication line 156. The received sensor feedback data 142 is sent from server 104 to user interface 106 via communication line 158.)

controlling, by the first computing device, the operation of the simulated robot and the mixed reality robot based on the third and fourth instructions. (Fattey – [0048] Next, at block 308 the user interface manipulates the virtual representation in response to the user inputs. Embodiments of manipulating includes moving the virtual robot machine throughout the virtual environment and/or moving portions of the virtual robot machine (e.g., a robot tool, shovel, hoe, etc.). At block 310, the user interface transmits the user inputs to a robot machine and to a server. At block 312, the user interface receives an update simulation model from the server. The process is then repeated at block 304 except for the fact that the user interface now displays the updated simulation model rather than the simulation model. …)

Claim 14: Fattey teaches the following limitations:

The method of claim 13, wherein the robotic training system further comprises an industrial robot, and wherein the method further comprises: connecting, by the second computing device, to the industrial robot; wherein transmitting the third instruction to the first computing device further comprises transmitting the third instruction to the industrial robot to control the industrial robot. (Fattey – [0020] Referring to FIG. 1, shown is a signaling diagram of an exemplary system suitable for use in practicing exemplary embodiments of this disclosure. Illustrated in FIG. 1 is teleoperation system 100 including robot machine 102, server 104, user interface 106, and communication link 108 …; [0021] The robot machine model 114 reference frame describes the physical robot machine 102 as well as the environment surrounding the robot machine and any physical objects surrounding the robot machine 102. The robot machine is operable to perform user-controlled functions and contains a plurality of sensors 116 operable to sense the orientation and location of the robot machine and the physical environment surrounding the robot machine 102. Exemplary sensors 116 include those found in self-driving cars and the typical construction site geosystems.)

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 3, 12, 17, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Fattey (US 20200306974 A1) as modified by Yoshida (US 20200101599 A1).

Claim 3: Fattey does not explicitly teach the following limitations; however, Yoshida teaches:

The method of claim 1, further comprising: displaying, by the mixed reality device, a command log of the first computing device in the scene of the mixed reality robot. (Yoshida – See Figure 12; [0058] After virtual model 22a of the tool is moved to the desired position (e.g., the teaching point) by the above gesture motion, user 12 may perform the air-tap motion, etc., so that model selection menu 50 is switched to a program edit screen 82, as shown in FIG. 12.)

Therefore, prior to the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Fattey to include a program edit screen displaying the log of commands as taught in Yoshida. Providing the user with a view displaying the currently implemented commands during the operation of the mixed reality or virtual robot allows the user to make more informed decisions regarding the safe and efficient operation of the mixed reality robot, which in turn improves the operations of the mixed reality, simulated, and industrial robots.

Claim 12: Fattey does not explicitly teach the following limitations; however, Yoshida teaches:

The method of claim 11, further comprising: displaying, by the mixed reality device, a log of the instructions of the first program in the scene of the second display. (Yoshida – See Figure 12; [0058] After virtual model 22a of the tool is moved to the desired position (e.g., the teaching point) by the above gesture motion, user 12 may perform the air-tap motion, etc., so that model selection menu 50 is switched to a program edit screen 82, as shown in FIG. 12.)

Therefore, prior to the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Fattey to include a program edit screen displaying the log of commands as taught in Yoshida. Providing the user with a view displaying the currently implemented commands during the operation of the mixed reality or virtual robot allows the user to make more informed decisions regarding the safe and efficient operation of the mixed reality robot, which in turn improves the operations of the mixed reality, simulated, and industrial robots.
Claim 17: Fattey teaches the following limitations:

A system comprising: a first computing device, the first computing device comprising: a processor, a non-transitory computer readable medium having stored thereon instructions executable by the processor, (Fattey – [0004] … at least one robot processor operable to maintain a robot simulation model of the robot machine and the environment surrounding the robot machine, and the robot processor operable to communicate with the at least one sensor and to maneuver the machine body in the environment surrounding the robot machine as a function of the sensed data from the at least one sensor. The system further includes at least one user processor operable to maintain a user simulation model of the robot machine and the environment surrounding the robot machine, the at least one user processor being remote from the robot machine, and the at least one user processor operable to communicate with the robot machine. …)

a first program, the first program comprising a user input interface, a simulated robot, and (Fattey – [0007] … the user simulation model as a virtual representation of a robot machine and an environment surrounding the robot machine, the virtual representation based on the simulation model. …; [0023] The virtual representation 110 reference frame is driven by the simulation model 112 in the simulation model 112 reference frame. The virtual representation 110 provides a display of a "virtual" robot machine and its "virtual" surrounding environment. A user interacts with the virtual representation 110 with user interface 106 through user inputs to the user interface 106. …; see also Figure 1)

a mixed reality robot, (Fattey – See Figure 1; [0024] The virtual representation 110 can include an optional motion base and 3D virtual reality similar to that found in modern gaming systems. The displayed virtual representation 110 includes of virtual objects of the robot machine 102 and the surrounding environment with full physical feature descriptions, such as trees, rocks, roads, people, and landscapes. As with modern mixed reality systems, the virtual representation 110 combines live images with simulated 3D objects in a seamless manner. Embodiments of virtual representation 110 can be displayed in one or more television displays or in an immersive, wrap-around screens would be a set of virtual reality googles with head and eye tracking.)

a mixed reality device, the mixed reality device comprising: camera, a second display; (Fattey – [0024] … As with modern mixed reality systems, the virtual representation 110 combines live images with simulated 3D objects in a seamless manner. Embodiments of virtual representation 110 can be displayed in one or more television displays or in an immersive, wrap-around screens would be a set of virtual reality googles with head and eye tracking.)

wherein the first computing device determines a first instruction to control the simulated robot based on at least one input at the user input interface; (Fattey – [0007] … the user simulation model as a virtual representation of a robot machine and an environment surrounding the robot machine, the virtual representation based on the simulation model. …; [0023] The virtual representation 110 reference frame is driven by the simulation model 112 in the simulation model 112 reference frame. The virtual representation 110 provides a display of a "virtual" robot machine and its "virtual" surrounding environment. A user interacts with the virtual representation 110 with user interface 106 through user inputs to the user interface 106. …; see also Figure 1)

wherein the first computing device determines a second instruction to control the mixed reality robot based on the at least one input; (Fattey – [0023] … The user inputs at the user interface 106 are transmitted to both server 104 and robot machine 102. The user inputs in combination with the simulation model 112 and the robot machine model 114 determine movement of the robot machine 102. The transmitted user inputs will travel to robot machine 102 through communication link 108. The time it takes the user inputs to travel to robot machine 102 through communication link 108 will be different (e.g., greater than) the time that a user will perceive the maneuvers on the virtual robot machine in the virtual representation 110 and the time to server 104.)

wherein the first computing device transmits the second instruction to the mixed reality device to display the mixed reality robot on the second display. (Fattey – See Figure 1; [0024] The virtual representation 110 can include an optional motion base and 3D virtual reality similar to that found in modern gaming systems. The displayed virtual representation 110 includes of virtual objects of the robot machine 102 and the surrounding environment with full physical feature descriptions, such as trees, rocks, roads, people, and landscapes. As with modern mixed reality systems, the virtual representation 110 combines live images with simulated 3D objects in a seamless manner. Embodiments of virtual representation 110 can be displayed in one or more television displays or in an immersive, wrap-around screens would be a set of virtual reality googles with head and eye tracking.)

Fattey does not explicitly teach the following limitations; however, Yoshida teaches:

a touchscreen display; (Yoshida – [0057] In the above gesture motion, user 12 may open or close own hand, perform the air-tap, pinch the object with the finger, etc. In this regard, user 12 can perform another gesture motion for operating the model. Further, user 12 can perform an operation other than the gesture motion, by using a teach pendant or a pointing device such as a joystick or a touch panel, etc.)

Therefore, prior to the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Fattey to include touchscreens where needed to enhance user interaction with the mixed reality, simulation, and robot control systems as taught in Yoshida. Including touchscreens where needed provides enhanced capabilities for the user to interact with the mixed reality robot, simulation robot, and robot control systems.

Examiner Note (see Figures 1 and 2):
- Virtual Representation 110 corresponds to the Mixed Reality Device
- Robot Machine Model 114 corresponds to the Mixed Reality Robot
- Simulation Model 112 corresponds to the Simulated Robot
- Robot Machine 102 corresponds to the Industrial Robot

Claim 18: Fattey teaches the following limitations:

The system of claim 17, wherein the system further comprises: an industrial robot; and (Fattey – [0020] Referring to FIG. 1, shown is a signaling diagram of an exemplary system suitable for use in practicing exemplary embodiments of this disclosure. Illustrated in FIG. 1 is teleoperation system 100 including robot machine 102, server 104, user interface 106, and communication link 108 …)

a second computing device; wherein the second computing device comprises: a processor, a non-transitory computer readable medium having stored thereon instructions executable by the processor, a second program wherein the second program comprises a second user input interface; wherein the second computing device comprises being communicatively connected to the first computing device; wherein the first computing device further comprises determining a third instruction based on the at least one input, and wherein the first computing device transmits the third instruction to the second computing device; (Fattey – See Figures 1 and 2; [0045] Computer program instructions 211, 260, 233 are assumed to include program instruction that, when executed by the associated processor 207, 256, 228 enable each device to operate in accordance with embodiments of the present disclosure, as detailed above. In these regards, embodiments of this disclosure may be implemented at least in part by computer software stored on computer-readable memories 258, 209, 230 which is executable by processors 207, 256, 228, or by hardware, or by a combination of tangibly stored software and hardware (and tangibly stored firmware); [0046] Various embodiments of the computer readable memory 209, 258, 230 include any data storage technology type which is suitable to the local technical environment, … Various embodiments of the processors 207, 256, 228 include but are not limited to general purpose computers, special purpose computers, microprocessors, digital signal processors (DSPs) and multi-core processors.)

wherein the third instructions control an operation of the industrial robot. (Fattey – [0025] … Embodiments of user interface 106 are operable to allow a user to manipulate the virtual representation 110 of the robot machine 102 and also is operable to provide haptic feedback to the user through haptic feedback device 124 based on movement of the virtual representation 110 of robot machine 102. User inputs from user interface 106 are transmitted as user command data 140 to server 104 and through communications link 108 to robot machine 102.; [0026] Embodiments of robot machine 102 are operable to move and interact with its environment. Robot machine 102 can include at least one robot motor 126, at least one robot processor 128, at least one robot memory 130 operable for storing computer program instructions, a plurality of sensors 116, and at least one robot device 132. The plurality of sensors 116 generate sensor feedback data 142 regarding the environment surrounding robot machine 102, the location of robot machine 102 relative to the surrounding environment and the state, status, and location of the at least one robot device 132.)

Claims 8-10 and 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over Fattey (US 20200306974 A1) as modified by Guerin (US 20160257000 A1).

Claim 8: Fattey does not explicitly teach the following limitations; however, Guerin teaches:

The method of claim 1, further comprising: capturing, by the mixed reality device, a plurality of images of an object in the scene of the mixed reality robot and transmitting the plurality of images to the first computing device; (Guerin – [0031] … augmented reality where visual images and/or audio may be introduced or projected into the real-world, and/or augmented virtuality where real world objects are represented in a virtual world. An immersive virtual environment (IVE), Immersive Virtual Robotic Environment (IVRE), or Virtual Reality Environment (VRE) may be any immersive digital workspace. This may include stereoscopic 3D displays on a monitor as well as VR headsets (such as the Oculus Rift™) or augmented reality (AR) devices such as Google Glass™. These may contain 3D models, point clouds, meshes, as well as articulated 3D avatars or proxies for both the user or any other kinematic system (such as a robot).)

determining, by the first computing device, an interaction between the object and the mixed reality robot based on the plurality of images; (Guerin – [0044] FIG. 2 uses, for example, Oculus Rift™, which is a head-mounted display (HMD) that includes head tracking for virtual environments, similar tools may be used. FIG. 2 depicts virtual environment 200 (e.g., IVRE) containing a hand avatar 220, selectable options, and virtual robot 210.)

determining, by the first computing device, a third instruction based on an interaction; controlling, by the first computing device, the operation of the simulated robot and the operation of the mixed reality robot based on the third instruction; displaying, by the mixed reality device, the operation of the mixed reality robot based on the third instruction. (Guerin – [0045] In an embodiment, an IVRE may provide, for example, the ability to control (e.g., move in real-time or near real time), program (e.g., assign movement patterns for later execution), or collaborate (e.g., interact with autonomous robot behavior) with a robotic system (either remote or locally) using a VRE via, for example, a 3D avatar of the robotic system 210. Further, the ability to, in a VRE, use a combination of first person or third person user perspective to control, program or collaborate with the robot 210. For example, the first person mode may involve moving a robot's end effector around with the appearance to the user that the robot's "hand" is their hand. In this case, the user's avatar would be co-located with the robot's graphical representation in a manner to suggest the two are as one. Third person mode may involve grabbing the robot's effector and moving it around as if the user was manipulation a tool.)

Examiner Note: In the examination of claims 8-10, the following correspondence applies:
- A representation of the user's hand corresponds to the Object
- A gesture of the hand corresponds to the Interaction
- The defined edge of a robot model corresponds to the Boundary

Therefore, prior to the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Fattey to include a means of interacting with robot models and other items of interest in the mixed reality environment by simulating a human hand which can select and move items in the mixed reality system as taught in Guerin. Having the ability to simulate a human hand (such as the user's hand) that can interact with the mixed reality robot and other surrounding items allows the user to more accurately and intuitively make changes to robot operations via the mixed reality environment.

Claim 9: Fattey does not explicitly teach the following limitations; however, Guerin teaches:

The method of claim 8, further comprising: determining, by the first computing device, a boundary associated with the mixed reality robot, displaying, by the mixed reality device, the boundary when the object approaches the boundary; (Guerin – [0056] … For example, when the user places his or her hand avatar 220 near an interactive entity, the distance will be calculated between the users hand avatar and the entity. Should that distance be below some threshold, (e.g. such as the radius of the sphere enclosing the entity) that entity will be "selected", which is displayed as a change in color or transparency, change in entity shape, or configuration. Selection can also occur via or via "picking" (i.e. GL pick) where a entity is considered selected when the user's avatar occludes the entity from the visual perspective of the user. Selected items, for example, can then be "grabbed" or "pressed" by activating a button on the controller (e.g., Hydra Wand™). …)

wherein determining the interaction between the object and the mixed reality robot comprises determining the object interaction with the boundary. (Guerin – [0056] … In another embodiment, entities can be selected by a ray emanating from the user's hand avatar. When this ray intersects the entity, that entity is now "selected". Additionally, an entity can be selected by a rigid body collision of the user's hand avatar and the entity, assuming the environment supports collision detection.; [0070] In another embodiment, to address the transfer of task knowledge (such as motion trajectories) to robot 110 via the user 390 (e.g., 340 in FIG. 3), as well as continuous control signals to robot 110 (as during teleoperation), IVRE supports simulated kinesthetic control of robot 110 via the proxy robot 210. To move robot 110, the user places his or her virtual hand 220 within a specific radius of interaction around the robot's 210 end effector (marked by a translucent sphere). …)

Therefore, prior to the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Fattey to include a means of interacting with robot models and other items of interest in the mixed reality environment by simulating a human hand which can select and move items in the mixed reality system as taught in Guerin. Having the ability to simulate a human hand (such as the user's hand) that can interact with the mixed reality robot and other surrounding items allows the user to more accurately and intuitively make changes to robot operations via the mixed reality environment.

Claim 10: Fattey does not explicitly teach the following limitations; however, Guerin teaches:

The method of claim 8, further comprising: wherein the object comprises a hand of a user; and (Guerin – [0044] … FIG. 2 depicts virtual environment 200 (e.g., IVRE) containing a hand avatar 220, selectable options, and virtual robot 210.)

wherein the interaction comprises a hand gesture. (Guerin – [0070] … The user can also move the robot using input from a VUI, or from gestures made in the virtual environment, such as a "pushing away" gesture that causes the robot to retreat from the user. …)

Therefore, prior to the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Fattey to include a means of interacting with robot models and other items of interest in the mixed reality environment by simulating a human hand which can select and move items in the mixed reality system as taught in Guerin. Having the ability to simulate a human hand (such as the user's hand) that can interact with the mixed reality robot and other surrounding items allows the user to more accurately and intuitively make changes to robot operations via the mixed reality environment.

Claim 15: Fattey does not explicitly teach the following limitations; however, Guerin teaches:

The method of claim 11, wherein receiving, by the first computing device, the input to command the simulated robot of the first program further comprises: determining, by the first computing device, a boundary associated with the mixed reality robot; (Guerin – [0045] In an embodiment, an IVRE may provide, for example, the ability to control (e.g., move in real-time or near real time), program (e.g., assign movement patterns for later execution), or collaborate (e.g., interact with autonomous robot behavior) with a robotic system (either remote or locally) using a VRE via, for example, a 3D avatar of the robotic system 210. Further, the ability to, in a VRE, use a combination of first person or third person user perspective to control, program or collaborate with the robot 210. For example, the first person mode may involve moving a robot's end effector around with the appearance to the user that the robot's "hand" is their hand. In this case, the user's avatar would be co-located with the robot's graphical representation in a manner to suggest the two are as one. Third person mode may involve grabbing the robot's effector and moving it around as if the user was manipulation a tool.)

receiving, by the first computing device, a plurality of images of an object and the mixed reality robot captured by the camera; (Guerin – [0044] FIG. 2 uses, for example, Oculus Rift™, which is a head-mounted display (HMD) that includes head tracking for virtual environments, similar tools may be used. FIG. 2 depicts virtual environment 200 (e.g., IVRE) containing a hand avatar 220, selectable options, and virtual robot 210.)

determining, by the first computing device, an interaction between the object and the boundary. (Guerin – [0056] … In another embodiment, entities can be selected by a ray emanating from the user's hand avatar. When this ray intersects the entity, that entity is now "selected". Additionally, an entity can be selected by a rigid body collision of the user's hand avatar and the entity, assuming the environment supports collision detection.; [0070] In another embodiment, to address the transfer of task knowledge (such as motion trajectories) to robot 110 via the user 390 (e.g., 340 in FIG. 3), as well as continuous control signals to robot 110 (as during teleoperation), IVRE supports simulated kinesthetic control of robot 110 via the proxy robot 210. To move robot 110, the user places his or her virtual hand 220 within a specific radius of interaction around the robot's 210 end effector (marked by a translucent sphere). …)
Examiner Note: In the examination of claims 15-16, the following correspondence applies:
- A representation of the user's hand corresponds to the Object
- A gesture of the hand corresponds to the Interaction
- The defined edge of a robot model corresponds to the Boundary

Therefore, prior to the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Fattey to include a means of interacting with robot models and other items of interest in the mixed reality environment by simulating a human hand which can select and move items in the mixed reality system as taught in Guerin. Having the ability to simulate a human hand (such as the user's hand) that can interact with the mixed reality robot and other surrounding items allows the user to more accurately and intuitively make changes to robot operations via the mixed reality environment.

Claim 16: Fattey does not explicitly teach the following limitations; however, Guerin teaches:

The method of claim 15, wherein the object comprises a hand of a user, and (Guerin – [0044] … FIG. 2 depicts virtual environment 200 (e.g., IVRE) containing a hand avatar 220, selectable options, and virtual robot 210.)

wherein the interaction comprises a hand gesture. (Guerin – [0070] … The user can also move the robot using input from a VUI, or from gestures made in the virtual environment, such as a "pushing away" gesture that causes the robot to retreat from the user. …)

Therefore, prior to the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Fattey to include a means of interacting with robot models and other items of interest in the mixed reality environment by simulating a human hand which can select and move items in the mixed reality system as taught in Guerin. Having the ability to simulate a human hand (such as the user's hand) that can interact with the mixed reality robot and other surrounding items allows the user to more accurately and intuitively make changes to robot operations via the mixed reality environment.

Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Fattey (US 20200306974 A1) as modified by Yoshida (US 20200101599 A1) in view of Kelch (US 20220388171 A1).

Claim 19: Fattey in combination with Yoshida does not explicitly teach the following limitations; however, Kelch teaches:

The system of claim 17, further comprising: a first calibration marker disposed at a first position in a scene of the user; and a second calibration marker disposed at a second position in a scene of the user; wherein the first position and the second position comprise being positioned a fixed distance apart, the fixed distance determined by the first computing device; (Kelch – [0033] … By way of example, if the object 136a in the workcell 130 is a table, the robot arm 131 can interact with the four corners of the tabletop, and on the basis of the locations of the points of contact at each corner, and the force measurements at each location, generate a representation of the entire plane of the tabletop. The engine 160 can generate a marker in the representation for each of the four corners of the tabletop and connect the markers by a line, thereby generating a depiction of the plane of the tabletop. …)

wherein the at least one input comprises initiating a calibration of (Kelch – [0025] … In response to the task command 175, the execution engine 170 can execute a calibration program that instructs the robot arm 131 to, e.g., move within the workcell 130 and record locations in the workcell 130 at which the force sensor 135 of the robot arm 131 makes a contact with the object 136. …) the simulated robot and the mixed reality robot;

wherein the mixed reality robot moves from the first position to the second position, the first computing device calibration the simulated robot and the mixed reality robot based on the mixed reality robot moving from the first position to the second position. (Kelch – [0035] … The workcell simulation 140 can include any of: one or more virtual robots 141 that represent the one or more physical robots (e.g., the robot arm 131, or other robots) in the workcell 130, one or more virtual objects 146 that represent the one or more physical objects 136 in the workcell 130, and one or more virtual sensors that represent physical sensors (e.g., the force sensor 135, or other sensors) in the workcell 130. The workcell simulation 140 can be simulated according to one or more simulation parameters. The simulation parameters can include parameters that define dimensions, or other physical parameters, of any of: the one or more virtual robots 141, …)

Therefore, prior to the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Fattey and Yoshida to include a system of virtual environment calibration which includes having a virtual robot trace and measure virtual markers in a virtual environment as taught in Kelch. Having the ability to calibrate the simulated robot environment with the mixed reality robot environment ensures the accurate and coordinated movement of the robot as represented in mixed reality, in simulation, and ultimately in the real industrial environment.

Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Fattey (US 20200306974 A1) as modified by Yoshida (US 20200101599 A1) in view of Guerin (US 20160257000 A1).

Claim 20: Fattey in combination with Yoshida does not explicitly teach the following limitations; however, Guerin teaches:

The system of claim 19, further comprising: wherein the mixed reality robot comprises being defined by a boundary; and wherein an interaction between an object and the boundary comprises the at least one input. (Guerin – [0056] … In another embodiment, entities can be selected by a ray emanating from the user's hand avatar. When this ray intersects the entity, that entity is now "selected". Additionally, an entity can be selected by a rigid body collision of the user's hand avatar and the entity, assuming the environment supports collision detection.; [0070] In another embodiment, to address the transfer of task knowledge (such as motion trajectories) to robot 110 via the user 390 (e.g., 340 in FIG. 3), as well as continuous control signals to robot 110 (as during teleoperation), IVRE supports simulated kinesthetic control of robot 110 via the proxy robot 210. To move robot 110, the user places his or her virtual hand 220 within a specific radius of interaction around the robot's 210 end effector (marked by a translucent sphere). …)

Examiner Note: In the examination of claim 20, the following correspondence applies:
- A representation of the user's hand corresponds to the Object
- A gesture of the hand corresponds to the Interaction
- The defined edge of a robot model corresponds to the Boundary

Therefore, prior to the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Fattey and Yoshida to include a means of interacting with robot models and other items of interest in the mixed reality environment by simulating a human hand which can select and move items in the mixed reality system as taught in Guerin. Having the ability to simulate a human hand (such as the user's hand) that can interact with the mixed reality robot and other surrounding items allows the user to more accurately and intuitively make changes to robot operations via the mixed reality environment.

Conclusion

The prior art made of record and not relied upon that is considered pertinent to applicant's disclosure or directed to the state of the art is listed on the enclosed PTO-892. The following is a brief description of relevant prior art that was cited but not applied:

Wang (US 20190202055 A1) describes a robot training system comprising a mixed reality display device structured to superimpose a virtual scene on a real-world view of a real-world scene including a plurality of physical objects including an industrial robot, a video input device, and a computing device.

Yamazaki (US 20200133002 A1) describes a display system which displays an image on a scene in a real space in an overlapped manner. The system causes a processor to display a guide image indicating a direction set based on a robot to correspond to a robot arranged in the real space.

Svensson (US 20100262288 A1) describes a method and a system for facilitating calibration of a robot cell including one or more objects (8) and an industrial robot (1, 2, 3) performing work in connection to the objects, wherein the robot cell is programmed by means of an off-line programming tool including a graphical component for generating 2D or 3D graphics based on graphical models of the objects.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALAN LINDSAY OSTROW, whose telephone number is (703) 756-1854. The examiner can normally be reached M-F, 8-5. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Adam Mott, can be reached at (571) 270-5376. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ALAN LINDSAY OSTROW/
Examiner, Art Unit 3657

/ADAM R MOTT/
Supervisory Patent Examiner, Art Unit 3657
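Stripped of the citations, independent claim 1 is a small message-passing loop: the first computing device receives a user input, determines an instruction, and transmits it to the mixed reality device, which renders the result. The sketch below restates that claimed flow, which can help when checking the examiner's element-by-element mapping onto Fattey's user interface 106 and virtual representation 110; all names are hypothetical, and this is an illustration of the claim language, not the applicant's or Fattey's implementation.

```python
# Illustrative restatement of the claim 1 method steps; names are invented.
from dataclasses import dataclass


@dataclass
class Instruction:
    command: str


class MixedRealityDevice:
    def display(self, instruction: Instruction) -> None:
        # "displaying, by the mixed reality device, the operation of the
        # mixed reality robot based on the first instruction"
        print(f"rendering mixed reality robot: {instruction.command}")


class FirstComputingDevice:
    def __init__(self, mr_device: MixedRealityDevice) -> None:
        self.mr_device = mr_device

    def determine_instruction(self, user_input: str) -> Instruction:
        # "determining ... a first instruction to control the mixed reality
        # robot based on the first input"
        return Instruction(command=f"move:{user_input}")

    def on_user_input(self, user_input: str) -> None:
        # "receiving ... a first input on the user input interface" followed by
        # "transmitting ... the first instruction to the mixed reality device"
        self.mr_device.display(self.determine_instruction(user_input))


FirstComputingDevice(MixedRealityDevice()).on_user_input("joint1+10deg")
```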

Prosecution Timeline

Oct 03, 2024: Application Filed
Jan 29, 2026: Non-Final Rejection under §102 and §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12583119: TRANSFER SYSTEM AND TRANSFER METHOD
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12576525: ROBOT SYSTEM
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12569989: ESTIMATION DEVICE, ESTIMATION METHOD, ESTIMATION PROGRAM, AND ROBOT SYSTEM
Granted Mar 10, 2026 (2y 5m to grant)

Patent 12539611: ROBOT CONTROL APPARATUS, ROBOT CONTROL SYSTEM, AND ROBOT CONTROL METHOD
Granted Feb 03, 2026 (2y 5m to grant)

Patent 12491627: INFORMATION PROCESSING APPARATUS AND COOKING SYSTEM
Granted Dec 09, 2025 (2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 74%
With Interview: 99% (+37.7%)
Median Time to Grant: 2y 7m
PTA Risk: Low

Based on 35 resolved cases by this examiner. Grant probability derived from career allow rate.
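The 99% figure is consistent with the 74% base rate shifted by the +37.7-point interview lift and capped for display. How the dashboard actually combines these numbers is not disclosed, so the sketch below assumes a simple additive model with a 99% ceiling.

```python
# Projection arithmetic under an assumed additive model with a display cap.
base_probability = 0.74   # career allow rate
interview_lift = 0.377    # percentage points, from resolved cases
DISPLAY_CAP = 0.99        # the page never shows a probability above 99%

with_interview = min(base_probability + interview_lift, DISPLAY_CAP)
print(f"with interview: {with_interview:.0%}")  # 99%
```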
