DETAILED ACTION
Claims 1-20 are presented for examination. This Office action is responsive to the submission filed on 11/14/2022.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 11/14/2022 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Drawings
The drawings filed on 11/14/2022 are acceptable for examination proceedings.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 5, 7-8, 12, 14-15, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Gallala, A.; Kumar, A.A.; Hichri, B.; Plapper, P. Digital Twin for Human–Robot Interactions by Means of Industry 4.0 Enabling Technologies. Sensors 2022, 22, 4950. https://doi.org/10.3390/s22134950 (hereinafter referred to as “Gallala”), in view of Kuts, V., Modoni, G.E., Terkaj, W., Tähemaa, T., Sacco, M., Otto, T. (2017). Exploiting Factory Telemetry to Support Virtual Reality Simulation in Robotics Cell (hereinafter referred to as “Kuts”).
Claim 1:
Gallala teaches “A method for evaluating remote commands, the method comprising: receiving data for a physical ecosystem; generating a digital twin of the physical ecosystem based on the data received;” (Gallala teaches a human robot interaction method which includes collecting data from equipment in order to create a merged real and virtual world including a digital twin of the robot in Gallala [Page 6 Section 4 - Page 7] "The proposed DT-HRI method aims to improve upon the current human–robot interactions, robot simulation and robot programming method due to the use of emerging technologies that can be used to fit into smart factory exigencies and demands. For this use case, the system was composed of two pieces of physical equipment: a collaborative robot and an MR head-mounted device (MR-HMD). The physical world consisted of human beings, one or more collaborative robots, an MR-HMD for each user and, when needed, application-related objects (e.g., pieces to assemble or objects to pick and place). The digital world, on the other hand, contained the digital twin model of the robot, additional virtual objects and the user interactive interface (UII). The digital world could recognize and consider human gestures, voices and movements. A third-party engine was responsible for the data collection and processing, along with the generation of the algorithm. The HRI-DT framework is illustrated in Figure 2. After connecting the different systems, data were collected from the equipment, sensors and devices, as described in Table 2. After that, a mixed world that recognized the physical robot, related objects and human gestures was created by merging real and virtual worlds, which projected the digital twin, the immersive user interactive interface and a selection of virtual objects."),
“identifying one or more correlated activities using the digital twin;” (Gallala teaches that during step four of the digital twin framework, it processes commands and the data collected from sensors in order to generate motions, trajectories, and visual feedback in Gallala [Page 6, second paragraph] "Step Four (“Process”): After setting up all of the environmental components, communication and data collection and the combination of both worlds into one environment, the system is ready to operate. The Process step is for the analytical and processing brain of the system, as the name indicates. At this level, data processing and algorithm generation occurs, based on user behavior and objectives. It has two input resources: the data that are collected from sensors and objects and stored and the data that are received from the intentions and behaviors of the user. As an output, this step translates commands into the specified language of each component and generates motions, trajectories and visual feedback. It requires high computational capabilities, an embedded platform and AI-based functionalities. Meanwhile, it also uses an IoT bridge for internal and external data transmission and exchange."; Gallala teaches an autonomous decision maker which generates a robot trajectory after collecting data i.e. the trajectory is correlated with a command for movement in Gallala [Page 11, paragraphs 3-4] "The autonomous decision-maker broker, which is presented in Figure 4, comprised an ROS-based system running on a separate computer and had several roles. It was composed of different ROS nodes, one for each service. The motion planning node trained the models to generate the robot trajectory after collecting data, using AI algorithms and the MoveIt Motion Planner [45]. An AI-based algorithm was integrated into the system and used in the use case. The selected algorithm was a deep conventional neural network (DCNN), which was adapted from [46].
It aimed to train the Cartesian position and orientation of the virtual objects during simulation using domain randomization in order to execute simple pick and place tasks. Since simulation was executed in the real world, the environmental parameters, as well as surrounding objects, could change at any time. Thus, the DCNN domain randomization-based method was optimal to cover as many of the attainable scenarios as possible. Two phases were required to implement this method: (1) the collection of a large amount of data from the simulation; (2) the training of the DCNN model."
[Image: media_image1.png (greyscale)]
),
“receiving a … command;” (Gallala teaches the processor receiving the intentions of the user, i.e., a command, in Gallala [Page 7, second bullet point] "Autonomous Decision-Making: A broker–processor received the intentions of the user, analyzed the environmental status of both worlds and then generated algorithms for either an action to be taken by a physical entity or feedback to be shown to the user. The decision that was made was first simulated in the digital world before being transmitted to a physical entity after approval."), and
“and determining whether to execute the remote command by simulating the remote command using the digital twin and the one or more correlated activities.” (Gallala teaches that a simulation of the planned movements will be executed and once the operator agrees, it can be validated by the user i.e. the user determines whether to execute the command in Gallala [Page 13, paragraphs 1-3] "Simulation: The simulation phase is not mandatory, but it is one of the benefits of the proposed method. Users could visualize the planned movements being executed on the virtual robot before they were transmitted to the real robot. Simulation is beneficial while working in hybrid teams. It reduces the defects of the robot and nearby equipment and maintains operator safety. It also permits real-time simulation in real environments, which cannot be achieved through traditional simulation methods that use monitors or VR-based methods. 7. Validation: Once the operator agreed to a simulation, it could be validated by pressing the “Apply Movements” button on the interface. Validation meant that the simulated movements could be transmitted and executed by the real robot in the real world. 8. Outcome Transmission: After the validation, the simulated outcomes were transferred to the broker using ROSbridge, where they were processed and transformed into robotic commands.").
Gallala does not appear to explicitly teach “receiving a remote command.” However, Kuts teaches this limitation (Kuts teaches using VR tools to control the robotic system remotely in Kuts [Page 220] "Remote Online Monitoring of the Robot Cell. Leveraging the VR tools, the robotic system can be accessed remotely from any geographical point, giving the control over the processes. Operators exploiting clothes/glasses/tools RFID sensors can move around the robot cell with their ordinary daily routine. Data from the sensors are transferred towards the VR environment. In order to have an update about the presence information of the operators, machine vision cameras and laser scanners are also being used.").
Gallala and Kuts are analogous art because they are from the same field of endeavor of using virtual/mixed reality to monitor and control manufacturing equipment. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Gallala and Kuts before him/her, to modify the digital twin for human–robot interactions of Gallala to include the remote control of Kuts, because doing so would allow for evaluating possible system reconfigurations and would allow a user to remotely control the system, as described in Kuts [Page 213] "One the technologies that can benefit from the DT is Virtual Reality (VR), which provides a virtual and realistic view of the environment where the flow of real-time and historical data is integrated with the human presence. In particular, if VR is integrated and connected with the DT, it can be exploited to:
• evaluate possible system reconfigurations via simulation (passive mode);
• remotely control the system (active mode).”
Claim 5:
Gallala in view of Kuts teaches “The method of claim 1, wherein simulating the remote command using the digital twin utilizes one or more machine learning algorithms.” (Gallala teaches that a neural network is used to determine the robot trajectory in Gallala [Page 11, paragraphs 3-4] "The autonomous decision-maker broker, which is presented in Figure 4, comprised an ROS-based system running on a separate computer and had several roles. It was composed of different ROS nodes, one for each service. The motion planning node trained the models to generate the robot trajectory after collecting data, using AI algorithms and the MoveIt Motion Planner [45]. An AI-based algorithm was integrated into the system and used in the use case. The selected algorithm was a deep conventional neural network (DCNN), which was adapted from [46]. It aimed to train the Cartesian position and orientation of the virtual objects during simulation using domain randomization in order to execute simple pick and place tasks. Since simulation was executed in the real world, the environmental parameters, as well as surrounding objects, could change at any time. Thus, the DCNN domain randomization-based method was optimal to cover as many of the attainable scenarios as possible. Two phases were required to implement this method: (1) the collection of a large amount of data from the simulation; (2) the training of the DCNN model.").
Claim 7:
Gallala in view of Kuts teaches “The method of claim 1, wherein the remote command is received from a user using a user interface, the user being a remote worker.” (Kuts teaches using VR tools to control the robotic system remotely in Kuts [Page 220] "Remote Online Monitoring of the Robot Cell. Leveraging the VR tools, the robotic system can be accessed remotely from any geographical point, giving the control over the processes. Operators exploiting clothes/glasses/tools RFID sensors can move around the robot cell with their ordinary daily routine. Data from the sensors are transferred towards the VR environment. In order to have an update about the presence information of the operators, machine vision cameras and laser scanners are also being used.").
Claim 8:
Gallala teaches “A computer system for evaluating remote commands, comprising: one or more processors, one or more computer-readable memories, one or more computer-readable tangible storage medium, and program instructions stored on at least one of the one or more tangible storage medium for execution by at least one of the one or more processors via at least one of the one or more memories” (Gallala teaches that the autonomous decision-maker broker is running on a separate computer i.e. the system for receiving data, simulating the movements, and validation of the movements are performed on a computer which has a processor, memory, a storage medium, and program instructions in Gallala [Page 11, second paragraph] "The autonomous decision-maker broker, which is presented in Figure 4, comprised an ROS-based system running on a separate computer and had several roles. It was composed of different ROS nodes, one for each service. The motion planning node trained the models to generate the robot trajectory after collecting data, using AI algorithms and the MoveIt Motion Planner [45]."),
“wherein the computer system is capable of performing a method comprising: receiving data for a physical ecosystem; generating a digital twin of the physical ecosystem based on the data received;” (Gallala teaches a human robot interaction method which includes collecting data from equipment in order to create a merged real and virtual world including a digital twin of the robot in Gallala [Page 6 Section 4 - Page 7] "The proposed DT-HRI method aims to improve upon the current human–robot interactions, robot simulation and robot programming method due to the use of emerging technologies that can be used to fit into smart factory exigencies and demands. For this use case, the system was composed of two pieces of physical equipment: a collaborative robot and an MR head-mounted device (MR-HMD). The physical world consisted of human beings, one or more collaborative robots, an MR-HMD for each user and, when needed, application-related objects (e.g., pieces to assemble or objects to pick and place). The digital world, on the other hand, contained the digital twin model of the robot, additional virtual objects and the user interactive interface (UII). The digital world could recognize and consider human gestures, voices and movements. A third-party engine was responsible for the data collection and processing, along with the generation of the algorithm. The HRI-DT framework is illustrated in Figure 2. After connecting the different systems, data were collected from the equipment, sensors and devices, as described in Table 2. After that, a mixed world that recognized the physical robot, related objects and human gestures was created by merging real and virtual worlds, which projected the digital twin, the immersive user interactive interface and a selection of virtual objects."),
“identifying one or more correlated activities using the digital twin;” (Gallala teaches that during step four of the digital twin framework, it processes commands and the data collected from sensors in order to generate motions, trajectories, and visual feedback in Gallala [Page 6, second paragraph] "Step Four (“Process”): After setting up all of the environmental components, communication and data collection and the combination of both worlds into one environment, the system is ready to operate. The Process step is for the analytical and processing brain of the system, as the name indicates. At this level, data processing and algorithm generation occurs, based on user behavior and objectives. It has two input resources: the data that are collected from sensors and objects and stored and the data that are received from the intentions and behaviors of the user. As an output, this step translates commands into the specified language of each component and generates motions, trajectories and visual feedback. It requires high computational capabilities, an embedded platform and AI-based functionalities. Meanwhile, it also uses an IoT bridge for internal and external data transmission and exchange."; Gallala teaches an autonomous decision maker which generates a robot trajectory after collecting data i.e. the trajectory is correlated with a command for movement in Gallala [Page 11, paragraphs 3-4] "The autonomous decision-maker broker, which is presented in Figure 4, comprised an ROS-based system running on a separate computer and had several roles. It was composed of different ROS nodes, one for each service. The motion planning node trained the models to generate the robot trajectory after collecting data, using AI algorithms and the MoveIt Motion Planner [45]. An AI-based algorithm was integrated into the system and used in the use case. The selected algorithm was a deep conventional neural network (DCNN), which was adapted from [46].
It aimed to train the Cartesian position and orientation of the virtual objects during simulation using domain randomization in order to execute simple pick and place tasks. Since simulation was executed in the real world, the environmental parameters, as well as surrounding objects, could change at any time. Thus, the DCNN domain randomization-based method was optimal to cover as many of the attainable scenarios as possible. Two phases were required to implement this method: (1) the collection of a large amount of data from the simulation; (2) the training of the DCNN model."),
“receiving a … command;” (Gallala teaches the processor receiving the intentions of the user, i.e., a command, in Gallala [Page 7, second bullet point] "Autonomous Decision-Making: A broker–processor received the intentions of the user, analyzed the environmental status of both worlds and then generated algorithms for either an action to be taken by a physical entity or feedback to be shown to the user. The decision that was made was first simulated in the digital world before being transmitted to a physical entity after approval."), and
“and determining whether to execute the remote command by simulating the remote command using the digital twin and the one or more correlated activities.” (Gallala teaches that a simulation of the planned movements will be executed and once the operator agrees, it can be validated by the user i.e. the user determines whether to execute the command in Gallala [Page 13, paragraphs 1-3] "Simulation: The simulation phase is not mandatory, but it is one of the benefits of the proposed method. Users could visualize the planned movements being executed on the virtual robot before they were transmitted to the real robot. Simulation is beneficial while working in hybrid teams. It reduces the defects of the robot and nearby equipment and maintains operator safety. It also permits real-time simulation in real environments, which cannot be achieved through traditional simulation methods that use monitors or VR-based methods. 7. Validation: Once the operator agreed to a simulation, it could be validated by pressing the “Apply Movements” button on the interface. Validation meant that the simulated movements could be transmitted and executed by the real robot in the real world. 8. Outcome Transmission: After the validation, the simulated outcomes were transferred to the broker using ROSbridge, where they were processed and transformed into robotic commands.").
Gallala does not appear to explicitly teach “receiving a remote command.” However, Kuts teaches this limitation (Kuts teaches using VR tools to control the robotic system remotely in Kuts [Page 220] "Remote Online Monitoring of the Robot Cell. Leveraging the VR tools, the robotic system can be accessed remotely from any geographical point, giving the control over the processes. Operators exploiting clothes/glasses/tools RFID sensors can move around the robot cell with their ordinary daily routine. Data from the sensors are transferred towards the VR environment. In order to have an update about the presence information of the operators, machine vision cameras and laser scanners are also being used.").
Gallala and Kuts are analogous art because they are from the same field of endeavor of using virtual/mixed reality to monitor and control manufacturing equipment. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Gallala and Kuts before him/her, to modify the digital twin for human–robot interactions of Gallala to include the remote control of Kuts, because doing so would allow for evaluating possible system reconfigurations and would allow a user to remotely control the system, as described in Kuts [Page 213] "One the technologies that can benefit from the DT is Virtual Reality (VR), which provides a virtual and realistic view of the environment where the flow of real-time and historical data is integrated with the human presence. In particular, if VR is integrated and connected with the DT, it can be exploited to:
• evaluate possible system reconfigurations via simulation (passive mode);
• remotely control the system (active mode).”
Claim 12:
The limitations of claim 12 are substantially the same as those of claim 5, and claim 12 is rejected for the same reasons.
Claim 14:
The limitations of claim 14 are substantially the same as those of claim 7, and claim 14 is rejected for the same reasons.
Claim 15:
Gallala teaches “A computer program product for evaluating remote commands, comprising: one or more non-transitory computer-readable storage media and program instructions stored on at least one of the one or more tangible storage media,” (Gallala teaches that the autonomous decision-maker broker is running on a separate computer i.e. the system for receiving data, simulating the movements, and validation of the movements are performed on a computer which has program instructions stored on a storage media in Gallala [Page 11, second paragraph] "The autonomous decision-maker broker, which is presented in Figure 4, comprised an ROS-based system running on a separate computer and had several roles. It was composed of different ROS nodes, one for each service. The motion planning node trained the models to generate the robot trajectory after collecting data, using AI algorithms and the MoveIt Motion Planner [45]."),
“the program instructions executable by a processor to cause the processor to perform a method comprising: receiving data for a physical ecosystem; generating a digital twin of the physical ecosystem based on the data received;” (Gallala teaches that the autonomous decision-maker broker is running on a separate computer i.e. the system for receiving data, simulating the movements, and validation of the movements are performed on a computer which has a processor in Gallala [Page 11, second paragraph] "The autonomous decision-maker broker, which is presented in Figure 4, comprised an ROS-based system running on a separate computer and had several roles. It was composed of different ROS nodes, one for each service. The motion planning node trained the models to generate the robot trajectory after collecting data, using AI algorithms and the MoveIt Motion Planner [45]."; Gallala teaches a human robot interaction method which includes collecting data from equipment in order to create a merged real and virtual world including a digital twin of the robot in Gallala [Page 6 Section 4 - Page 7] "The proposed DT-HRI method aims to improve upon the current human–robot interactions, robot simulation and robot programming method due to the use of emerging technologies that can be used to fit into smart factory exigencies and demands. For this use case, the system was composed of two pieces of physical equipment: a collaborative robot and an MR head-mounted device (MR-HMD). The physical world consisted of human beings, one or more collaborative robots, an MR-HMD for each user and, when needed, application-related objects (e.g., pieces to assemble or objects to pick and place). The digital world, on the other hand, contained the digital twin model of the robot, additional virtual objects and the user interactive interface (UII). The digital world could recognize and consider human gestures, voices and movements. 
A third-party engine was responsible for the data collection and processing, along with the generation of the algorithm. The HRI-DT framework is illustrated in Figure 2. After connecting the different systems, data were collected from the equipment, sensors and devices, as described in Table 2. After that, a mixed world that recognized the physical robot, related objects and human gestures was created by merging real and virtual worlds, which projected the digital twin, the immersive user interactive interface and a selection of virtual objects."),
“identifying one or more correlated activities using the digital twin;” (Gallala teaches that during step four of the digital twin framework, it processes commands and the data collected from sensors in order to generate motions, trajectories, and visual feedback in Gallala [Page 6, second paragraph] "Step Four (“Process”): After setting up all of the environmental components, communication and data collection and the combination of both worlds into one environment, the system is ready to operate. The Process step is for the analytical and processing brain of the system, as the name indicates. At this level, data processing and algorithm generation occurs, based on user behavior and objectives. It has two input resources: the data that are collected from sensors and objects and stored and the data that are received from the intentions and behaviors of the user. As an output, this step translates commands into the specified language of each component and generates motions, trajectories and visual feedback. It requires high computational capabilities, an embedded platform and AI-based functionalities. Meanwhile, it also uses an IoT bridge for internal and external data transmission and exchange."; Gallala teaches an autonomous decision maker which generates a robot trajectory after collecting data i.e. the trajectory is correlated with a command for movement in Gallala [Page 11, paragraphs 3-4] "The autonomous decision-maker broker, which is presented in Figure 4, comprised an ROS-based system running on a separate computer and had several roles. It was composed of different ROS nodes, one for each service. The motion planning node trained the models to generate the robot trajectory after collecting data, using AI algorithms and the MoveIt Motion Planner [45]. An AI-based algorithm was integrated into the system and used in the use case. The selected algorithm was a deep conventional neural network (DCNN), which was adapted from [46].
It aimed to train the Cartesian position and orientation of the virtual objects during simulation using domain randomization in order to execute simple pick and place tasks. Since simulation was executed in the real world, the environmental parameters, as well as surrounding objects, could change at any time. Thus, the DCNN domain randomization-based method was optimal to cover as many of the attainable scenarios as possible. Two phases were required to implement this method: (1) the collection of a large amount of data from the simulation; (2) the training of the DCNN model."),
“receiving a … command;” (Gallala teaches the processor receiving the intentions of the user, i.e., a command, in Gallala [Page 7, second bullet point] "Autonomous Decision-Making: A broker–processor received the intentions of the user, analyzed the environmental status of both worlds and then generated algorithms for either an action to be taken by a physical entity or feedback to be shown to the user. The decision that was made was first simulated in the digital world before being transmitted to a physical entity after approval."), and
“and determining whether to execute the remote command by simulating the remote command using the digital twin and the one or more correlated activities.” (Gallala teaches that a simulation of the planned movements will be executed and once the operator agrees, it can be validated by the user i.e. the user determines whether to execute the command in Gallala [Page 13, paragraphs 1-3] "Simulation: The simulation phase is not mandatory, but it is one of the benefits of the proposed method. Users could visualize the planned movements being executed on the virtual robot before they were transmitted to the real robot. Simulation is beneficial while working in hybrid teams. It reduces the defects of the robot and nearby equipment and maintains operator safety. It also permits real-time simulation in real environments, which cannot be achieved through traditional simulation methods that use monitors or VR-based methods. 7. Validation: Once the operator agreed to a simulation, it could be validated by pressing the “Apply Movements” button on the interface. Validation meant that the simulated movements could be transmitted and executed by the real robot in the real world. 8. Outcome Transmission: After the validation, the simulated outcomes were transferred to the broker using ROSbridge, where they were processed and transformed into robotic commands.").
Gallala does not appear to explicitly teach “receiving a remote command.” However, Kuts teaches this limitation (Kuts teaches using VR tools to control the robotic system remotely in Kuts [Page 220] "Remote Online Monitoring of the Robot Cell. Leveraging the VR tools, the robotic system can be accessed remotely from any geographical point, giving the control over the processes. Operators exploiting clothes/glasses/tools RFID sensors can move around the robot cell with their ordinary daily routine. Data from the sensors are transferred towards the VR environment. In order to have an update about the presence information of the operators, machine vision cameras and laser scanners are also being used.").
Gallala and Kuts are analogous art because they are from the same field of endeavor of using virtual/mixed reality to monitor and control manufacturing equipment. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Gallala and Kuts before him/her, to modify the digital twin for human–robot interactions of Gallala to include the remote control of Kuts, because doing so would allow for evaluating possible system reconfigurations and would allow a user to remotely control the system, as described in Kuts [Page 213] "One the technologies that can benefit from the DT is Virtual Reality (VR), which provides a virtual and realistic view of the environment where the flow of real-time and historical data is integrated with the human presence. In particular, if VR is integrated and connected with the DT, it can be exploited to:
• evaluate possible system reconfigurations via simulation (passive mode);
• remotely control the system (active mode).”
Claim 19:
The limitations of claim 19 are substantially the same as claim 5 and it is rejected for the same reasons.
Claims 2, 9, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Gallala, A.; Kumar, A.A.; Hichri, B.; Plapper, P. Digital Twin for Human–Robot Interactions by Means of Industry 4.0 Enabling Technologies. Sensors 2022, 22, 4950. https://doi.org/10.3390/s22134950 (hereinafter referred to as “Gallala”), in view of Kuts, V., Modoni, G.E., Terkaj, W., Tähemaa, T., Sacco, M., Otto, T. (2017). Exploiting Factory Telemetry to Support Virtual Reality Simulation in Robotics Cell (hereinafter referred to as “Kuts”), further in view of Deutsch et al. (US20200183717A1).
Claim 2:
Gallala in view of Kuts teaches “The method of claim 1” as described above. Neither Gallala nor Kuts appears to explicitly teach “further comprising: determining the remote command cannot be executed based on the simulation of the remote command using the digital twin; notifying a user the remote command cannot be executed; and providing one or more recommendations to the user.” However, Deutsch does teach this claim limitation (Deutsch teaches a digital twin may generate an alert based on a change in operating characteristics of an asset i.e. a remote command and that the digital twin may provide suggestions on actions to take to resolve the issue in Deutsch [0039] "For example, a digital twin may generate an alert or other warning based on a change in operating characteristics of the asset. The alert may be due to an issue with a component of the asset. In addition to the alert, the contextual digital twin may generate context that is associated with the alert. For example, the contextual digital twin may determine similar issues that have previously occurred with the asset, provide a description of what caused those similar issues, what was done to address the issues, and differences between the current issue and the previous issues, and the like. As another example, the context can provide suggestions about actions to take to resolve the current issue."; Deutsch teaches inputting a command in Deutsch [0057] "Furthermore, an operation of the assets 110 may be enhanced or otherwise controlled by a user inputting commands though an application hosted by the cloud platform 120 or other remote host platform such as a web server. The data provided from the assets 110 may include time-series data or other types of data associated with the operations being performed by the assets 110").
Gallala, Kuts, and Deutsch are analogous art because they are from the same field of endeavor of using digital twins to monitor and/or control equipment. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having teachings of Gallala, Kuts, and Deutsch before him/her, to modify the teachings of a Digital Twin for Human–Robot Interactions of Gallala, as modified to include the remote control of Kuts, to include the generation of alerts and suggestions on how to overcome the issue of Deutsch because adding the Contextual digital twin runtime environment of Deutsch would allow for improving operational performance and additional context to trigger actions as described in Deutsch [0042-0043] "While progress with industrial and machine automation has been made over the last several decades, and assets have become ‘smarter,’ the intelligence of any individual asset pales in comparison to intelligence that can be gained when multiple smart devices are connected together, for example, in the cloud. Aggregating data collected from or about multiple assets can enable users to improve business processes, for example by improving effectiveness of asset maintenance or improving operational performance if appropriate industrial-specific data collection and modeling technology is developed and applied. The integration of machine and equipment assets with the remote computing resources to enable the IIoT often presents technical challenges separate and distinct from the specific industry and from computer networks, generally. To address these problems and other problems resulting from the intersection of certain industrial fields and the IIoT, the example embodiments provide a contextual digital twin that is capable of providing context in addition to a virtual representation of an asset. 
The context can be used to trigger actions, insight, and events based on knowledge that is captured and/or reasoned from the operation of an asset or a group of assets.”
Claim 9:
The limitations of claim 9 are substantially the same as claim 2 and it is rejected for the same reasons.
Claim 16:
The limitations of claim 16 are substantially the same as claim 2 and it is rejected for the same reasons.
Claims 3-4, 10-11, and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Gallala, A.; Kumar, A.A.; Hichri, B.; Plapper, P. Digital Twin for Human–Robot Interactions by Means of Industry 4.0 Enabling Technologies. Sensors 2022, 22, 4950. https://doi.org/10.3390/s22134950 (hereinafter referred to as “Gallala”), in view of Kuts, V., Modoni, G.E., Terkaj, W., Tähemaa, T., Sacco, M., Otto, T. (2017). Exploiting Factory Telemetry to Support Virtual Reality Simulation in Robotics Cell (hereinafter referred to as “Kuts”), further in view of Malik, Ali Ahmad, and Alexander Brem. "Man, machine and work in a digital twin setup: a case study." Robotics and Computer-Integrated Manufacturing, 68, 2021, 102092, ISSN 0736-5845; arXiv preprint arXiv:2006.08760 (2020) (hereinafter referred to as “Malik”).
Claim 3:
Gallala in view of Kuts teaches “The method of claim 1, further comprising: generating a virtual working environment for one or more remote workers,” (Gallala teaches a human robot interaction method which includes creating a merged real and virtual world including a digital twin of the robot in Gallala [Page 6 Section 4 - Page 7] "The proposed DT-HRI method aims to improve upon the current human–robot interactions, robot simulation and robot programming method due to the use of emerging technologies that can be used to fit into smart factory exigencies and demands. For this use case, the system was composed of two pieces of physical equipment: a collaborative robot and an MR head-mounted device (MR-HMD). The physical world consisted of human beings, one or more collaborative robots, an MR-HMD for each user and, when needed, application-related objects (e.g., pieces to assemble or objects to pick and place). The digital world, on the other hand, contained the digital twin model of the robot, additional virtual objects and the user interactive interface (UII). The digital world could recognize and consider human gestures, voices and movements. A third-party engine was responsible for the data collection and processing, along with the generation of the algorithm. The HRI-DT framework is illustrated in Figure 2. After connecting the different systems, data were collected from the equipment, sensors and devices, as described in Table 2. After that, a mixed world that recognized the physical robot, related objects and human gestures was created by merging real and virtual worlds, which projected the digital twin, the immersive user interactive interface and a selection of virtual objects.").
Neither Gallala nor Kuts appears to explicitly teach “wherein each of the one or more remote workers are assigned a designated workspace within the virtual working environment.” However, Malik does teach this claim limitation (Malik teaches optimizing a layout of a robot i.e. controlled by a remote worker in order to avoid collisions with a human in Malik [Page 19] "After an optimal design for robot and human tasks is achieved, moving towards system integration, the robot program was tested in the real robot. The robot program was generated from the DT environment and was downloaded into the real robot. A live connection was developed between the simulation robot and the real robot. The physical robot followed the defined movements in the robot program but was placed in an empty space however, the virtual robot environment was equipped with hardware and virtual human. The collisions identified points where an optimization in layout was needed (Figure 13).").
Gallala, Kuts, and Malik are analogous art because they are from the same field of endeavor of using digital twins to monitor and/or control equipment. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having teachings of Gallala, Kuts, and Malik before him/her, to modify the teachings of a Digital Twin for Human–Robot Interactions of Gallala, as modified to include the remote control of Kuts, to include the optimization of a layout in order to avoid collisions with a human of Malik because adding the Digital twin setup including man, machine, and work of Malik would allow for layout optimization which makes it easier to generate adaptive human-robot trajectories as described in Malik [Page 22] "The following table summarizes the key learnings from this HRC in practice:
[media_image2.png: greyscale table of key learnings]
”
Claim 4:
Gallala in view of Kuts, further in view of Malik teaches “The method of claim 3, further comprising: integrating one or more physical workers into the virtual working environment.” (Malik teaches optimizing a layout of a robot in order to avoid collisions with a human i.e. the physical worker is integrated in the virtual working environment in Malik [Page 19] "After an optimal design for robot and human tasks is achieved, moving towards system integration, the robot program was tested in the real robot. The robot program was generated from the DT environment and was downloaded into the real robot. A live connection was developed between the simulation robot and the real robot. The physical robot followed the defined movements in the robot program but was placed in an empty space however, the virtual robot environment was equipped with hardware and virtual human. The collisions identified points where an optimization in layout was needed (Figure 13).").
Claim 10:
The limitations of claim 10 are substantially the same as claim 3 and it is rejected for the same reasons.
Claim 11:
The limitations of claim 11 are substantially the same as claim 4 and it is rejected for the same reasons.
Claim 17:
The limitations of claim 17 are substantially the same as claim 3 and it is rejected for the same reasons.
Claim 18:
The limitations of claim 18 are substantially the same as claim 4 and it is rejected for the same reasons.
Claims 6, 13, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Gallala, A.; Kumar, A.A.; Hichri, B.; Plapper, P. Digital Twin for Human–Robot Interactions by Means of Industry 4.0 Enabling Technologies. Sensors 2022, 22, 4950. https://doi.org/10.3390/s22134950 (hereinafter referred to as “Gallala”), in view of Kuts, V., Modoni, G.E., Terkaj, W., Tähemaa, T., Sacco, M., Otto, T. (2017). Exploiting Factory Telemetry to Support Virtual Reality Simulation in Robotics Cell (hereinafter referred to as “Kuts”), further in view of Berti et al. (US20210158242A1).
Claim 6:
Gallala in view of Kuts teaches “The method of claim 1,” as described above. Neither Gallala nor Kuts appears to explicitly teach “wherein identifying one or more correlated activities utilizes one or more linguistic analysis techniques in analyzing the data received for the physical ecosystem.” However, Berti does teach this claim limitation (Berti teaches using a linguistic analysis technique to identify relevant data based on keywords i.e. it identifies correlated activities in Berti [0037-0038] "Then, at 208, the relevant data is identified and extracted. A parsing engine may be utilized by the digital twin consultation program 110 a, 110 b to search through the work order or document associated with the type of task and/or assignment to be performed by the technician. The parsing engine may also search (i.e., simultaneously or consecutively while searching the work order or associated document) through the digital twin resources to identify relevant data based on key words (e.g., type of the task and/or assignment) to identify features (e.g., relevant data) associated with the physical asset by comparing the work order and/or document associated with the task and/or assignment to be performed by the technician with the files within the digital twin resources. Relevant data may include any files, information or data associated with the task and/or assignment of the physical asset (e.g., reason for the technician to visit or perform the task and/or assignment on the physical asset) in which the parsing engine utilized by the digital twin consultation program 110 a, 110 b. Once relevant data associated with the physical asset is identified, the parsing engine may use a machine learning (ML) model to extract the context and information collected from the relevant data by utilizing natural language processing (NLP) techniques for textual data and visual recognition techniques for image data. 
More specifically, for NLP, an external engine may utilize an NLP technique (e.g., structure extraction, language identification, tokenization, decompounding, lemmatization/stemming, acronym normalization and tagging, entity extraction, phrase extraction) to process the collected textual data. Then, individual words, phrases, and/or sentences, as well as the relationships between the individual words, phrases and/or sentences, may be extracted from the processed textual data by utilizing various extraction approaches (e.g., top down, bottoms up, statistical). As a result, the crawl component may interpret the context and meaning for the words, phrases and/or sentences collected by the textual data.").
Gallala, Kuts, and Berti are analogous art because they are from the same field of endeavor of using digital twins to monitor and/or control equipment. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having teachings of Gallala, Kuts, and Berti before him/her, to modify the teachings of a Digital Twin for Human–Robot Interactions of Gallala, as modified to include the remote control of Kuts, to include the Digital twin article recommendation consultation of Berti because adding the natural language processing technique of Berti would improve functionality of the computer and increase reliability of equipment and equipment effectiveness as described in Berti [0053] "The digital twin consultation program 110 a, 110 b may improve the functionality of the computer, the technology and/or the field of technology by utilizing a digital twin associated with a physical asset to consult the scheduled technician and to identify the appropriate equipment and tools to perform the task and/or assignment (i.e., job) on the physical asset. Additionally, the digital twin consultation program 110 a, 110 b may increase the reliability of equipment and production lines, improve overall equipment effectiveness (OEE) through reduced downtime and improved performance and productivity.”
Claim 13:
The limitations of claim 13 are substantially the same as claim 6 and it is rejected for the same reasons.
Claim 20:
The limitations of claim 20 are substantially the same as claim 6 and it is rejected for the same reasons.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Fan et al. (CN110751734A) teaches a mixed reality factory work environment where remote workers may provide recommendations to in-person workers through an on-site assistant in Fan [0042] "The remote VR assistance system 300 is used to obtain big data visualization information of the work site in a remote location through immersive virtual reality. It can vividly obtain on-site data such as the layout information of the equipment, the location and observation content of the staff, and the analysis data of the on-site CPS system. At the same time, it is also used to remotely control a humanoid assistant to guide the on-site staff in handling technical problems." and in Fan [0044] "The remote VR assistance system 300 includes a remote VR terminal and the Internet 301. Remote technicians can interact with the work site through the remote VR terminal. The remote VR terminal can be a PC computer 302, VR glasses 303, tablet computer, or other terminal devices with display functions."
Geng, R., Li, M., Hu, Z. et al. Digital Twin in smart manufacturing: remote control and virtual machining using VR and AR technologies. Struct Multidisc Optim 65, 321 (2022). https://doi.org/10.1007/s00158-022-03426-3 teaches a VR-based remote control module where a technician can use buttons on a virtual panel to indirectly control on-site real manufacturing units in Geng [Section 3.3] "The VR-based remote control module provides a more immersive two-way HCI control strategy. On one hand, the VR technology visualizes the manufacturing environment (including manufacturing units) to provide an immersive experience for off-site technicians. On the other hand, by designing a control panel in the VR interface, the technicians can touch the buttons on the virtual panel to indirectly control the on-site real manufacturing units."
Péter Galambos, Ádám Csapó, Péter Zentay, István Marcell Fülöp, Tamás Haidegger, Péter Baranyi, Imre J. Rudas, Design, programming and orchestration of heterogeneous manufacturing systems through VR-powered remote collaboration, Robotics and Computer-Integrated Manufacturing, Volume 33, 2015, Pages 68-77, ISSN 0736-5845, https://doi.org/10.1016/j.rcim.2014.08.012, teaches using VR in order to collaborate remotely in a manufacturing system in Galambos [Page 74, Section 5.1] "As a proof of the VirCA concept, we have implemented a life-like scenario, where a real industrial robot is delegated to a remote collaboration session.20 The background story is as follows: a robot dealer company operates a remote test laboratory, where potential customers can try the robots in operation by embedding them into VirCA-based semi-virtual scenarios. Engineers of the manufacturing company interested in buying the robot can remotely access the test lab, and place the robot into a virtual model. This merged physical/virtual cell can then be operated together as a whole, and the development of the robot program, or the layout design can be performed while having direct supervision on the robot. Through the VirCA sessions, robot experts, system integrators, end-users and all possible stakeholders can participate in hands-on technical consultation and training. The aforementioned demonstrative project was realized with the participation of the Antal Bejczy Center for Intelligent Robotics21 (playing the role of the robot dealer company) and the Institute for Computer Science and Control22 (playing the role of the system integrator)."; Galambos teaches integrating physical workers in the virtual environment in Galambos [Page 75, top left] "Fig. 6 shows a scenario from the VirCA session. One of the participants is working in an immersive VR room at MTA SZTAKI. The participant can control and visually follow the operation of the merged system while interacting with other remote collaborators. 
The red head floating on the left side symbolizes the operator sitting next to the real robot at Óbuda University.";
[media_image3.png: greyscale figure]
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Zachary A Cain whose telephone number is (571)272-4503. The examiner can normally be reached Mon-Fri 7:00-3:30 CST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kamini Shah can be reached at (571) 272-2279. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Z.A.C./ Examiner, Art Unit 2116
/KAMINI S SHAH/ Supervisory Patent Examiner, Art Unit 2116