Notice of Pre-AIA or AIA Status
2. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
2. This communication is responsive to Application No. 18/600,959 and the amendments filed on 12/30/2025.
3. Claims 1-5 and 7-8 are presented for examination.
Information Disclosure Statement
4. The information disclosure statement (IDS) submitted on 3/11/2024 has been fully considered by the Examiner.
Response to Arguments
5. Applicant’s arguments, see page 5, filed 12/30/2025, with respect to the interpretation of claims 1, 4, 5, and 6 under 35 U.S.C. 112(f) have been fully considered and are persuasive. The 35 U.S.C. 112(f) interpretation set forth in the Office action mailed 9/30/2025 has been withdrawn.
6. Applicant’s arguments with respect to the rejection of claim(s) 1-8 under 35 U.S.C. 102 and/or 35 U.S.C. 103 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Regarding independent claim 1, the Examiner agrees that US 20220212345 A1 to Smith fails to teach all of the amended limitations of the claim. Further, as discussed on pages 5-7 of the Applicant’s remarks filed 12/30/2025, the Examiner agrees that the combination of Smith and US 20210188554 A1 to Kalouche fails to teach all of the amended limitations of the claim. However, in light of the amendments and the Applicant’s remarks, an updated search was conducted, and a new ground of rejection of claim 1 has been determined, as set forth below.
Regarding dependent claims 2-5 and 7-8, because each of these claims depends from claim 1, they remain rejected, as set forth below.
Regarding dependent claim 6, this claim has been cancelled and is therefore withdrawn from further consideration.
Claim Rejections - 35 USC § 103
7. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
8. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
9. Claim 1 is rejected under 35 U.S.C. 103 as being unpatentable over Smith et al. (US 20220212345 A1, hereinafter Smith) in view of Menon et al. (US 20190339693 A1, hereinafter Menon) and Skubch (US 20200016754 A1, hereinafter Skubch).
Regarding Claim 1, Smith teaches a robot control system comprising: a communicator configured to input and output information from and to one or more operation terminals remotely operating a robot of two or more robots ([0083] via “FIGS. 1 and 2 illustrate a robotic system (e.g., a tele-operated robotic system, an autonomous or semi-autonomous robotic system, and others) that can comprise a master robotic system 102 (see FIG. 2) (e.g., a robotic vehicle control system) and first and second robotic systems 104a and 104b (e.g., robotic vehicles), each of the first and second robotic systems 104a and 104b being independent, separate and distinct robotic systems that can be controlled and operated independent of the other.”), ([0098] via “More specifically, FIG. 5 is a block diagram illustrating a high-level control scheme that includes aspects of the master robotic system 102 … that includes one or more CPU(s) 210 for receiving input signals, processing information, executing instructions, and transmitting output signals for controlling the first and second robotic systems 104a
and 104b. For instance, the CPU(s) 210 can receive and process mobility input(s) 212 in response to user control of one or more drive input devices (e.g., see drive input devices 207,
209 shown in FIG. 2) to control operation of the selected first and/or second mobility mechanisms 110a and/or 110b for locomotion. Similarly, the CPU(s) 210 can receive and process a first master control manipulator input 214a, and/or a second master control manipulator input 214b, in response to user control of one or both master control manipulators 204a and/or 204b (e.g., control manipulator input devices) to control operating of the selected first and/or second manipulators 116a and/or 116b.”);
an acquirer configured to acquire operation input information which is input by the operation terminal ([0098] via “These “inputs” (i.e., 212, 214a, 214b) may be transmitted to the CPU(s) 210 as output signals provided by sensor(s) (e.g., force sensors, position sensors) of the master robotic system 102 that detect user movement via the upper body exoskeleton structure 205 and the drive input device(s), for instance. The CPU(s) 210 can receive and process such sensor outputs, and then generate command signals that are transmitted to the first robotic system 104a, the second robotic system 104b, the stabilizing robotic system 104c, or any combination of these, for operation thereof.”), ([0105] via “The first and second robotic systems 104a and 104b can further comprise transceiver radios 152a and 152b
coupled to the respective computers 150a and 150b for receiving and sending signals between each other, and between the master robotic system 102.”);
a task determiner configured to: set a task of the robot ([0098] via “These “inputs” (i.e., 212, 214a, 214b) may be transmitted to the CPU(s) 210 as output signals provided by sensor(s) (e.g., force sensors, position sensors) of the master robotic system 102 that detect user movement via the upper body exoskeleton structure 205 and the drive input device(s), for instance. The CPU(s) 210 can receive and process such sensor outputs, and then generate command signals that are transmitted to the first robotic system 104a, the second robotic system 104b, the stabilizing robotic system 104c, or any combination of these, for operation thereof.”), and
determine whether the task is at least one of: an automatic control process, or an input-requiring process requiring an input from an operator ([0099] via “Notably, the master robotic system 102 can comprise one or more switch input(s) 216 communicatively coupled to the CPU(s) 210 and operable by the user to selectively switch between modes for operating aspects of the first and second robotic systems 104a and 104b and the stabilizing robotic system 104c. … The switch input 216 can be selected to operate in an unpaired control mode
218a, whereby the user selects operational control over the first or second robotic systems
104a or 104b or the stabilizing robotic system 104c. Thus, the user can selectively operate the second master manipulator 204b of the master robotic system 102 to control the second manipulator 116b of the second robotic system 104b, and concurrently (or separately) the user can operate the drive input device (e.g., 207, 209) to control the second mobility mechanism 110a (e.g., the pair of tracks 112c and 112d), via the switch input(s) 216. Once the first robotic system 104a is in a desired location (and/or after performing a task), the user can then activate the switch input 216 to selectively operate the first robotic system 104a in a similar manner to position it in a desired location (and/or to perform a task).”);
a motion controller configured to: control a motion of the robot on a basis of automatic control data of the automatic control process and the operation input information input for the input-requiring process determined by the task determiner ([0100] via “After, or in the alternative to, separately operating the first and second robotic systems 104a and 104b and/or the stabilizing robotic system 104c (see e.g., FIG. 4) in the unpaired control mode 218a, the user can then activate the switch input 216 to switch from the unpaired control mode 218a to a paired control mode 218b to facilitate operating the first robotic system
104a with the second robotic system 104b, … in a coordinated or harmonized manner. … And, when in the paired control mode 218b, the user can concurrently operate the first and second master manipulators 204a and 204b to control respective first and second manipulators 116a
and 116b and/or the stabilizing robotic system 104c, and concurrently (or separately) the user can operate one or more drive input device(s) (e.g., 207, 209) to control the first and second mobility mechanisms 110a and 110b to control locomotion of the first and second robotic systems 104a and 104b, or the mobility mechanism 110c of the stabilizing robotic system
104c, in a coordinated manner.”), ([0119] via “If the user desires the first distance D1 to be the minimum distance so that the mobile platforms 308a and 308b are near or adjacent each other (e.g., less than a foot apart, as illustrated in FIG. 1), the user can either individually move the mobile platforms 308a and 308b to the desired distance D1, or, alternatively, the user can activate an input device (e.g., button on the joystick) to cause the mobile platforms 308a and 308b to autonomously move adjacent each other to the desired first distance D1 (and spatial position/orientation) relative to each other, or other pre-programmed distance and orientation. Thus, the robot control switch module 222 is operable in an autonomous pairing mode that facilitates the first and second mobile platforms 310a
and 310b autonomously moving to a paired position/orientation (FIG. 10) relative to each other based on the position data generated by the position sensors 354a and 354b.”), ([0120] via “Note that the master control systems contemplated herein can operate aspect(s) of two or more robotic systems in an autonomous mode, a semi-autonomous mode, and/or a supervised autonomous mode for control of at least one function of at least one of the first or second robotic mobile platforms.”).
Smith is silent on wherein the task determiner selects the task requiring an input of the operator out of motions patterned in a series of tasks; allocate the task selected by the task determiner to the input-requiring process; wait for the input of the operator; control motions of the robot on a basis of the operation input information for the input-requiring process acquired by the acquirer; allocate the other tasks, which are not selected, to the automatic control process; and control motions of other robots of the two or more robots on the basis of the automatic control data.
However, Menon teaches wherein the task determiner selects the task requiring an input of the operator out of motions patterned in a series of tasks ([0100] via “In the example shown, upon receiving an assignment the robot determines its starting position, state, and context (1002). The robot determines a sequence of tasks to achieve the objective (i.e., complete the assignment) (1004), the sequence of tasks implying and/or otherwise having associated therewith a set of states from the starting state to a completion state in which the assignment has been completed. … For any future task/state for which the robot determines it will require human intervention (e.g., teleoperation, identification of an object, selection of an existing strategy, teaching the robot a new strategy, etc.) (1008), the robot to the extent possible requests and obtains human help in advance (1010), including by scheduling human teleoperation and a scheduled interrupt of its own autonomous operation to obtain human assistance before resuming autonomous operation.”);
allocate the task selected by the task determiner to the input-requiring process ([0065] via “In some embodiments, the robot is configured to anticipate and preemptively avoid and/or schedule human assistance to resolve situations where it might otherwise get stuck. … For example, in one approach, the robot will conclude it will need help with C and possibly B and schedules human assistance at the time it expects to need help, for example after it has had time to pick up A and make the configured number of attempts to pick up B.”);
wait for the input of the operator ([0065] via “For example, in one approach, the robot will conclude it will need help with C and possibly B and schedules human assistance at the time it expects to need help, for example after it has had time to pick up A and make the configured number of attempts to pick up B. If when the time scheduled for human help the robot has picked up A and been successful in picking up B, the human is prompted to help with C.”);
control motions of the robot on a basis of the operation input information for the input-requiring process acquired by the acquirer ([0093] via “If the robot determines it has no (further) strategy to perform the next (or any next) task or sub-task (608), the robot transitions to a human intervention state (610) in which on demand teleoperation is performed.”), ([0096] via “The teleoperation computer 862 is configured to control at any given time any one of the robots 802, 822, and 842, based on teleoperation inputs provided via a manual input device 864 by a human operator 866.”); and
allocate the other tasks, which are not selected, to the automatic control process ([0065] via “For example, in one approach, the robot will conclude it will need help with C and possibly B and schedules human assistance at the time it expects to need help, for example after it has had time to pick up A and make the configured number of attempts to pick up B. If when the time scheduled for human help the robot has picked up A and been successful in picking up B, the human is prompted to help with C.”), ([0100] via “The robot operates autonomously, as able, and obtains scheduled and/or on demand (e.g., for unanticipated events that cause the robot to become stuck) human assistance, as required, until the assignment is completed (1012).”).
Further, Skubch teaches to control motions of other robots of the two or more robots on the basis of the automatic control data ([0078] via “In case of decentralized controlled plan, the allocation of autonomous robots to the task is determined whenever autonomous robots enter a particular state. Plan execution engine executing at each of the autonomous robots locally execute the operations at 502-504 to determine task allocation based on the capability and the utility condition related to the autonomous robots. In one embodiment, each of the robot follows a subscribe-and-broadcast approach, where autonomous robots broadcast the determined role allocation and the determined environment condition that is subscribed by other autonomous robots.”), ([0080] via “In case of centralized controlled plan, the plan execution engine at the cloud node executes the operations at 502-504 to determine the task allocation based on robot capability and external condition information received from the autonomous robots. The cloud node then sends the determined task allocation to the different autonomous robots.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Menon wherein the task determiner selects the task requiring an input of the operator out of motions patterned in a series of tasks; allocate the task selected by the task determiner to the input-requiring process; wait for the input of the operator; control motions of the robot on a basis of the operation input information for the input-requiring process acquired by the acquirer; and allocate the other tasks, which are not selected, to the automatic control process. Doing so preemptively schedules human assistance so that it arrives right when the robot requires it, allowing the robot to manage its time efficiently, as stated by Menon ([0065] via “In some embodiments, the robot is configured to anticipate and preemptively avoid and/or schedule human assistance to resolve situations where it might otherwise get stuck. For example, assume the robot is tasked to pick up three items A, B, and C, and determines it can pick up A, may be able to pick up B, and cannot pick up C. In various embodiments, the robot implements a plan that anticipates the uncertainty over its ability to pick up B and its anticipated inability to pick up C. For example, in one approach, the robot will conclude it will need help with C and possibly B and schedules human assistance at the time it expects to need help, for example after it has had time to pick up A and make the configured number of attempts to pick up B. If when the time scheduled for human help the robot has picked up A and been successful in picking up B, the human is prompted to help with C.”) and in paragraph [0100] of Menon.
In addition, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Skubch to control motions of other robots of the two or more robots on the basis of the automatic control data. Doing so delegates tasks to those of the two or more robots that are actually capable of performing them, as stated above by Skubch in both cited passages.
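For illustration of the combined mapping discussed above only, and not as a characterization of any cited reference's actual implementation, the following minimal Python sketch shows one way the claimed split between input-requiring and automatic control processes could operate. All identifiers (Task, Robot, allocate, requires_operator) are hypothetical and are not drawn from Smith, Menon, or Skubch.

```python
# Hypothetical sketch of the claim 1 allocation logic discussed above.
# All identifiers are illustrative; none are taken from the cited references.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    requires_operator: bool  # cf. Menon [0100]: tasks needing human help

@dataclass
class Robot:
    ident: str
    queue: list = field(default_factory=list)

def allocate(tasks, operated_robot, other_robots):
    """Split a patterned series of tasks into input-requiring and automatic."""
    for i, task in enumerate(tasks):
        if task.requires_operator:
            # Input-requiring process: hold for operator input (cf. Menon [0065]).
            operated_robot.queue.append(("await_operator", task.name))
        else:
            # Automatic control process: delegate to another capable robot
            # (cf. Skubch [0078], [0080]); round-robin here for illustration.
            robot = other_robots[i % len(other_robots)]
            robot.queue.append(("automatic", task.name))

tasks = [Task("pick A", False), Task("pick B", False), Task("pick C", True)]
r1, r2, r3 = Robot("R1"), Robot("R2"), Robot("R3")
allocate(tasks, r1, [r2, r3])
print(r1.queue)            # [('await_operator', 'pick C')]
print(r2.queue, r3.queue)  # [('automatic', 'pick A')] [('automatic', 'pick B')]
```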
10. Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Smith et al. (US 20220212345 A1, hereinafter Smith) in view of Menon et al. (US 20190339693 A1, hereinafter Menon) and Skubch (US 20200016754 A1, hereinafter Skubch), and further in view of Hamasaki et al. (US 20250170714 A1, hereinafter Hamasaki).
Regarding Claim 2, modified reference Smith teaches the robot control system according to claim 1, but is silent on wherein the robot control system further comprises a proposer configured to propose an efficient arrangement of the input-requiring processes for the two or more robots.
However, Hamasaki teaches wherein the robot control system further comprises a proposer configured to propose an efficient arrangement of the input-requiring processes for the two or more robots ([0039] via “Of these, although the plan generation section 12 will be described in detail later, the plan generation section 12 includes the functions of a control parameter calculation unit 16 which calculates control parameters corresponding to each of a plurality of robots R, an execution time calculation unit 17 which determines an execution time of a task for each of the plurality of robots R, and an optimization unit 15 which generates processes for the plurality of robots, based on the control parameters and the execution time of each task.”), ([0082] via “The execution restriction is a restriction satisfied when the start time of a task to be executed next is earlier than the estimated completion time of a task currently being executed by the other robot R. For example, in the process 50 of FIG. 10, when the execution restriction of the next task move (P.sub.point1) is confirmed at a time t1 when the task grasp (bolt) of the robot R1 is completed, an estimated completion time t2 of the task drill (point1) being executed by the robot R2 is the same as the start time t2 of grasp (bolt) at that time, and hence the execution restriction is not satisfied.”), ([0083] via “In that case, the processing steps S11 and S12 are repeated until the execution restriction is satisfied. In the process 50, the robot R1 proceeds to the processing step S13 at the time t2.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Hamasaki wherein the robot control system further comprises a proposer configured to propose an efficient arrangement of the input-requiring processes for the two or more robots. Doing so finds an optimal arrangement of the timing of each of the robot’s tasks such that the tasks do not overlap with each other, as stated by Hamasaki ([0009] via “In the control system, when execution times of tasks affecting each other overlap, the control parameter calculation unit imposes an operation restriction on the operation of the other robot in order to reduce the effect on the task execution of one robot, to calculate the control parameters, and when the execution times of the tasks affecting each other overlap, the optimization unit calculates the time required for the other robot to execute the task based on the control parameters for the other robot, and generates processes for the plurality of robots based on the calculation results.”).
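As a non-limiting illustration of the timing check described in Hamasaki's paragraphs [0082]-[0083], the sketch below defers a robot's next task while its proposed start time conflicts with another robot's estimated completion time; the function and variable names are hypothetical and the half-second re-check interval is assumed purely for the example.

```python
# Hypothetical illustration of an execution-restriction check in the spirit
# of Hamasaki [0082]-[0083]; identifiers are illustrative only.
def may_start(next_task_start: float, other_robot_completion: float) -> bool:
    """One reading of the restriction: the next task may proceed once the
    other robot's estimated completion time no longer lies after the
    proposed start time, i.e., the execution windows do not overlap."""
    return other_robot_completion <= next_task_start

# Example patterned on the quoted passage: at time t1, robot R2's drill task
# is estimated to finish at t2 > t1, so R1's move task is deferred until t2.
t1, t2 = 1.0, 2.0
start = t1
while not may_start(start, t2):
    start += 0.5  # re-check at the next scheduling step (steps S11-S12 repeat)
print(f"R1 starts move(P_point1) at t={start}")  # t=2.0
```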
11. Claims 3, 7, and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Smith et al. (US 20220212345 A1, hereinafter Smith) in view of Menon et al. (US 20190339693 A1, hereinafter Menon) and Skubch (US 20200016754 A1, hereinafter Skubch), and further in view of Johnson et al. (US 20210393339 A1, hereinafter Johnson).
Regarding Claim 3, modified reference Smith teaches the robot control system according to claim 1, but is silent on the robot control system further comprising a presenter configured to present position information of the robot which is operated by the operator for the input-requiring process.
However, Johnson teaches a presenter configured to present position information of the robot which is operated by the operator for the input-requiring process ([0092] via “Another variation of an application for a GUI is a stadium view application that provides a real-time view of the robotic system, patient table or bed, and/or staff in an operating room during a procedure. The stadium view application may, in some variations, receive real-time or near real-time information relating to a current position of the robotic arms, patient table, and/or staff and the like, generate a rendering (graphical representation) of the operating room environment based on the received information, and display the rendering to the user. ... The user may, for example, monitor status of the robotic system such as tool status, potential collisions, etc. and communicate to other members of the surgical team about such status and resolution of any issue.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Johnson wherein the robot control system further comprises a presenter configured to present position information of the robot which is operated by the operator for the input-requiring process. Doing so allows the remote operator to monitor the current status of the robot, as stated above by Johnson.
Regarding Claim 7, modified reference Smith teaches the robot control system according to claim 3, but is silent on wherein the presenter is further configured to: present information indicating that the robot is a target of at least one of: the automatic control process, or the input-requiring process, and present the position information, when the number of robots is two or more.
However, Johnson teaches to present information indicating that the robot is a target of at least one of: the automatic control process, or the input-requiring process, and present the position information, when the number of robots is two or more ([0095] via “The display of the 3D rendering 910 in the stadium view application may be modified based on status of the rendered objects. … As shown in FIG. 9C, the stadium view application may be configured to highlight at least one of the robotic arms (labeled “2” in FIG. 9C), such as in response to a user selection of the arm. For example, a user may select a particular robotic arm and in response, the stadium view application may display information regarding status of the selected arm. As shown in, for example, FIG. 9E, in response to a user selection of an arm, the stadium view application may also display and/or highlight information relating to the selected arm and its associated tool, …. As another example, a user may select a particular robotic arm such that it is highlighted in both the user's displayed GUI and in another displayed instance of the GUI (e.g., on a control tower display) to more easily communicate with other surgical staff regarding that robotic arm, thereby reducing confusion.”), (Note: See Figures 9C and 9E of Johnson as well.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Johnson wherein the presenter is further configured to: present information indicating that the robot is a target of at least one of: the automatic control process, or the input-requiring process, and present the position information, when the number of robots is two or more. Doing so communicates to the user and others the selected robot arm out of the plurality of robot arms so that the selected one is apparent, as stated above by Johnson.
Regarding Claim 8, modified reference Smith teaches the robot control system according to claim 3, but is silent on wherein the presenter is further configured to: present information indicating an arrangement of the two or more robots, which are operated by the operator, to a partial area of an image of the robots presented to the operator, and present the position information.
However, Johnson teaches to present information indicating an arrangement of the two or more robots, which are operated by the operator, to a partial area of an image of the robots presented to the operator, and present the position information ([0098] via “In some variations, the stadium view application may be configured to notify a user of a collision between robotic arms. … In response to receiving information indicating that a collision is impending or has occurred, the stadium view application may highlight one or more robotic arms involved in the collision. For example, as shown in FIG. 9D, one or more rendered robotic arms (labeled “3” and “4”) may be highlighted. … It should be understood that in other variations, the stadium view application may provide alerts or notifications for other kinds of status updates, such as surgical instrument errors, in a similar manner. For example, the rendered display of other portions of the robotic system, and/or other suitable portions of the operating room environment, may be highlighted to indicate other kinds of status changes or provide suitable updates.”), (Note: Also see Figure 9D of Johnson.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Johnson wherein the presenter is further configured to: present information indicating an arrangement of the two or more robots, which are operated by the operator, to a partial area of an image of the robots presented to the operator, and present the position information. Doing so visually notifies the operator that an error (such as a collision) between at least two of the robot arms occurs, as stated above by Johnson.
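Purely as an illustration of the presentation features mapped above from Johnson (showing each robot's position and control mode, and highlighting a selected or colliding arm), a minimal sketch follows. The data model and the text-based rendering are hypothetical and are not Johnson's GUI.

```python
# Hypothetical presenter sketch for the features mapped from Johnson;
# no identifier below is drawn from Johnson's disclosure.
from dataclasses import dataclass

@dataclass
class RobotStatus:
    ident: str
    position: tuple             # (x, y, z) position information to present
    mode: str                   # "automatic" or "input-requiring" (cf. claim 7)
    in_collision: bool = False  # collision alert (cf. claim 8, Johnson [0098])

def render(statuses):
    # Present each robot's position, control-target mode, and any highlight.
    for s in statuses:
        highlight = " **HIGHLIGHTED**" if s.in_collision else ""
        print(f"{s.ident}: pos={s.position} mode={s.mode}{highlight}")

render([
    RobotStatus("arm2", (0.1, 0.4, 0.9), "input-requiring"),
    RobotStatus("arm3", (0.5, 0.2, 0.7), "automatic", in_collision=True),
    RobotStatus("arm4", (0.6, 0.2, 0.7), "automatic", in_collision=True),
])
```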
12. Claims 4 and 5 are rejected under 35 U.S.C. 103 as being unpatentable over Smith et al. (US 20220212345 A1, hereinafter Smith) in view of Menon et al. (US 20190339693 A1, hereinafter Menon) and Skubch (US 20200016754 A1, hereinafter Skubch), and further in view of Natarajan et al. (US 20210229281 A1, hereinafter Natarajan) and Ichiwara et al. (WO 2022244385 A1, hereinafter Ichiwara).
Regarding Claim 4, modified reference Smith teaches the robot control system according to claim 1, but is silent on wherein the robot comprises a first robot on a sender side, wherein the task comprises a first task, and wherein the motion controller is further configured to: cause a second robot of the two or more robots on a receiver side to predict a next motion from a motion of the first robot, cause the second robot to perform a motion based on the next motion, and perform control such that: the first robot moves to a receiving position on the basis of the automatic control data, and the second robot performs a second task after waiting for the operator’s operation.
However, Natarajan teaches wherein the robot comprises a first robot on a sender side, wherein the task comprises a first task ([0042] via “Interaction primitives are basic robot actions involving collaboration between two or more robotic agents and examples include actions such as joint object lift, joint object hand-off etc. … Here a first robot 602 first learns to pick up a cuboidal object from a corner of the object rather than from a center of the object since this is more convenient while handing off the object to a second robot 604.”), (Note: The Examiner interprets the first robot 602 of Natarajan as the first robot on a sender side.), and wherein the motion controller is further configured to:
cause a second robot of the two or more robots on a receiver side to predict a next motion from a motion of the first robot ([0033] via “The basic interaction primitives 404 may be any actions that involve two or more robots jointly working together. Thus, the interaction primitives 404 may be a subset of action primitives 402 (e.g., actions that involve two or more robots), or may be separately generated or stored. The interaction primitives 404 may include joint object lift, joint push, joint insert, joint transport, joint hand-off (e.g., handing off an object from one robot to another), collision avoidance, or the like. … These individual interaction primitives 404 may be learned separately via Reinforcement Learning. A library of these action primitives 402 and interaction primitives 404, which are RL policies that are pre-learned using RL algorithms.”), ([0042] via “Interaction primitives are basic robot actions involving collaboration between two or more robotic agents and examples include actions such as joint object lift, joint object hand-off etc. … An example for learning an interaction primitive is shown in FIG. 6. The example includes learning joint object hand-off between two robotic arms. Here a first robot 602 first learns to pick up a cuboidal object from a corner of the object rather than from a center of the object since this is more convenient while handing off the object to a second robot 604. This robot action is called the Corner Pick policy (illustrated in block 606). Both robots involved in the handoff are then equipped with this Corner Pick policy. The Corner Pick policy is an action primitive, which may be selected from a library or learned as a modification to a pick action primitive on the fly (e.g., through reinforcement learning). The first robot 602 executes the corner pick policy and picks up the object from the corner and reaches its sub-goal, which is a point of handoff in 3D space. The second robot 604 then executes its own corner pick policy to retrieve the object from the first robot's grasp.”), (Note: The Examiner interprets the second robot 604 of Natarajan as the second robot.),
cause the second robot to perform a motion based on the next motion ([0042] via “The second robot 604 then executes its own corner pick policy to retrieve the object from the first robot's grasp.”), and
perform control such that: the first robot moves to a receiving position on the basis of the automatic control data ([0036] via “The systems and methods described herein provide a technique that makes learning of collaborative tasks in a multi-robot system tractable. The techniques may be applicable to collaborative autonomous systems in general as well such as drone teams, autonomous vehicles etc.”), ([0042] via “Interaction primitives are basic robot actions involving collaboration between two or more robotic agents and examples include actions such as joint object lift, joint object hand-off etc. … An example for learning an interaction primitive is shown in FIG. 6. The example includes learning joint object hand-off between two robotic arms. Here a first robot 602 first learns to pick up a cuboidal object from a corner of the object rather than from a center of the object since this is more convenient while handing off the object to a second robot 604. This robot action is called the Corner Pick policy (illustrated in block 606). … The first robot 602 executes the corner pick policy and picks up the object from the corner and reaches its sub-goal, which is a point of handoff in 3D space.”), ([0043] via “In an example, the first robot 602 may learn to complete its sub-goal without involvement of the second robot 604 (e.g., the sub-goal may include picking the object up, moving it to a convenient place for handoff, and making handoff possible or convenient (e.g., by leaving a portion of the object untouched, such as with the Corner Pick policy).”).
Further, Ichiwara teaches wherein the second robot performs a second task after waiting for the operator’s operation (Page 6 paragraphs 4-6 via “Next, the operator or supervisor confirms the camera image displayed on the screen of the image display device 40 and determines whether the work of the robot 10 has been completed (S2). … When the operator or supervisor determines that the work is completed, the robot operation device 50 is used to click the icon of the work completion button 74 . When the inference unit 34 of the image area selection device 30 detects that the work completion button 74 has been operated, it determines that the work has been completed. When the operator or supervisor determines that the work of the robot 10 is completed (YES in S2), the work of the robot 10 is finished after operating the work completion button 74 (S7). On the other hand, if the operator or supervisor determines that the work of the robot 10 has not been completed (NO in S2), the inference unit 34 acquires the camera image and the sensor information of the robot 10 and the environment to create a learning model. 33a (S3) to obtain an output (inference result) corresponding to the input.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Natarajan wherein the robot comprises a first robot on a sender side, wherein the task comprises a first task, and wherein the motion controller is further configured to: cause a second robot of the two or more robots on a receiver side to predict a next motion from a motion of the first robot, cause the second robot to perform a motion based on the next motion, and perform control such that: the first robot moves to a receiving position on the basis of the automatic control data. Doing so allows the multiple robots to communicate and interact with each other collaboratively without the need to individually program each robot to perform tasks from scratch, as stated by Natarajan ([0034] via “Using the action primitives 402 or the interaction primitives 404, the robots may be controlled (e.g., using a trained model) to perform a collaborative task without needing to program the robots from scratch. Robotic action-primitive & interaction-primitive policies may be pre-stored in a library, for example as APIs, which may be accessed by a robot for any robotic (e.g., assembly) task as outlined in a goal statement.”).
In addition, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Ichiwara wherein the second robot performs a second task after waiting for the operator’s operation. Doing so ensures that the tasks assigned to the robot are completed before the robot ends the process or moves on to a next task, as stated by Ichiwara (Page 6 paragraph 8 via “After the processing of steps S5 and S6, the process returns to the judgment processing of step S2, and if it is not judged that the work is completed, the camera image and the sensor information of the robot 10 and the environment are input to the learning model 33a. In this way, the processing of steps S2 to S6 is repeated until it is determined that the work is completed, that is, from the start of the work to the end of the work.”).
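To illustrate the sender/receiver sequence mapped above from Natarajan (the first robot autonomously reaching a handoff point) and Ichiwara (the second robot acting only after an operator input such as a work-completion button), a hypothetical sketch follows; the handoff coordinates and all function names are assumed for the example and do not come from either reference.

```python
# Hypothetical handoff sketch combining the mappings from Natarajan and
# Ichiwara discussed above; all names and values are illustrative.
HANDOFF_POINT = (0.5, 0.0, 0.3)  # assumed 3D handoff sub-goal (cf. Natarajan [0042])

def sender_moves_automatically():
    # First robot (sender side) moves to the receiving position on the
    # basis of automatic control data.
    print(f"first robot: corner-pick object, move to {HANDOFF_POINT}")

def receiver_predicts_and_waits(operator_confirmed: bool):
    # Second robot (receiver side) predicts the next motion from the first
    # robot's motion (retrieve from the sender's grasp), then performs the
    # second task only after the operator's input (cf. Ichiwara's
    # work-completion button, page 6).
    print("second robot: predict next motion -> prepare to retrieve object")
    if operator_confirmed:
        print("second robot: operator input received, perform second task")
    else:
        print("second robot: waiting for operator input")

sender_moves_automatically()
receiver_predicts_and_waits(operator_confirmed=True)
```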
Regarding Claim 5, modified reference Smith teaches the robot control system according to claim 1, wherein the robot comprises a first robot ([0083] via “FIGS. 1 and 2
illustrate a robotic system … that can comprise a master robotic system 102 (see FIG. 2) … and first and second robotic systems 104a and 104b …, each of the first and second robotic systems 104a and 104b being independent, separate and distinct robotic systems that can be controlled and operated independent of the other.”).
Smith is silent on wherein the motion controller: controls the first robot on the basis of the automatic control data such that the first robot holds an operation target object, and controls a second robot on the basis of the operation input information, such that the second robot performs a predetermined task on the operation target object.
However, Natarajan teaches wherein the motion controller: controls the first robot on the basis of the automatic control data such that the first robot holds an operation target object ([0036] via “The systems and methods described herein provide a technique that makes learning of collaborative tasks in a multi-robot system tractable. The techniques may be applicable to collaborative autonomous systems in general as well such as drone teams, autonomous vehicles etc.”), ([0042] via “Interaction primitives are basic robot actions involving collaboration between two or more robotic agents and examples include actions such as joint object lift, joint object hand-off etc. … An example for learning an interaction primitive is shown in FIG. 6. The example includes learning joint object hand-off between two robotic arms. Here a first robot 602 first learns to pick up a cuboidal object from a corner of the object rather than from a center of the object since this is more convenient while handing off the object to a second robot 604. This robot action is called the Corner Pick policy (illustrated in block 606). … The first robot 602 executes the corner pick policy and picks up the object from the corner and reaches its sub-goal, which is a point of handoff in 3D space.”).
Further, Ichiwara teaches wherein the motion controller: controls a second robot on the basis of the operation input information, such that the second robot performs a predetermined task on the operation target object (Page 2 <First Embodiment> paragraph 2 via “FIG. 1 is a schematic diagram showing a configuration example of a robot control system to which the present invention is applied. In the robot control system 100 shown in FIG. 1, the robot 10 is a device capable of handling objects and performing predetermined work such as assembling and transporting parts.”), (Page 6 paragraph 3 via “First, the robot control device 20 receives an operation command for the robot 10 input by the operator from the robot operation device 50, and generates a control command for each actuator for controlling the robot 10 based on the operation command. The motion command is, for example, the joint angle of the robot 10, the posture (position) and force (torque) of the end effector 11, and the like. The robot 10 receives a control command from the robot control device 20 and starts an operation (work) (S1).”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Natarajan wherein the motion controller: controls the first robot on the basis of the automatic control data such that the first robot holds an operation target object. Doing so controls the first robot to perform the appropriate task it has learned to perform, as stated above by Natarajan in paragraph [0042].
In addition, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Ichiwara wherein the motion controller: controls a second robot on the basis of the operation input information, such that the second robot performs a predetermined task on the operation target object. Doing so controls the robot according to the operator input to perform a predetermined work, as stated above by Ichiwara on page 6 paragraph 3.
Examiner’s Note
13. The Examiner has cited particular paragraphs or columns and line numbers in the references applied to the claims above for the convenience of the Applicant. Although the specified citations are representative of the teachings of the art and are applied to specific limitations within the individual claim, other passages and figures may apply as well. In preparing responses, the Applicant is respectfully requested to fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner. See MPEP 2141.02 [R-07.2015] VI. A prior art reference must be considered in its entirety, i.e., as a whole, including portions that would lead away from the claimed invention. W.L. Gore & Associates, Inc. v. Garlock, Inc., 721 F.2d 1540, 220 USPQ 303 (Fed. Cir. 1983), cert. denied, 469 U.S. 851 (1984). See also MPEP § 2123.
Conclusion
14. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
15. Any inquiry concerning this communication or earlier communications from the examiner should be directed to BYRON X KASPER, whose telephone number is (571) 272-3895. The examiner can normally be reached Monday - Friday, 8 am - 5 pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Adam Mott, can be reached at (571) 270-5376. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/BYRON XAVIER KASPER/Examiner, Art Unit 3657
/ADAM R MOTT/Supervisory Patent Examiner, Art Unit 3657