Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This is the first Office action on the merits. Claims 1-5 are currently pending and addressed below.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 10/31/2024 has been received. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-5 are rejected under 35 U.S.C. 103 as being unpatentable over Kim et al. (US 20190262992 A1), hereinafter Kim, in view of Abrams et al. (US 12508715 B1), hereinafter Abrams.
Regarding claim 1, Kim teaches:
1. A mobile object control system configured to control traveling of a mobile object based on operation amount information (Paragraph 0017, "According to an aspect of the present disclosure, there is provided a method of controlling a mobile robot. The method being performed by a control apparatus comprises acquiring a first control value for the mobile robot, which is input through a remote control apparatus, acquiring a second control value for the mobile robot, which is generated by an autonomous driving module, determining a weight for each control value based on a delay between the mobile robot and the remote control apparatus and generating a target control value of the mobile robot in combination of the first control value and the second control value based on the determined weights, wherein a first weight for the first control value and a second weight for the second control value are inversely proportional to each other.") of a plurality of operation terminals operated by a plurality of operators, respectively, (Paragraph 0069, "In detail, as shown in FIG. 6, the server 50 may receive driving data of a specific user from the plurality of remote control systems 20a to 20n, learn the driving data, and build a machine learning model associated with a driving pattern of each user. Also, the server 50 may predict and provide a control value (hereinafter referred to as a “pattern control value”) for the current driving environment of the mobile robot 30 through the built machine learning model. The pattern control value may also be used by the control apparatus 100 to generate the target control value, and a detailed description thereof will be described below with reference to FIG. 12. Also, an example method in which the server 50 learns the driving pattern will be described below with reference to FIG. 12.") the mobile object control system comprising one or more processors, (Paragraph 0090, "Each step of the control method for the mobile robot according to some embodiments of the present disclosure, which will be described below, may be performed by a computing apparatus. That is, each step of the control method may be implemented with one or more instructions executed by a processor of the computing apparatus. All the steps of the control method may be executed by a single physical computing apparatus. However, first steps of the method may be performed by a first computing apparatus, and second steps of the method may be performed by a second computing apparatus. The following description assumes that each step of the control method is performed by the control apparatus 100. However, for convenience, the description of the operating subject of each step included in the control method may be omitted.") wherein the operation amount information includes a first operation amount and a second operation amount as operation amounts (Paragraph 0017, "According to an aspect of the present disclosure, there is provided a method of controlling a mobile robot. The method being performed by a control apparatus comprises acquiring a first control value for the mobile robot, which is input through a remote control apparatus, acquiring a second control value for the mobile robot, which is generated by an autonomous driving module, determining a weight for each control value based on a delay between the mobile robot and the remote control apparatus and generating a target control value of the mobile robot in combination of the first control value and the second control value based on the determined weights, wherein a first weight for the first control value and a second weight for the second control value are inversely proportional to each other.") … and
the one or more processors are configured to:
calculate, for the individual operation terminals, target speeds that are target values of a speed in a traveling direction of the mobile object based on the first operation amount;
calculate, for the individual operation terminals, target turning angular velocities that are target values of a turning angular velocity of the mobile object based on the second operation amount; (Paragraph 0075, "The control value acquisition module 110 acquires various kinds of control values serving as the basis of the target control value. In detail, the control value acquisition module 110 may acquire a remote control value input by a user from the remote control system 20 and may acquire an autonomous control value generated by the autonomous driving module 130. Each of the remote control value and the autonomous control value may include control values for a speed, a steering angle, and the like of the mobile robot 30.")
calculate a final target speed by combining the target speeds calculated for the individual operation terminals at a first ratio;
calculate a final target turning angular velocity by combining the target turning angular velocities calculated for the individual operation terminals at a second ratio different from the first ratio; (Paragraphs 0081-0086, "Subsequently, the weight determination module 150 determines a weight for each control value and provides the determined weight to the target control value determination module 160. In this case, the weight may be understood as a value indicating the proportion in which each control value is reflected in the target control value.
In some embodiments, the weight determination module 150 may determine a weight for each control value based on a communication delay provided by the delay determination module 120. A detailed description of this embodiment will be described in detail with reference to FIGS. 10 and 11.
In some embodiments, the weight determination module 150 may determine a weight for each control value in further consideration of the risk of collision with an object near the mobile robot 30, the risk of collision being provided by the information acquisition module 140. A detailed description of this embodiment will be described below with reference to FIG. 12.
In some embodiments, the weight determination module 150 may determine a weight for each control value in further consideration of the level of complexity of surrounding environments of a mobile robot 30, the level of complexity being provided by the information acquisition module 140. A detailed description of this embodiment will be described below with reference to FIG. 12.
Subsequently, the target control value determination module 160 determines a target control value based on a plurality of control values provided by the control value acquisition module 110 and a weight for each control value provided by the weight determination module 150.
For example, the target control value determination module 160 may determine the target control value based on the weighted average of the plurality of control values. However, this is merely illustrative of some embodiments of the present disclosure, and the technical scope of the present disclosure is not limited thereto.") and
… related to the traveling of the mobile object based on the final target speed and the final target turning angular velocity. (Paragraph 0063, "Since the target control value is a value that indicates a target control state (e.g., a steering angle and a speed) of the mobile robot 30, a final control value input to the mobile robot 30 may be different from the target control value. For example, the final control value may be generated by various control algorithms based on the target control value and a current control state (e.g., a current speed and a current steering angle) of the mobile robot 30. However, a detailed description thereof will be omitted in order not to obscure the subject matters of the present disclosure.")
Kim does not specifically discuss the two control inputs being both from operators or the movement being controlled by actuators. However, Abrams, in the same field of endeavor of autonomous control, teaches:
… of each of the operation terminals, (Column 19, Line 62-Column 20, Line 10, "Teleoperation (e.g., remote control) may include operation of an individual robot or one or more (e.g., a plurality of) robots at a distance. Teleoperation may be implemented over a network. Teleoperation may include wireless communication and data transfer mechanisms (e.g., using radio, ultrasonic, or infrared systems, other media such as a telephone or computer network, optical link or other wired communications like phase line carriers, or GSM networks, including using SMS to receive and transmit data). In some embodiments, a robot operator may teleoperate more than one robot simultaneously. In some embodiments, a plurality of operators may work in conjunction to teleoperate a single robot. In some embodiments, the robot may comprise multiple wireless networking devices for redundancy, such as, for example, a Wi-Fi connection and a cellular connection." as well as Column 35, Lines 4-52, "One or more (e.g., a plurality of) third party experts (also “experts” herein) may assist one or more (e.g., a plurality of) robots and one or more (e.g., a plurality of) robot users to perform jobs (e.g., with the aid of the system in FIG. 6). One or more (e.g., a plurality of) third party experts may assist one or more (e.g., a plurality of) robots, one or more (e.g., a plurality of) robot users and/or one or more (e.g., a plurality of) robot operators to perform jobs (e.g., with the aid of the system in FIG. 6). In some embodiments, the system may automatically identify an appropriate expert for a specific job and assign the job to the expert automatically without the robot user needing to be aware that a third party expert was assigned. In some embodiments, the user may choose an expert, or a robot operator may choose an expert. In some embodiments, the robot user may be automatically billed at a premium rate. 
The billing (e.g., the rate) may depend on the job being performed, the rate of the individual third party expert, or a combination thereof. In some embodiments, the robot user can sign up for a premium plan with a given (e.g., certain) quantity of expert activity included (e.g., based on time, jobs, cost or other factors). In some embodiments, the system may require approval or authorization from the robot user before a third party expert is engaged at a premium rate. In some embodiments, the third party expert may work in conjunction with a non-expert robot operator who may control the robot. The expert may be able to see the activity of the robot by watching a feed. Such monitoring may be as described elsewhere herein (e.g., in relation to monitoring by supervisor or operators). By working in conjunction with another operator, the expert may not have to learn to control the robot, but may simply instruct the operator. In some embodiments, the expert may be able to talk to the robot user through text-to-speech while simultaneously talking to the robot operator using audio. In some embodiments, the expert may talk directly to the robot user with audio, or send text to the robot operator (e.g., a remote human operator). In some embodiments, a video feed of the expert may appear on a video screen on the robot (e.g., in which case the robot can provide video conference functionality). In some embodiments, the system may comprise an expert marketplace from which a robot user or robot operator (e.g., a remote human operator) may choose experts to assist in jobs. The marketplace may include information about the expert, such as, for example, their experience, scores/ratings, skills, cost/rates, reviews, etc. The scores and ratings may be as described elsewhere herein (e.g., in relation to operators). 
In some embodiments, experts may be employees of the system provider, and may optionally be trained as robot operators.") … control one or more actuators (Column 2, Line 53-Column 3, Line 23, "A system may comprise one or more (e.g., a plurality of) robots. A robot may comprise one or more robotic arms (for example one, two, three, or four arms), in which robot arms may be mounted in a fixed location, or in which robot arms may be mounted on a mobile base. The mobile base may have legs or wheels, including 2 legs or wheels, 4 legs or wheels, or any number of legs or wheels, optionally including wheels mounted on legs. In some embodiments, a mobile base may use spherical wheels, omni-wheels, or mecanum wheels. In some embodiments, a robot with one or more arms mounted on a mobile base may be a flying robot (e.g., drone, which may have one or more fixed wings, or one or more rotating wings or propellers, e.g., a helicopter, quadcopters, six or eight propeller drone, etc.) In some embodiments, a mobile base may be intended to float in (and move through) water, either on the surface or submerged. In some embodiments a mobile robot will have no arms. In some embodiments, the moving components of the robot may be controlled by electric actuators; in other embodiments, pneumatic, hydraulic, thermoelectric, piezoelectric, ultrasonic or other actuators may be used. For example, actuators may include electric motors, linear actuators (e.g., pneumatic or hydraulic actuators), series elastic actuators (e.g., comprising a spring), air muscles, muscle wire (e.g., comprising shape memory alloy), electroactive polymers, piezo motors or ultrasonic motors, elastic nanotubes (e.g., carbon nanotubes), or any combination thereof. The actuators may be used to control the robot. The mechanical structure of a robot may be controlled to perform jobs. 
The control of a robot may involve perception, processing (e.g., translating raw sensor information directly into actuator commands, first using sensor fusion to estimate parameters of interest, such as, for example, the position of the robot's gripper from noisy sensor data and inferring an immediate job, such as, for example, moving the gripper in a certain direction from these estimates, applying techniques from control theory to convert the job into commands that drive the actuators) and action.") …
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system and operating methods taught by Kim with the teachings of Abrams that multiple operators may control the same robot and that motion may be controlled via actuators. While Kim does not teach two operators directly influencing the operation of the robot, Kim does disclose a plurality of operators providing input through a plurality of interfaces, wherein the input is used to train a model for autonomous control. It would have been obvious to modify this arrangement to incorporate the functionality taught by Abrams of a plurality of operators controlling a single robot, in order to provide a method in which two operators are both able to provide input at different weights, allowing a first operator to intervene when teaching a second operator.
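For illustration only, and not as a characterization of either reference's actual implementation, the weighted combination of two operators' inputs at different ratios for speed versus turning angular velocity may be sketched as follows; the function name, weight values, and input values are hypothetical:

```python
# Hypothetical sketch of combining two operators' commands at
# different ratios for speed versus turning angular velocity.
# Names, weights, and inputs are illustrative assumptions; they
# are not taken from Kim or Abrams.

def combine_commands(op1, op2, speed_ratio, turn_ratio):
    """Blend two (target_speed, target_turn_rate) command pairs.

    speed_ratio and turn_ratio are the weights applied to the first
    operator; the second operator receives the complement, so each
    pair of weights sums to 1.
    """
    v1, w1 = op1
    v2, w2 = op2
    final_speed = speed_ratio * v1 + (1.0 - speed_ratio) * v2
    final_turn = turn_ratio * w1 + (1.0 - turn_ratio) * w2
    return final_speed, final_turn

# A supervising first operator dominates speed (weight 0.8) while
# steering is shared equally (weight 0.5), so the two ratios differ.
v, w = combine_commands(op1=(1.0, 0.2), op2=(0.5, 0.4),
                        speed_ratio=0.8, turn_ratio=0.5)
```

In this hypothetical, raising the first operator's speed weight toward 1 lets a teacher override a trainee's speed commands while both still share steering authority.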
Regarding claim 2, for which all the limitations of claim 1 are discussed above, Kim further teaches:
2. The mobile object control system according to claim 1, wherein:
the operation terminals include a first operation terminal and a second operation terminal; (Paragraph 0069, "In detail, as shown in FIG. 6, the server 50 may receive driving data of a specific user from the plurality of remote control systems 20a to 20n, learn the driving data, and build a machine learning model associated with a driving pattern of each user. Also, the server 50 may predict and provide a control value (hereinafter referred to as a “pattern control value”) for the current driving environment of the mobile robot 30 through the built machine learning model. The pattern control value may also be used by the control apparatus 100 to generate the target control value, and a detailed description thereof will be described below with reference to FIG. 12. Also, an example method in which the server 50 learns the driving pattern will be described below with reference to FIG. 12.")
a coefficient of the first ratio for multiplication of the target speed based on the first operation amount of the first operation terminal is larger than a coefficient of the first ratio for multiplication of the target speed based on the first operation amount of the second operation terminal; and
a coefficient of the second ratio for multiplication of the target turning angular velocity based on the second operation amount of the first operation terminal is equal to a coefficient of the second ratio for multiplication of the target turning angular velocity based on the second operation amount of the second operation terminal. (Paragraph 0081, "Subsequently, the weight determination module 150 determines a weight for each control value and provides the determined weight to the target control value determination module 160. In this case, the weight may be understood as a value indicating the proportion in which each control value is reflected in the target control value." Please also see Figure 9)
Regarding claim 3, for which all the limitations of claim 2 are discussed above, Kim further teaches:
3. The mobile object control system according to claim 2, wherein:
the coefficient of the first ratio for multiplication of the target speed based on the first operation amount of the first operation terminal is 1; and
the coefficient of the first ratio for multiplication of the target speed based on the first operation amount of the second operation terminal is 0. (Paragraph 0081, "Subsequently, the weight determination module 150 determines a weight for each control value and provides the determined weight to the target control value determination module 160. In this case, the weight may be understood as a value indicating the proportion in which each control value is reflected in the target control value." Please also see Figure 9)
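As an illustrative sketch only (the function name and values are hypothetical, not drawn from either reference), coefficients of 1 and 0 for the first ratio reduce the speed combination to passing through the first terminal's target speed unchanged:

```python
# Hypothetical sketch of the claim 3 weighting: with first-ratio
# coefficients of 1 and 0, the combined target speed equals the
# first operation terminal's target speed, and the second
# terminal's speed input is ignored.

def combined_speed(v_first, v_second, c_first=1.0, c_second=0.0):
    # Weighted combination of the two terminals' target speeds
    # at the first ratio.
    return c_first * v_first + c_second * v_second

# The first terminal commands 1.2 m/s and the second 0.7 m/s;
# with coefficients (1, 0) only the first command takes effect.
result = combined_speed(1.2, 0.7)
```

This degenerate weighting gives the first terminal exclusive speed authority while, under claim 2, the turning angular velocity may still be shared at equal coefficients.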
Regarding claim 4, Kim teaches:
4. A mobile object control system configured to control traveling of a mobile object based on operation amount information (Paragraph 0017, "According to an aspect of the present disclosure, there is provided a method of controlling a mobile robot. The method being performed by a control apparatus comprises acquiring a first control value for the mobile robot, which is input through a remote control apparatus, acquiring a second control value for the mobile robot, which is generated by an autonomous driving module, determining a weight for each control value based on a delay between the mobile robot and the remote control apparatus and generating a target control value of the mobile robot in combination of the first control value and the second control value based on the determined weights, wherein a first weight for the first control value and a second weight for the second control value are inversely proportional to each other.") of a plurality of operation terminals operated by a plurality of operators, respectively, (Paragraph 0069, "In detail, as shown in FIG. 6, the server 50 may receive driving data of a specific user from the plurality of remote control systems 20a to 20n, learn the driving data, and build a machine learning model associated with a driving pattern of each user. Also, the server 50 may predict and provide a control value (hereinafter referred to as a “pattern control value”) for the current driving environment of the mobile robot 30 through the built machine learning model. The pattern control value may also be used by the control apparatus 100 to generate the target control value, and a detailed description thereof will be described below with reference to FIG. 12. Also, an example method in which the server 50 learns the driving pattern will be described below with reference to FIG. 12.") the mobile object control system comprising one or more processors, (Paragraph 0090, "Each step of the control method for the mobile robot according to some embodiments of the present disclosure, which will be described below, may be performed by a computing apparatus. That is, each step of the control method may be implemented with one or more instructions executed by a processor of the computing apparatus. All the steps of the control method may be executed by a single physical computing apparatus. However, first steps of the method may be performed by a first computing apparatus, and second steps of the method may be performed by a second computing apparatus. The following description assumes that each step of the control method is performed by the control apparatus 100. However, for convenience, the description of the operating subject of each step included in the control method may be omitted.") wherein the operation amount information includes a first operation amount and a second operation amount as operation amounts (Paragraph 0017, "According to an aspect of the present disclosure, there is provided a method of controlling a mobile robot. The method being performed by a control apparatus comprises acquiring a first control value for the mobile robot, which is input through a remote control apparatus, acquiring a second control value for the mobile robot, which is generated by an autonomous driving module, determining a weight for each control value based on a delay between the mobile robot and the remote control apparatus and generating a target control value of the mobile robot in combination of the first control value and the second control value based on the determined weights, wherein a first weight for the first control value and a second weight for the second control value are inversely proportional to each other.") … , and
the one or more processors are configured to:
calculate a first combined operation amount by combining the first operation amounts of the individual operation terminals at a first ratio;
calculate a second combined operation amount by combining the second operation amounts of the individual operation terminals at a second ratio different from the first ratio; (Paragraphs 0081-0086, "Subsequently, the weight determination module 150 determines a weight for each control value and provides the determined weight to the target control value determination module 160. In this case, the weight may be understood as a value indicating the proportion in which each control value is reflected in the target control value.
In some embodiments, the weight determination module 150 may determine a weight for each control value based on a communication delay provided by the delay determination module 120. A detailed description of this embodiment will be described in detail with reference to FIGS. 10 and 11.
In some embodiments, the weight determination module 150 may determine a weight for each control value in further consideration of the risk of collision with an object near the mobile robot 30, the risk of collision being provided by the information acquisition module 140. A detailed description of this embodiment will be described below with reference to FIG. 12.
In some embodiments, the weight determination module 150 may determine a weight for each control value in further consideration of the level of complexity of surrounding environments of a mobile robot 30, the level of complexity being provided by the information acquisition module 140. A detailed description of this embodiment will be described below with reference to FIG. 12.
Subsequently, the target control value determination module 160 determines a target control value based on a plurality of control values provided by the control value acquisition module 110 and a weight for each control value provided by the weight determination module 150.
For example, the target control value determination module 160 may determine the target control value based on the weighted average of the plurality of control values. However, this is merely illustrative of some embodiments of the present disclosure, and the technical scope of the present disclosure is not limited thereto.")
calculate a final target speed that is a target value of a speed in a traveling direction of the mobile object based on the first combined operation amount;
calculate a final target turning angular velocity that is a target value of a turning angular velocity of the mobile object based on the second combined operation amount; (Paragraphs 0092-0093, "Referring to FIG. 8, the control method for the mobile robot according to the first embodiment begins with step S100 in which the control apparatus 100 acquires a remote control value that is input through the remote control system 20. In this case, the remote control value may include control values for a speed, a steering angle, and the like of the mobile robot 30.
In step S200, the control apparatus 100 acquires an autonomous control value generated by the autonomous driving module. The autonomous control value may also include control values for a speed, a steering angle, and the like of the mobile robot 30." as well as Paragraph 0110, "The control method for the mobile robot according to the first embodiment of the present disclosure has been described with reference to FIGS. 8 to 12. According to the above-described method, the target control value is generated based on a combination of the control values. Thus, it is possible to alleviate a problem of the operation of the mobile robot being unstable due to iterative control mode changes and a problem of an operator's sense of difference in operation being maximized. Furthermore, by adjusting a weight for each control value, it is possible to minimize the risk of accident of the mobile robot even without the control mode switching.") and
… related to the traveling of the mobile object based on the final target speed and the final target turning angular velocity. (Paragraph 0063, "Since the target control value is a value that indicates a target control state (e.g., a steering angle and a speed) of the mobile robot 30, a final control value input to the mobile robot 30 may be different from the target control value. For example, the final control value may be generated by various control algorithms based on the target control value and a current control state (e.g., a current speed and a current steering angle) of the mobile robot 30. However, a detailed description thereof will be omitted in order not to obscure the subject matters of the present disclosure.")
Kim does not specifically discuss the two control inputs being both from operators or the movement being controlled by actuators. However, Abrams, in the same field of endeavor of autonomous control, teaches:
… of each of the operation terminals (Column 19, Line 62-Column 20, Line 10, "Teleoperation (e.g., remote control) may include operation of an individual robot or one or more (e.g., a plurality of) robots at a distance. Teleoperation may be implemented over a network. Teleoperation may include wireless communication and data transfer mechanisms (e.g., using radio, ultrasonic, or infrared systems, other media such as a telephone or computer network, optical link or other wired communications like phase line carriers, or GSM networks, including using SMS to receive and transmit data). In some embodiments, a robot operator may teleoperate more than one robot simultaneously. In some embodiments, a plurality of operators may work in conjunction to teleoperate a single robot. In some embodiments, the robot may comprise multiple wireless networking devices for redundancy, such as, for example, a Wi-Fi connection and a cellular connection." as well as Column 35, Lines 4-52, "One or more (e.g., a plurality of) third party experts (also “experts” herein) may assist one or more (e.g., a plurality of) robots and one or more (e.g., a plurality of) robot users to perform jobs (e.g., with the aid of the system in FIG. 6). One or more (e.g., a plurality of) third party experts may assist one or more (e.g., a plurality of) robots, one or more (e.g., a plurality of) robot users and/or one or more (e.g., a plurality of) robot operators to perform jobs (e.g., with the aid of the system in FIG. 6). In some embodiments, the system may automatically identify an appropriate expert for a specific job and assign the job to the expert automatically without the robot user needing to be aware that a third party expert was assigned. In some embodiments, the user may choose an expert, or a robot operator may choose an expert. In some embodiments, the robot user may be automatically billed at a premium rate. 
The billing (e.g., the rate) may depend on the job being performed, the rate of the individual third party expert, or a combination thereof. In some embodiments, the robot user can sign up for a premium plan with a given (e.g., certain) quantity of expert activity included (e.g., based on time, jobs, cost or other factors). In some embodiments, the system may require approval or authorization from the robot user before a third party expert is engaged at a premium rate. In some embodiments, the third party expert may work in conjunction with a non-expert robot operator who may control the robot. The expert may be able to see the activity of the robot by watching a feed. Such monitoring may be as described elsewhere herein (e.g., in relation to monitoring by supervisor or operators). By working in conjunction with another operator, the expert may not have to learn to control the robot, but may simply instruct the operator. In some embodiments, the expert may be able to talk to the robot user through text-to-speech while simultaneously talking to the robot operator using audio. In some embodiments, the expert may talk directly to the robot user with audio, or send text to the robot operator (e.g., a remote human operator). In some embodiments, a video feed of the expert may appear on a video screen on the robot (e.g., in which case the robot can provide video conference functionality). In some embodiments, the system may comprise an expert marketplace from which a robot user or robot operator (e.g., a remote human operator) may choose experts to assist in jobs. The marketplace may include information about the expert, such as, for example, their experience, scores/ratings, skills, cost/rates, reviews, etc. The scores and ratings may be as described elsewhere herein (e.g., in relation to operators). 
In some embodiments, experts may be employees of the system provider, and may optionally be trained as robot operators.") … control one or more actuators (Column 2, Line 53-Column 3, Line 23, "A system may comprise one or more (e.g., a plurality of) robots. A robot may comprise one or more robotic arms (for example one, two, three, or four arms), in which robot arms may be mounted in a fixed location, or in which robot arms may be mounted on a mobile base. The mobile base may have legs or wheels, including 2 legs or wheels, 4 legs or wheels, or any number of legs or wheels, optionally including wheels mounted on legs. In some embodiments, a mobile base may use spherical wheels, omni-wheels, or mecanum wheels. In some embodiments, a robot with one or more arms mounted on a mobile base may be a flying robot (e.g., drone, which may have one or more fixed wings, or one or more rotating wings or propellers, e.g., a helicopter, quadcopters, six or eight propeller drone, etc.) In some embodiments, a mobile base may be intended to float in (and move through) water, either on the surface or submerged. In some embodiments a mobile robot will have no arms. In some embodiments, the moving components of the robot may be controlled by electric actuators; in other embodiments, pneumatic, hydraulic, thermoelectric, piezoelectric, ultrasonic or other actuators may be used. For example, actuators may include electric motors, linear actuators (e.g., pneumatic or hydraulic actuators), series elastic actuators (e.g., comprising a spring), air muscles, muscle wire (e.g., comprising shape memory alloy), electroactive polymers, piezo motors or ultrasonic motors, elastic nanotubes (e.g., carbon nanotubes), or any combination thereof. The actuators may be used to control the robot. The mechanical structure of a robot may be controlled to perform jobs. 
The control of a robot may involve perception, processing (e.g., translating raw sensor information directly into actuator commands, first using sensor fusion to estimate parameters of interest, such as, for example, the position of the robot's gripper from noisy sensor data and inferring an immediate job, such as, for example, moving the gripper in a certain direction from these estimates, applying techniques from control theory to convert the job into commands that drive the actuators) and action.") …
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the system and operating methods taught by Kim with the ability to have multiple operators control the same robot, and to have the motion controlled via actuators, as taught by Abrams. While Kim does not teach the combination of two operators directly influencing the operation of the robot, Kim does discuss a plurality of operators providing input to a plurality of interfaces, wherein the input is used to train a model for autonomous control. It would have been obvious to modify this arrangement to incorporate the functionality taught by Abrams of a plurality of users controlling a single robot, in order to provide a method in which two operators are both able to provide input at different weights, thereby allowing a first operator to intervene when teaching a second operator.
Regarding claim 5, wherein all the limitations of claim 4 are discussed above, Kim further teaches:
5. The mobile object control system according to claim 4, wherein:
the operation terminals include a first operation terminal and a second operation terminal; (Paragraph 0069, " In detail, as shown in FIG. 6, the server 50 may receive driving data of a specific user from the plurality of remote control systems 20a to 20n, learn the driving data, and build a machine learning model associated with a driving pattern of each user. Also, the server 50 may predict and provide a control value (hereinafter referred to as a “pattern control value”) for the current driving environment of the mobile robot 30 through the built machine learning model. The pattern control value may also be used by the control apparatus 100 to generate the target control value, and a detailed description thereof will be described below with reference to FIG. 12. Also, an example method in which the server 50 learns the driving pattern will be described below with reference to FIG. 12.")
a coefficient of the first ratio for multiplication of the first operation amount of the first operation terminal is larger than a coefficient of the first ratio for multiplication of the first operation amount of the second operation terminal; and
a coefficient of the second ratio for multiplication of the second operation amount of the first operation terminal is equal to a coefficient of the second ratio for multiplication of the second operation amount of the second operation terminal. (Paragraph 0081, " Subsequently, the weight determination module 150 determines a weight for each control value and provides the determined weight to the target control value determination module 160. In this case, the weight may be understood as a value indicating the proportion in which each control value is reflected in the target control value." Please also see Figure 9)
Conclusion
The Examiner has cited particular paragraphs or columns and line numbers in the references applied to the claims above for the convenience of the Applicant. Although the specified citations are representative of the teachings of the art and are applied to specific limitations within the individual claims, other passages and figures may apply as well. It is respectfully requested of the Applicant, in preparing responses, to fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner. See MPEP 2141.02 [R-07.2015] VI. A prior art reference must be considered in its entirety, i.e., as a whole, including portions that would lead away from the claimed invention. W.L. Gore & Associates, Inc. v. Garlock, Inc., 721 F.2d 1540, 220 USPQ 303 (Fed. Cir. 1983), cert. denied, 469 U.S. 851 (1984). See also MPEP § 2123.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HEATHER KENIRY whose telephone number is (571)270-5468. The examiner can normally be reached M-F 7:30-5:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Adam Mott can be reached at (571) 270-5376. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/H.J.K./Examiner, Art Unit 3657
/ADAM R MOTT/Supervisory Patent Examiner, Art Unit 3657