DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
This communication is a first office action, non-final rejection on the merits. Claims 1-20, as filed, are currently pending and have been considered below.
Specification
The disclosure is objected to because of the following informalities: unclear labeling.
Paragraph 0079 labels the “control-determining subsystem” both 120 and 130. Additionally, the “control-determining subsystem” and the “autonomous control subsystem” are both labeled 130. The examiner suggests labeling the “control-determining subsystem” ONLY as 120 and applying the label 130 ONLY to the “autonomous control subsystem” to maintain consistency with the drawings.
Appropriate correction is required.
Claim Objections
Claim 3 is objected to because of the following informalities: grammatical error. Claim 3 contains a succession of wherein clauses; suggest changing to “…autonomous control, and wherein artificial intelligence…”
Claim 4 is objected to because of the following informalities: grammatical error. Claim 4 contains a succession of wherein clauses; suggest changing to “…semi-autonomous control, and wherein a human operator …”
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 14-18 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 14 recites the limitation "the hardware" in line 1. There is insufficient antecedent basis for this limitation in the claim.
Claim 15 recites the limitation "the hardware" in line 1. There is insufficient antecedent basis for this limitation in the claim.
Claim 16 recites the limitation "the exoskeleton" in line 1. There is insufficient antecedent basis for this limitation in the claim.
Claim 17 recites the limitation "the hardware" in line 1. There is insufficient antecedent basis for this limitation in the claim.
Claim 18 recites the limitation "the hardware" in line 1. There is insufficient antecedent basis for this limitation in the claim.
The examiner believes claims 14-15 and 17-18 were intended to depend upon claim 13, which introduces the hardware, and claim 16 was intended to depend upon claim 15, which introduces the exoskeleton. Claims 14-15 and 17-18 will be examined in light of prior art as if they depended upon claim 13, and claim 16 as if it depended upon claim 15.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 13, 17, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Kojima et al. (K. Kojima, T. Karasawa, T. Kozuki, E. Kuroiwa, S. Yukizaki, S. Iwaishi, T. Ishikawa, R. Koyama, S. Noda, F. Sugai, S. Nozawa, Y. Kakiuchi, K. Okada, M. Inaba. "Development of life-sized high-power humanoid robot JAXON for real-world use," 2015 IEEE-RAS 15th International Conference on Humanoid Robots (Humanoids), Seoul, Korea (South), 2015, pp. 838-843, doi: 10.1109/HUMANOIDS.2015.7363459. hereinafter “Kojima”) in view of Ishiguro et al. (Y. Ishiguro, K. Kojima, F. Sugai, S. Nozawa, Y. Kakiuchi, K. Okada, M. Inaba. "High Speed Whole Body Dynamic Motion Experiment with Real Time Master-Slave Humanoid Robot System," 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, QLD, Australia, 2018, pp. 5835-5841, doi: 10.1109/ICRA.2018.8461207. hereinafter “Ishiguro 2018”), and further in view of Ishiguro et al. (Y. Ishiguro, T. Makabe, Y. Nagamatsu, Y. Kojio, K. Kojima, F. Sugai, Y. Kakiuchi, K. Okada, M. Inaba. "Bilateral Humanoid Teleoperation System Using Whole-Body Exoskeleton Cockpit TABLIS," in IEEE Robotics and Automation Letters, vol. 5, no. 4, pp. 6419-6426, Oct. 2020, doi: 10.1109/LRA.2020.3013863. hereinafter “Ishiguro 2020”).
Regarding Claim 1, Kojima discloses in figure 1 and column 2 of page 839
A control system for controlling operation of a robot in an environment, comprising:
a robot, including:
a plurality of sensors configured to convert information from the environment and the robot into sensor data, wherein each sensor of the plurality of sensors generates a stream of raw sensor data having a data type, a size, and a frequency;
Kojima pertains to the development of the humanoid robot JAXON and details the plurality of sensors as “6-axes force sensors in each sole and hand, posture sensors (IMU) in waist link, and Multisense SL in head link” (column 2 of page 839). It is understood by those of ordinary skill in the art that such sensor data inherently has a data type, a size, and a frequency. Kojima further discloses
a plurality of actuators configured to cause movement of the robot;
in column 2 of page 839 “JAXON has MAXON EC-4pole 30 200W motors, harmonic drive speed reducers,” and further in column 1 of page 840 “All of JAXON’s joints use a high-speed and high-torque actuation system.”
Kojima does not teach autonomous control or teleoperation. However, researchers further developed the JAXON robot, and Ishiguro 2020 discloses in figure 3 and column 1 of page 6423
an autonomous control subsystem, communicatively coupled to the robot, and configured to receive the sensor data from the plurality of sensors and output autonomous actuator data to the plurality of actuators; and
Ishiguro 2020 details the autonomous control subsystem: “This subsystem monitors p^ref_ZMP, p^act_ZMP, and ṗ^ref_DCM. p^ref_ZMP is a reference ZMP derived from the reference COM state p^ref_G, p̈^ref_G. p^act_ZMP is an actual ZMP measured by the robot foot force sensors.” Ishiguro 2020 further discloses in figures 1 and 3
a teleoperation control subsystem, communicatively coupled to the robot, and configured to receive the sensor data from [the] plurality of sensors, transmit the sensor data to a human operator, and output teleoperation actuator control signals to the plurality of actuators, wherein the teleoperation actuator data are generated according to input from the human operator;
Ishiguro 2020 specifies how the sensor data from the JAXON robot is transmitted to the human operator in the TABLIS teleoperation system. Additionally, the TABLIS takes the input of the human operator and outputs control signals to the actuators of the JAXON robot. Ishiguro 2020 further discloses in figure 17 and column 1 page 6425
wherein the autonomous control subsystem and the teleoperation control subsystem receive, from the robot, sensor data having the same data type, the same size, and the same frequency; and
where the sensor data of the robot (slave lleg act pos Z, fig 17(b)) having the same type, size, and frequency is sent to both the automatic foot contact control subsystem and TABLIS teleoperation control subsystem.
Thus, it would have been known to those of ordinary skill in the art to combine the JAXON robot of Kojima with the autonomous control subsystem and TABLIS teleoperation system of Ishiguro 2020. Low level teleoperation control of robot balance is very difficult for human operators. Having both systems on the robot is useful so the robot can autonomously move its feet to maintain balance while the human teleoperator uses low-level teleoperation to manipulate objects with the arms and hands.
Kojima does not teach the teleoperation data type. However, Ishiguro 2018 discloses in figures 2, 7, 8, and 11 and table 1
wherein the autonomous actuator data and the teleoperation actuator data have the same data type, the same size, and the same frequency; and
In this experiment the researchers sent the joint angles and ref ZMP from the human teleoperator to the JAXON robot to produce the motions of figure 7 and the measured displacements of table 1 and figure 11. Researchers also had the autonomous control subsystem “COM movement and footwork” send joint angles and ref ZMP having the same data type, the same size, and the same frequency to the JAXON robot to produce the motions of figure 8 and the measured displacements of table 1 and figure 11. Ishiguro 2018 further discloses in column 2 page 5838
a control-determining subsystem configured to determine which of the autonomous control subsystem and the teleoperation control subsystem controls the robot.
The determination to control the robot via teleoperation or autonomous control was described as “In the Fig.7, we disabled the COM movement and foot support change. On the other hand, we applied our proposed methods and the result is shown in the Fig.8.” It is not specified how exactly this determination was made. However, it is specified that the JAXON software system is constructed with RTM·ROS interoperation technology. Therefore, it would have been obvious to one of ordinary skill in the art to encode an RTM·ROS control-determining subsystem, such as a user checkbox on a GUI, to choose which control system is operational.
Thus, it would have been known to those of ordinary skill in the art to further combine the JAXON robot of Kojima with the TABLIS teleoperation data type and control-determining system of Ishiguro 2018 to successfully control operation of a robot in an environment. Having the data type of the TABLIS be the same as the data type of the autonomous control subsystem improves the performance of the JAXON, as the JAXON actuators only need to recognize and respond to a single data type. The addition of a control-determining system to the JAXON robot would allow the operator to easily choose between autonomous control or teleoperation.
Regarding Claim 13, Kojima in view of Ishiguro 2020 and Ishiguro 2018 discloses all the limitations of claim 1, and Ishiguro 2020 further discloses wherein the teleoperation system is a low-level teleoperation (LLT) system including hardware for transmitting sensor data from the robot to the human operator as sensory information and hardware for tracking the motion of the human operator. The TABLIS teleoperation system includes LLT functionality (“While using a low-level abstract command, the operator inputs their feet/COM force/position directly” column 1 page 6420), hardware to provide sensor data from the JAXON robot to the human operator as sensory feedback in the TABLIS (“The hand end-effector has 6-DOF wrench feedback from the slave side, but the foot end-effector is controlled by the quasi-3D floor reproduction instead, and only the Fx and Fy components of force are directly feedbacked.” column 2 page 6423), and hardware for tracking motion of the human operator (figures 4, 8, and 9).
Regarding Claim 17, Kojima in view of Ishiguro 2020 and Ishiguro 2018 discloses all the limitations of claim 1, and Ishiguro 2018 further discloses in figure 3 and column 2 of page 5835 wherein the hardware includes a headset for providing visual and auditory information about the robot and the remote environment to the human operator where it details the use of the HTC Vive head mounted display to present the audio and visual information about the JAXON and the remote environment to the human operator. A head mounted display is also used within the TABLIS system as seen in figure 1 of Ishiguro 2020.
Regarding Claim 19, Kojima in view of Ishiguro 2020 and Ishiguro 2018 discloses all the limitations of claim 1, and Kojima further discloses in column 2 of page 839 wherein the sensor data includes at least one of audio sensor data, joint position data, pressure data, force sensitive resistor data, mobile base wheel encoder data, inertial measurement unit data, and visual data where it is stated that the JAXON robot contains “6-axes force sensors in each sole and hand, posture sensors (IMU) in waist link, and Multisense SL in head link.”
Regarding Claim 20, Kojima in view of Ishiguro 2020 and Ishiguro 2018 discloses all the limitations of claim 1, and Ishiguro 2020 further discloses in figures 3 and 8 and column 2 of page 6423 wherein the actuator data includes at least one of audio data, joint position data, impedance data, and mobile base motion data where the TABLIS actuator data consists of at least: effector pose, head pose, COM pose, and the “synchronization of the 6-DOF force/position of the feet of the operator/robot.”
Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Kojima in view of Ishiguro 2020 and Ishiguro 2018 as applied to claim 1 above, and further in view of Gu et al. (U.S. Patent Application 20090089700 A1, hereinafter “Gu”).
Regarding Claim 2, Kojima in view of Ishiguro 2020 and Ishiguro 2018 discloses all the limitations of claim 1 but does not teach the simulated robot in the simulated environment. However, Gu teaches in figure 5 and 0045 wherein the robot is interchangeable between a physical robot in a physical environment (106) and a simulated robot in a simulated environment (108) wherein:
the plurality of sensors of the physical robot comprise a plurality of physical sensors generating physical sensor data from the physical environment (118), and the plurality of actuators of the physical robot comprises a plurality of physical actuators receiving physical actuator data (116);
the plurality of sensors of the simulated robot comprises a plurality of simulated sensors generating simulated sensor data from the simulated environment (128), and the plurality of actuators of the simulated robot comprises a plurality of simulated actuators receiving simulated actuator data (130); wherein:
Furthermore, Gu discloses in 0045 and 0047
each simulated sensor is analogous to a respective physical sensor wherein the simulated sensor data is approximately the same as the respective physical sensor data; and
each simulated actuator is analogous to a respective physical actuator wherein the simulated actuator data is approximately the same as the respective physical actuator data.
where the mapping and coupling of signals is discussed. A simulated sensor/actuator is mapped to a respective physical sensor/actuator when the data is approximately the same. Therefore, it would have been known to one of ordinary skill in the art to combine the control system of Ishiguro 2018 with the physical and/or simulated robotic environment of Gu to achieve the claimed autonomous and/or teleoperated control of a physical and/or simulated robot in order to improve training and reduce the wear and tear of robotic systems.
Claims 3-5 are rejected under 35 U.S.C. 103 as being unpatentable over Kojima in view of Ishiguro 2020 and Ishiguro 2018 as applied to claim 1 above, and further in view of Kakiuchi et al. (Y. Kakiuchi, K. Kojima, E. Kuroiwa, S. Noda, M. Murooka, I. Kumagai, R. Ueda, F. Sugai, S. Nozawa, K. Okada, M. Inaba, "Development of humanoid robot system for disaster response through team NEDO-JSK's approach to DARPA Robotics Challenge Finals," 2015 IEEE-RAS 15th International Conference on Humanoid Robots (Humanoids), Seoul, Korea (South), 2015, pp. 805-810, doi: 10.1109/HUMANOIDS.2015.7363446. hereinafter “Kakiuchi”).
Regarding Claim 3, Kojima in view of Ishiguro 2020 and Ishiguro 2018 discloses all the limitations of claim 1 and Kakiuchi discloses in figure 8 and column 2 of page 808 wherein the autonomous control subsystem controls the robot with fully autonomous control, wherein artificial intelligence determines the autonomous actuator data where the Field Computer layer provides autonomous control via EusLisp and ROS. The Field Computer layer of Kakiuchi is a base device, and the claimed use of artificial intelligence can be seen as an improvement upon the known programming technique of EusLisp and ROS. Thus, one of ordinary skill in the art would have recognized that applying fully autonomous control via artificial intelligence to the Field Computer of Kakiuchi would improve the system in a predictable manner.
Regarding Claim 4, Kojima in view of Ishiguro 2020 and Ishiguro 2018 discloses all the limitations of claim 1 and Kakiuchi discloses in figure 8 and column 1 of page 809 wherein the autonomous control subsystem controls the robot with semi-autonomous control, wherein a human operator determines the autonomous actuator data where it states “We aimed to develop tele-operation interface which can switch between autonomous robot behavior with operator’s suggestion and direct operation by a operator.” Thus, one of ordinary skill in the art would have recognized that applying semi-autonomous control determined by a human to the JAXON robot would improve performance in standard tasks.
Regarding Claim 5, Kojima in view of Ishiguro 2020 and Ishiguro 2018 discloses all the limitations of claim 1 and Kakiuchi discloses in figure 8 and column 1 of page 809 wherein the autonomous control subsystem includes a graphical user interface to display the sensor data for viewing by the human operator and an input device for receiving instructions from the human operator where it states “We aimed to develop tele-operation interface which can switch between autonomous robot behavior with operator’s suggestion and direct operation by a operator. Our user interface consists of a 2-D GUI, a 3-D GUI and input devices. They are in OCS layer.” Thus, one of ordinary skill in the art would have recognized that employing a GUI to display sensor data and receive instructions would make it easier for a teleoperator to control the JAXON robot with semi-autonomous control.
Claims 6-9 are rejected under 35 U.S.C. 103 as being unpatentable over Kojima in view of Ishiguro 2020 and Ishiguro 2018 as applied to claim 1 above, and further in view of Tiwara et al. (U.S. Patent Application 20180267558, hereinafter “Tiwara”).
Regarding Claim 6, Kojima in view of Ishiguro 2020 and Ishiguro 2018 discloses all the limitations of claim 1 but does not teach an autonomous control subsystem with a feature extraction module. However, Tiwara teaches wherein the autonomous control subsystem includes a feature extraction module which is configured to receive at least one sensor data stream from the robot and convert the at least one sensor data stream to features, wherein features are semantically meaningful information. Tiwara pertains to the autonomous, semi-autonomous, or teleoperated control of a vehicle and describes in 0052: “In a specific example, the CPU 132 outputs an object localization dataset (e.g., foreign vehicle location, lane demarcation data, roadway shoulder location, etc.) computed from a data stream (e.g., video stream, image-like data stream, etc.) using a gradient-based feature extraction ruleset (e.g., histogram oriented gradients, edge-detection, etc.). However, the CPU 132 can output any suitable feature value.” Tiwara provides examples of suitable feature values, or semantically meaningful information, in 0061: “extract features from the data (e.g., recognizing objects, classifying objects, tracking object trajectories, etc.)” Therefore, it would have been known to one of ordinary skill in the art to modify the autonomous control subsystem of Ishiguro 2018 with the feature extraction module of Tiwara to obtain semantically meaningful information to aid the teleoperator in the remote control of a robot.
Regarding Claim 7, Tiwara discloses all the limitations of claim 6, and Tiwara further discloses in 0061 wherein the features includes at least one of: location of detected objects, orientation of detected objects, labels of detected objects, mapping of the environment, text extracted from speech, text extracted from visual feed, facial recognition labels, presence of hand in the scene, joint states for actuators of the robot, and faces in a field of view where the specific features of “recognizing objects, classifying objects, tracking object trajectories, etc.” are discussed.
Regarding Claim 8, Tiwara discloses all the limitations of claim 6, and Tiwara further discloses in 0054 wherein the feature extraction module includes a plurality of specialized submodules to each extract a feature from the at least one sensor data stream. Tiwara employs a confidence metric to apply a score to the specialized submodules of the feature extraction module that extract features from at least one sensor data stream, such as the imaging subsystem and/or ranging subsystem.
Regarding Claim 9, Tiwara discloses all the limitations of claim 8, and Tiwara further discloses in 0054 further comprising an attention module which is configured to turn on and off at least one of the specialized submodules where a binary confidence metric score can be used to turn off/on at least one of the specialized submodules.
Claims 10-12 are rejected under 35 U.S.C. 103 as being unpatentable over Kojima in view of Ishiguro 2020, Ishiguro 2018, and Tiwara as applied to claim 6 above, and further in view of Farlow et al. (U.S. Patent Application 20110288684, hereinafter “Farlow”).
Regarding Claim 10, Kojima in view of Ishiguro 2020, Ishiguro 2018, and Tiwara discloses all the limitations of claim 6 but does not teach a robot-perceived model of the environment, a simulated environment, nor the sensor and actuator data types. However, Farlow teaches in figures 13 and 18B and 0183 wherein the autonomous control subsystem (510) continuously generates a robot-perceived model of the environment (1820) of the robot (100) based on the features extracted from the robot sensor data (400) by the feature extraction module (600d or “reasoning software” of 0115). Additionally, Farlow details in 0195 how remote operators can attend a training course using a simulated robot that roams a simulated space (using simulator 1670). As described with respect to claim 2 above, Gu discloses wherein the data type, data size, and data frequency of the sensor data and the actuator data are the same for a physical environment and a simulated environment, but does not teach this for the robot-egocentric model; however, Farlow teaches the robot-egocentric model (1820). Therefore, it would have been known to one of ordinary skill in the art to apply the feature extraction module of the autonomous control subsystem of Ishiguro 2018 in view of Tiwara to the actuator and sensor data of the same type/size/frequency of Ishiguro 2018 and Ishiguro 2020 to obtain the robot-perceived model of the environment of Farlow in order to achieve the overarching goal of improving training and reducing the wear and tear of robotic systems.
Regarding Claim 11, Kojima in view of Ishiguro 2020, Ishiguro 2018, Tiwara and Farlow discloses all the limitations of claim 10, and Farlow further discloses in 0199 wherein the autonomous control subsystem tests actuator data within the robot-egocentric model to determine the effects of an actuator data driven action before sending the actuator data to the robot. The autonomous robot control of Farlow includes a simulator 1670 to determine the effects of actuator driven actions, particularly with path planning: “The simulator 1670 may allow debugging and testing of applications 1610 without connectivity to the robot 100. The simulator 1670 can model or simulate operation of the robot 100 without actually communicating with the robot 100 (e.g., for path planning and accessing map databases). For executing simulations, in some implementations, the simulator 1670 produces a map database (e.g., from a layout map 1810) without using the robot 100. This may involve image processing (e.g., edge detection) so that features (like walls, corners, columns, etc.) are automatically identified. The simulator 1670 can use the map database to simulate path planning in an environment dictated by the layout map 1810.”
Regarding Claim 12, Kojima in view of Ishiguro 2020, Ishiguro 2018, Tiwara and Farlow discloses all the limitations of claim 10, and Farlow further discloses in 0172 wherein the autonomous control subsystem includes a concrete state representation updater which provides data about the current state of the remote environment as understood by the autonomous control subsystem. The autonomous robot control of Farlow includes autonomous navigation which stores and updates the current state of the remote environment as understood by the autonomous control subsystem in map 1820, which is a form of a concrete state representation.
Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Kojima in view of Ishiguro 2020 and Ishiguro 2018 as applied to claim 13 above, and further in view of Du et al. (U.S. Patent Application 20240012412, hereinafter “Du”).
Regarding Claim 14, Kojima in view of Ishiguro 2020 and Ishiguro 2018 discloses all of the limitations of claim 13 but does not teach the use of haptic gloves. However, Du teaches wherein the hardware includes haptic gloves in 0070 where the use of haptic gloves (specifically HaptX brand gloves) for teleoperation is detailed. Therefore, it would have been known to one of ordinary skill in the art to modify the hand controllers of the TABLIS teleoperation system of Ishiguro 2020 with the HaptX haptic gloves of Du to enable the operator a finer level of control when remotely controlling a robot.
Claims 15 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Kojima in view of Ishiguro 2020 and Ishiguro 2018 as applied to claim 13 above, and further in view of Reese et al. (U.S. Patent Application 20220219314, hereinafter “Reese”).
Regarding Claim 15, Kojima in view of Ishiguro 2020 and Ishiguro 2018 discloses all of the limitations of claim 13, but does not teach the use of an exoskeleton that uses forward kinematics to determine position and orientation of the operator. However, Reese teaches in figures 1 and 27 and 0208 wherein the hardware includes an exoskeleton (1000) configured to use encoder measurements (2001) and forward kinematics to determine a position and an orientation of the limbs of the human operator. Therefore, it would have been known to one of ordinary skill in the art to modify the use of inverse kinematics to determine the operator position as in Ishiguro 2020 with the use of forward kinematics to determine the operator position as in Reese because it allows for more direct matching of limb pose between operator and robot.
Regarding Claim 16, Reese discloses all of the limitations of claim 15, and Reese further discloses in 0052 wherein the exoskeleton includes motors within the joints to provide force feedback to the human operator where it is stated “By the possibility of localizing the motors in the proximity of the joints means of force- and power-transmission are saved and the mechanism is simplified” and furthermore “The largely anthropomorphic behavior of the exoskeleton allows stable mounting to the user over large parts of its body and thus simplifies the generation of haptic feedback and the utilization of tactile in- and output units.” Therefore, it would have been known to one of ordinary skill in the art to modify the inverse kinematics exoskeleton of the TABLIS system of Ishiguro 2020 with the forward kinematics exoskeleton of Reese to apply force feedback to multiple joints of the human operator’s limbs.
Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Kojima in view of Ishiguro 2020 and Ishiguro 2018 as applied to claim 13 above, and further in view of Gildert et al. (U.S. Patent 10180733, hereinafter “Gildert”).
Regarding Claim 18, Kojima in view of Ishiguro 2020 and Ishiguro 2018 discloses all of the limitations of claim 13, but does not teach the use of pedals in teleoperation. However, Gildert teaches in figure 6A and 0097 wherein the hardware includes bidirectional pedals (604) to control a mobile base of the robot where it is stated “Force sensors 608 in response to force applied to the frame produce information that represents a first force component in a first direction with respect to the frame, and a second force component in a second direction with respect to the frame.” Therefore, it would have been known to one of ordinary skill in the art to modify the foot controllers of the TABLIS teleoperation system of Ishiguro 2020 with the pedals of Gildert to enable the operator to bidirectionally control the mobile base of a robot with their feet.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
• US 5086400 discloses combined autonomous and teleoperation control
• US 10824142 discloses determining and switching between autonomous and teleoperation control
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Nathan Daniel Neckel whose telephone number is (571)272-9537. The examiner can normally be reached M-F, 7-3.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Wade Miles can be reached at 571-270-7777. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/NATHAN DANIEL NECKEL/Examiner, Art Unit 3656
/WADE MILES/Supervisory Patent Examiner, Art Unit 3656