DETAILED ACTION
This Office Action is in response to the application filed on 01/28/2025. Claims 1-20 are pending and have been examined.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statements (IDS) were submitted on 02/04/2026, 12/04/2025, 11/11/2025, 10/30/2025, 09/30/2025, 04/09/2025 and 02/27/2025. The submissions are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claim 1 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 15 of U.S. Patent No. 11,794,359. Although the claims at issue are not identical, they are not patentably distinct from each other, as shown in the comparison below.
Application No. 19/038,997, Claim 1:
A method for operating a remotely located robotic unit, comprising:
providing the remotely located robotic unit, comprising: a camera unit comprising at least one camera for capturing visual information and at least one three-dimensional camera for capturing three-dimensional depth information; a first robotic arm; and a second robotic arm; positioning the remotely located robotic unit proximate to an object; responsive to capturing the visual information using the at least one camera, causing display of the visual information on a display of a control system associated with the remotely located robotic unit;
responsive to capturing the three-dimensional depth information, creating a three-dimensional representation of the object based at least in part on the three-dimensional depth information;
receiving, from a user associated with the control system, an instruction for the remotely located robotic unit to perform an action, wherein the instruction is received from a head-mounted controller, a first hand-held controller, and a second hand-held controller capturing movement data of the user, and wherein the head-mounted controller controls the at least one camera and the at least one three-dimensional camera, the first hand-held controller controls the first robotic arm, and the second hand-held controller controls the second robotic arm;
responsive to receiving the instruction, comparing the instruction to the three-dimensional representation; and causing the remotely located robotic unit to perform the action.
U.S. Patent No. 11,794,359, Claim 15:
A method of performing aerial work, the method comprising:
causing a capturing of sensory information from at least one capture device disposed on a robot unit,
wherein the sensory information comprise video captured from at least one camera and three-dimensional depth information captured from a three-dimensional depth camera,
wherein the at least one camera and the three-dimensional depth camera are mounted on a camera mount;
receiving the sensory information and creating a three-dimensional representation of an object based at least in part on the three-dimensional depth information;
causing display of the video captured from the at least one camera on a display of a control system;
receiving an instruction for the robot unit to perform an action from an inputting user associated with the control system,
wherein the instruction is received from a head mounted controller, a first hand-held controller, and a second hand-held controller capturing movement data of the inputting user, wherein the head mounted controller controls the camera mount, the first hand-held controller controls a first utility arm of the robot unit, and the second hand-held controller controls a second utility arm of the robot unit, wherein the movement data includes captured movement and positioning of at least one body part of the inputting user using the head mounted controller, first hand-held controller and the second hand-held controller;
responsive to receiving the instruction, comparing the instruction to the three-dimensional representation; and causing the robot unit to perform the action based at least in part on the instruction and a comparison of the instruction to the three-dimensional representation, wherein the robot unit performs the action to replicate or mimic the movement data of the inputting user.
Claim 8 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 15 of U.S. Patent No. 11,794,359 in view of Okamoto et al. (US 2007/0124024) hereinafter Okamoto.
Application No. 19/038,997, Claim 8:
A method for operating a remotely located robotic unit comprising: providing the remotely located robotic unit, comprising:
a camera unit comprising at least one three-dimensional depth camera for capturing sensory information, wherein the sensory information comprises visual information; and at least one robotic arm;
positioning the remotely located robotic unit proximate to an object; capturing, by the at least one three-dimensional depth camera, the visual information; responsive to receiving, at a control system, the visual information, creating a three-dimensional representation of the object based at least in part on the visual information;
causing display of the visual information on a display associated with the control system;
receiving, from a user associated with the control system, an instruction for the remotely located robotic unit to perform an action; wherein the instruction is received from a head-mounted controller or at least one hand-held controller capturing movement data of the user, and wherein the head-mounted controller controls the at least one three-dimensional depth camera and the at least one hand-held controller controls the at least one robotic arm; responsive to receiving the instruction, comparing the instruction to the three-dimensional representation;
responsive to comparing the instruction to the three-dimensional representation, determining that there is an obstacle to performing the action;
responsive to determining that there is the obstacle to performing the action, modifying the instruction; and causing the remotely located robotic unit to perform a modified action.
U.S. Patent No. 11,794,359, Claim 15:
A method of performing aerial work, the method comprising:
causing a capturing of sensory information from at least one capture device disposed on a robot unit,
wherein the sensory information comprise video captured from at least one camera and three-dimensional depth information captured from a three-dimensional depth camera,
wherein the at least one camera and the three-dimensional depth camera are mounted on a camera mount;
receiving the sensory information and creating a three-dimensional representation of an object based at least in part on the three-dimensional depth information;
causing display of the video captured from the at least one camera on a display of a control system;
receiving an instruction for the robot unit to perform an action from an inputting user associated with the control system,
wherein the instruction is received from a head mounted controller, a first hand-held controller, and a second hand-held controller capturing movement data of the inputting user, wherein the head mounted controller controls the camera mount, the first hand-held controller controls a first utility arm of the robot unit, and the second hand-held controller controls a second utility arm of the robot unit, wherein the movement data includes captured movement and positioning of at least one body part of the inputting user using the head mounted controller, first hand-held controller and the second hand-held controller;
responsive to receiving the instruction, comparing the instruction to the three-dimensional representation; and causing the robot unit to perform the action based at least in part on the instruction and a comparison of the instruction to the three-dimensional representation, wherein the robot unit performs the action to replicate or mimic the movement data of the inputting user.
Claim 15 of Patent No. 11,794,359 does not explicitly disclose responsive to comparing the instruction to the three-dimensional representation, determining that there is an obstacle to performing the action; responsive to determining that there is the obstacle to performing the action, modifying the instruction; and causing the remotely located robotic unit to perform a modified action.
However, Okamoto discloses responsive to receiving the instruction, comparing the instruction to the three-dimensional representation; responsive to comparing the instruction to the three-dimensional representation, determining that there is an obstacle to performing the action; responsive to determining that there is the obstacle to performing the action, modifying the instruction; and causing the remotely located robotic unit to perform a modified action (Okamoto [0161], [0170], [0177]-[0181], [0199], [0216]: an instruction of the user transmitted from the operation terminal 403 is converted to a control command executable by the robot 50 with reference to the transporting information database 104, which stores the three-dimensional environment map, and the robot moves to a destination and moves an object as instructed as in [0170]-[0176]; Figs. 2K-2L, [0129], [0143], [0148]-[0151], [0161]: the three-dimensional model of the environment is used to determine movement of the robot to a destination B, to determine where the robot cannot move, such as at the obstacles Tb, Ts, and Bs, and to determine a movement route that avoids the obstacles. Hence, the robot performs the action based on the instruction and a comparison of the instruction to the three-dimensional representation, and modifies the instruction based on obstacles so that the robot performs a modified action that avoids the obstacles).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the system and method, as disclosed by Patent No. 11,794,359, and further incorporate, responsive to receiving the instruction, comparing the instruction to the three-dimensional representation; responsive to comparing the instruction to the three-dimensional representation, determining that there is an obstacle to performing the action; responsive to determining that there is the obstacle to performing the action, modifying the instruction; and causing the remotely located robotic unit to perform a modified action, as taught by Okamoto, to avoid collisions with obstacles (Okamoto [0129]).
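For illustration only (not part of the examination record), the compare-detect-modify flow described above can be sketched in a few lines of Python. The grid map, the `Instruction` class, and the detour rule below are hypothetical assumptions, not anything taught by Okamoto or recited in the claims:

```python
# Minimal sketch of: compare instruction to 3-D representation, detect an
# obstacle, modify the instruction, perform the modified action.
from dataclasses import dataclass

Cell = tuple[int, int, int]  # discretized coordinate in the 3-D map (assumed)

@dataclass
class Instruction:
    target: Cell  # commanded destination in the 3-D representation's frame

def compare(instruction: Instruction, occupied: set[Cell]) -> bool:
    """Compare the instruction to the 3-D representation: is the target blocked?"""
    return instruction.target in occupied

def modify(instruction: Instruction, occupied: set[Cell]) -> Instruction:
    """Modify the instruction to a free neighboring cell (toy detour rule)."""
    x, y, z = instruction.target
    for dx, dy, dz in [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1)]:
        candidate = (x + dx, y + dy, z + dz)
        if candidate not in occupied:
            return Instruction(target=candidate)
    raise RuntimeError("no free cell adjacent to target")

def execute(instruction: Instruction, occupied: set[Cell]) -> Cell:
    if compare(instruction, occupied):               # obstacle detected
        instruction = modify(instruction, occupied)  # modified action
    return instruction.target                        # cell the robot moves to

occupied_cells = {(2, 0, 0)}                         # one obstacle in the map
print(execute(Instruction(target=(2, 0, 0)), occupied_cells))  # -> (3, 0, 0)
```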
Claim 15 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 15 of U.S. Patent No. 11,794,359 in view of Saroha et al. (US 2019/0076092) hereinafter Saroha.
Application No. 19/038,997, Claim 15:
A method for operating a remotely located robotic unit comprising: providing the remotely located robotic unit, comprising:
a movable camera mount; at least one camera to capture visual information and at least one three-dimensional camera for capturing three-dimensional depth information disposed on the movable camera mount; and at least one robotic arm;
positioning the remotely located robotic unit proximate to an object; capturing sensory information from the at least one camera and the at least one three-dimensional camera, wherein the sensory information comprises the visual information and the three-dimensional depth information;
receiving, at a control system, the sensory information; responsive to receiving the sensory information, creating a three-dimensional representation of the object based at least in part on the three-dimensional depth information; causing display of the visual information captured from the at least one camera on a display associated with the control system;
receiving, from a first user associated with the control system, via a head-mounted controller or at least one hand-held controller capturing movement data of the first user, an instruction for the remotely located robotic unit to perform an action; wherein the head-mounted controller controls the movable camera mount and the at least one hand-held controller controls the at least one robotic arm;
responsive to receiving an override instruction from a second user associated with the control system, overriding the instruction from the first user.
U.S. Patent No. 11,794,359, Claim 15:
A method of performing aerial work, the method comprising:
causing a capturing of sensory information from at least one capture device disposed on a robot unit,
wherein the sensory information comprise video captured from at least one camera and three-dimensional depth information captured from a three-dimensional depth camera,
wherein the at least one camera and the three-dimensional depth camera are mounted on a camera mount;
receiving the sensory information and creating a three-dimensional representation of an object based at least in part on the three-dimensional depth information;
causing display of the video captured from the at least one camera on a display of a control system;
receiving an instruction for the robot unit to perform an action from an inputting user associated with the control system,
wherein the instruction is received from a head mounted controller, a first hand-held controller, and a second hand-held controller capturing movement data of the inputting user, wherein the head mounted controller controls the camera mount, the first hand-held controller controls a first utility arm of the robot unit, and the second hand-held controller controls a second utility arm of the robot unit, wherein the movement data includes captured movement and positioning of at least one body part of the inputting user using the head mounted controller, first hand-held controller and the second hand-held controller;
responsive to receiving the instruction, comparing the instruction to the three-dimensional representation; and causing the robot unit to perform the action based at least in part on the instruction and a comparison of the instruction to the three-dimensional representation, wherein the robot unit performs the action to replicate or mimic the movement data of the inputting user.
Claim 15 of Patent No. 11,794,359 does not explicitly disclose responsive to receiving an override instruction from a second user associated with the control system, overriding the instruction from the first user.
However, Saroha discloses responsive to receiving an override instruction from a second user associated with the control system, overriding the instruction from the first user (Saroha [0043]: control system to control handheld devices or robots; Figs. 1 and 6B, [0044]: the steering device 110 is controlled by a steering controller 118, i.e. a handheld controller, which can be a joystick that receives user inputs, i.e. from a first user, indicating desired movement of the distal portion 104 of the device 102; [0034]: the steering device can also be controlled by a remote user, i.e. a second user, in a separate room or across the world with multi-level actuation controls; Fig. 7, [0057]-[0058], [0061], [0065], [0068], [0077], [0086]-[0089]: when a console override is activated, handheld operation of the steering device 110 is disabled or deactivated and the steering device is controlled by the second user using the console).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the system and method, as disclosed by Patent No. 11,794,359, and further incorporate, responsive to receiving an override instruction from a second user associated with the control system, overriding the instruction from the first user, as taught by Saroha, to provide multi-level control that improves the control and functionality of the device (Saroha [0043]-[0044], [0060]).
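For illustration only (not part of the examination record), the override behavior described above can be sketched as follows; the `ControlSystem` class and its method names are hypothetical assumptions, not Saroha's API:

```python
# Minimal sketch: a second user's console input overrides and disables the
# first user's hand-held input, as in the mapping above.
class ControlSystem:
    def __init__(self):
        self.console_override = False
        self.active_command = None

    def handheld_input(self, command: str) -> None:
        """First user's hand-held controller; ignored while override is active."""
        if not self.console_override:
            self.active_command = command

    def console_input(self, command: str) -> None:
        """Second user's console; activates the override and takes control."""
        self.console_override = True
        self.active_command = command

cs = ControlSystem()
cs.handheld_input("advance distal portion")  # first user drives
cs.console_input("retract")                  # second user overrides
cs.handheld_input("advance distal portion")  # ignored while overridden
print(cs.active_command)                     # -> "retract"
```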
Claim 1 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 8 of U.S. Patent No. 12,240,106. Although the claims at issue are not identical, they are not patentably distinct from each other, as shown in the comparison below.
Application No. 19/038,997, Claim 1:
A method for operating a remotely located robotic unit, comprising:
providing the remotely located robotic unit, comprising: a camera unit comprising at least one camera for capturing visual information and at least one three-dimensional camera for capturing three-dimensional depth information; a first robotic arm; and a second robotic arm; positioning the remotely located robotic unit proximate to an object;
responsive to capturing the visual information using the at least one camera, causing display of the visual information on a display of a control system associated with the remotely located robotic unit;
responsive to capturing the three-dimensional depth information, creating a three-dimensional representation of the object based at least in part on the three-dimensional depth information;
receiving, from a user associated with the control system, an instruction for the remotely located robotic unit to perform an action, wherein the instruction is received from a head-mounted controller, a first hand-held controller, and a second hand-held controller capturing movement data of the user, and wherein the head-mounted controller controls the at least one camera and the at least one three-dimensional camera, the first hand-held controller controls the first robotic arm, and the second hand-held controller controls the second robotic arm;
responsive to receiving the instruction, comparing the instruction to the three-dimensional representation; and causing the remotely located robotic unit to perform the action.
U.S. Patent No. 12,240,106, Claim 8:
A robotic system comprising: a robot unit comprising:
a base; a first utility arm; a second utility arm; at least one camera configured to capture visual information; at least one depth camera configured to capture three-dimensional depth information; and a control unit comprising: a display for displaying the visual information; at least one controller for inputting instructions for the robot unit; at least one processor; and one or more non-transitory computer-readable media storing computer-executable instructions that, when executed by the at least one processor, cause the at least one processor to:
cause display of the visual information on the display; receive the three-dimensional depth information and create a three-dimensional representation of an object based at least in part on the three-dimensional depth information;
receive a control mode instruction to select a control mode of a plurality of control modes; update the display based at least in part on the control mode;
receive a user instruction for the robot unit to perform an action, wherein the user instruction comprises a user movement that is input via the at least one controller, and wherein the action is a replicated action of the user movement;
responsive to receiving the user instruction, perform a comparison of the user instruction to the three-dimensional representation, modify, based on the comparison of the user instruction to the three-dimensional representation, the user instruction to obtain a modified action; and cause the robot unit to perform the modified action based at least in part on the control mode.
Claim 8 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 8 of U.S. Patent No. 12,240,106 in view of Okamoto et al. (US 2007/0124024) hereinafter Okamoto.
Application No. 19/038,997, Claim 8:
A method for operating a remotely located robotic unit comprising: providing the remotely located robotic unit, comprising:
a camera unit comprising at least one three-dimensional depth camera for capturing sensory information, wherein the sensory information comprises visual information; and at least one robotic arm;
positioning the remotely located robotic unit proximate to an object; capturing, by the at least one three-dimensional depth camera, the visual information;
responsive to receiving, at a control system, the visual information, creating a three-dimensional representation of the object based at least in part on the visual information; causing display of the visual information on a display associated with the control system;
receiving, from a user associated with the control system, an instruction for the remotely located robotic unit to perform an action; wherein the instruction is received from a head-mounted controller or at least one hand-held controller capturing movement data of the user, and wherein the head-mounted controller controls the at least one three-dimensional depth camera and the at least one hand-held controller controls the at least one robotic arm;
responsive to receiving the instruction, comparing the instruction to the three-dimensional representation;
responsive to comparing the instruction to the three-dimensional representation, determining that there is an obstacle to performing the action;
responsive to determining that there is the obstacle to performing the action, modifying the instruction; and causing the remotely located robotic unit to perform a modified action.
U.S. Patent No. 12,240,106, Claim 8:
A robotic system comprising: a robot unit comprising:
a base; a first utility arm; a second utility arm; at least one camera configured to capture visual information; at least one depth camera configured to capture three-dimensional depth information; and a control unit comprising: a display for displaying the visual information; at least one controller for inputting instructions for the robot unit; at least one processor; and one or more non-transitory computer-readable media storing computer-executable instructions that, when executed by the at least one processor, cause the at least one processor to:
cause display of the visual information on the display; receive the three-dimensional depth information and create a three-dimensional representation of an object based at least in part on the three-dimensional depth information;
receive a control mode instruction to select a control mode of a plurality of control modes; update the display based at least in part on the control mode;
receive a user instruction for the robot unit to perform an action, wherein the user instruction comprises a user movement that is input via the at least one controller, and wherein the action is a replicated action of the user movement;
responsive to receiving the user instruction, perform a comparison of the user instruction to the three-dimensional representation, modify, based on the comparison of the user instruction to the three-dimensional representation, the user instruction to obtain a modified action; and cause the robot unit to perform the modified action based at least in part on the control mode.
Claim 8 of Patent No. 12,240,106 does not explicitly disclose responsive to comparing the instruction to the three-dimensional representation, determining that there is an obstacle to performing the action; responsive to determining that there is the obstacle to performing the action, modifying the instruction; and causing the remotely located robotic unit to perform a modified action.
However, Okamoto discloses responsive to receiving the instruction, comparing the instruction to the three-dimensional representation; responsive to comparing the instruction to the three-dimensional representation, determining that there is an obstacle to performing the action; responsive to determining that there is the obstacle to performing the action, modifying the instruction; and causing the remotely located robotic unit to perform a modified action (Okamoto [0161], [0170], [0177]-[0181], [0199], [0216]: an instruction of the user transmitted from the operation terminal 403 is converted to a control command executable by the robot 50 with reference to the transporting information database 104, which stores the three-dimensional environment map, and the robot moves to a destination and moves an object as instructed as in [0170]-[0176]; Figs. 2K-2L, [0129], [0143], [0148]-[0151], [0161]: the three-dimensional model of the environment is used to determine movement of the robot to a destination B, to determine where the robot cannot move, such as at the obstacles Tb, Ts, and Bs, and to determine a movement route that avoids the obstacles. Hence, the robot performs the action based on the instruction and a comparison of the instruction to the three-dimensional representation, and modifies the instruction based on obstacles so that the robot performs a modified action that avoids the obstacles).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the system and method, as disclosed by Patent No. 12,240,106, and further incorporate, responsive to receiving the instruction, comparing the instruction to the three-dimensional representation; responsive to comparing the instruction to the three-dimensional representation, determining that there is an obstacle to performing the action; responsive to determining that there is the obstacle to performing the action, modifying the instruction; and causing the remotely located robotic unit to perform a modified action, as taught by Okamoto, to avoid collisions with obstacles (Okamoto [0129]).
Claim 15 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 8 of U.S. Patent No. 12,240,106 in view of Saroha et al. (US 2019/0076092) hereinafter Saroha.
Application No. 19/038,997, Claim 15:
A method for operating a remotely located robotic unit comprising: providing the remotely located robotic unit, comprising:
a movable camera mount; at least one camera to capture visual information and at least one three-dimensional camera for capturing three-dimensional depth information disposed on the movable camera mount; and at least one robotic arm;
positioning the remotely located robotic unit proximate to an object; capturing sensory information from the at least one camera and the at least one three-dimensional camera, wherein the sensory information comprises the visual information and the three-dimensional depth information;
receiving, at a control system, the sensory information; responsive to receiving the sensory information, creating a three-dimensional representation of the object based at least in part on the three-dimensional depth information; causing display of the visual information captured from the at least one camera on a display associated with the control system;
receiving, from a first user associated with the control system, via a head-mounted controller or at least one hand-held controller capturing movement data of the first user, an instruction for the remotely located robotic unit to perform an action; wherein the head-mounted controller controls the movable camera mount and the at least one hand-held controller controls the at least one robotic arm;
responsive to receiving an override instruction from a second user associated with the control system, overriding the instruction from the first user.
U.S. Patent No. 12,240,106, Claim 8:
A robotic system comprising: a robot unit comprising:
a base; a first utility arm; a second utility arm; at least one camera configured to capture visual information; at least one depth camera configured to capture three-dimensional depth information; and a control unit comprising: a display for displaying the visual information; at least one controller for inputting instructions for the robot unit; at least one processor; and one or more non-transitory computer-readable media storing computer-executable instructions that, when executed by the at least one processor, cause the at least one processor to:
cause display of the visual information on the display; receive the three-dimensional depth information and create a three-dimensional representation of an object based at least in part on the three-dimensional depth information;
receive a control mode instruction to select a control mode of a plurality of control modes; update the display based at least in part on the control mode;
receive a user instruction for the robot unit to perform an action, wherein the user instruction comprises a user movement that is input via the at least one controller, and wherein the action is a replicated action of the user movement;
responsive to receiving the user instruction, perform a comparison of the user instruction to the three-dimensional representation, modify, based on the comparison of the user instruction to the three-dimensional representation, the user instruction to obtain a modified action; and cause the robot unit to perform the modified action based at least in part on the control mode.
Claim 8 of Patent No. 12,240,106 does not explicitly disclose responsive to receiving an override instruction from a second user associated with the control system, overriding the instruction from the first user.
However, Saroha discloses responsive to receiving an override instruction from a second user associated with the control system, overriding the instruction from the first user (Saroha [0043]: control system to control handheld devices or robots; Figs. 1 and 6B, [0044]: the steering device 110 is controlled by a steering controller 118, i.e. a handheld controller, which can be a joystick that receives user inputs, i.e. from a first user, indicating desired movement of the distal portion 104 of the device 102; [0034]: the steering device can also be controlled by a remote user, i.e. a second user, in a separate room or across the world with multi-level actuation controls; Fig. 7, [0057]-[0058], [0061], [0065], [0068], [0077], [0086]-[0089]: when a console override is activated, handheld operation of the steering device 110 is disabled or deactivated and the steering device is controlled by the second user using the console).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the system and method, as disclosed by Patent No. 12,240,106, and further incorporate, responsive to receiving an override instruction from a second user associated with the control system, overriding the instruction from the first user, as taught by Saroha, to provide multi-level control that improves the control and functionality of the device (Saroha [0043]-[0044], [0060]).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-3, 5 and 7 are rejected under AIA 35 U.S.C. 103 as being unpatentable over Okamoto et al. (US 2007/0124024) hereinafter Okamoto, in view of Perkins et al. (US 2020/0302207) hereinafter Perkins, in view of Jinno (US 2011/0245844), further in view of Taylor et al. (US 2015/0312468) hereinafter Taylor.
Regarding claim 1, Okamoto discloses a method for operating a remotely located robotic unit, comprising:
providing the remotely located robotic unit, comprising: a camera unit comprising at least one camera for capturing visual information and at least one three-dimensional camera for capturing three-dimensional depth information; a first robotic arm (Okamoto [0144], [0147]: robot 50 having a main body 53, i.e. base, to mount the holding device 50, i.e. robot unit, at a distal end of an arm, and four wheels 506 for moving the robot; Fig. 8, [0148], [0150]: a pair of cameras 502 mounted on the robot unit to recognize objects, hence at least one camera; Figs. 2J-2L, [0129]: a three-dimensional model of the environment can be generated; [0149]: sensors can be used to determine distances to obstacles);
positioning the remotely located robotic unit proximate to an object; responsive to capturing the visual information using the at least one camera, causing display of the visual information on a display of a control system associated with the remotely located robotic unit (Okamoto Fig. 2A, [0121], [0143], [0145]: the robot moves in the environment and proximate to an object to perform picking and transporting of objects; Fig. 2C, [0178], [0180]-[0181]: the operation terminal 403 includes a display device for displaying an operation screen based on data of the image of the environment taken by the camera 502);
responsive to capturing the three-dimensional depth information, creating a three-dimensional representation of the object based at least in part on the three-dimensional depth information (Okamoto Figs. 2J-2L, [0129]: a three-dimensional model of the environment is generated; [0208]-[0212]: the cameras 502 can take an image of the floor face, and a four-dimensional vector of the floor face can be generated to categorize the floor);
receiving, from a user associated with the control system, an instruction for the remotely located robotic unit to perform an action (Okamoto [0161], [0170], [0177]-[0181], [0199], [0216]: operation terminal 403 is a user interface to receive instruction from the user to operate the robot),
responsive to receiving the instruction, comparing the instruction to the three-dimensional representation; and causing the remotely located robotic unit to perform the action (Okamoto [0161], [0170], [0177]-[0181], [0199], [0216]: an instruction of the user transmitted from the operation terminal 403 is converted to a control command executable by the robot 50 with reference to the transporting information database 104, which stores the three-dimensional environment map, and the robot moves to a destination and moves an object as instructed as in [0170]-[0176]; Figs. 2K-2L, [0129]: the three-dimensional model of the environment is used to determine movement of the robot to a destination B and to determine where the robot cannot move, such as at the obstacles Tb, Ts, and Bs. Hence, the robot performs the action based on the instruction and a comparison of the instruction to the three-dimensional representation).
Okamoto broadly discloses at least one depth camera disposed on the camera mount to capture three-dimensional depth information, and receive the three-dimensional depth information and create a three-dimensional representation of an object based at least in part on the three-dimensional depth information, as discussed above.
Furthermore, Perkins discloses at least one depth camera disposed on the camera mount to capture three-dimensional depth information, and receive the three-dimensional depth information and create a three-dimensional representation of an object based at least in part on the three-dimensional depth information (Perkins Fig. 1A, [0043]: robot system having sensor 170 mounted on arm 160; [0044]: the sensor 170 comprises camera 172 and depth sensor 172b which data is used to generate three-dimensional depth information).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the system and method, as disclosed by Okamoto, and further incorporate having at least one depth camera disposed on the camera mount to capture three-dimensional depth information, and receive the three-dimensional depth information and create a three-dimensional representation of an object based at least in part on the three-dimensional depth information, as taught by Perkins, to accurately detect objects in the environment (Perkins [0021]).
Okamoto does not explicitly disclose second robotic arm; wherein the instruction is received from a head-mounted controller, a first hand-held controller, and a second hand-held controller capturing movement data of the user, and wherein the head-mounted controller controls the at least one camera and the at least one three-dimensional camera, the first hand-held controller controls the first robotic arm, and the second hand-held controller controls the second robotic arm.
However, Jinno discloses a second robotic arm; wherein the instruction is received from a first hand-held controller, and a second hand-held controller capturing movement data of the user, and wherein the first hand-held controller controls the first robotic arm, and the second hand-held controller controls the second robotic arm (Jinno Fig. 1, [0043]: a robot system having arms 24a, 24b, 24c, wherein the arms can have six degrees of freedom as in [0064], [0068]; [0067]-[0070]: left and right joysticks 38a, 38b to control the arms 24a and 24b).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the system and method, as disclosed by Okamoto and Perkins, and further incorporate having second robotic arm; wherein the instruction is received from a first hand-held controller, and a second hand-held controller capturing movement data of the user, and wherein the first hand-held controller controls the first robotic arm, and the second hand-held controller controls the second robotic arm, as taught by Jinno, to control the arms individually for multiple tasks at a time (Jinno [0067]).
Okamoto does not explicitly disclose wherein the instruction is received from a head-mounted controller, and wherein the head-mounted controller controls the at least one camera and the at least one three-dimensional camera.
However, Taylor discloses wherein the instruction is received from a head-mounted controller, and wherein the head-mounted controller controls the at least one camera and the at least one three-dimensional camera (Taylor Figs. 2-3, [0051]-[0052]: a head-mounted display (HMD) 205 includes a display to display video from a camera as in [0064]; [0051]-[0053], [0055]-[0059]: the HMD 205 includes a sensor; [0059]: the sensor on the HMD determines the orientation of the HMD, including angle and movement, and the HMD can transmit and receive signals including its orientation information to a controller to control the camera mounting arm based on the position of the HMD. A remote camera system having first and second cameras 320A and 320B is controlled by head rotation based on the tilt angle of the head detected by the sensor, and the camera system includes camera mount arms 305 and 310 as in Fig. 3A, [0053]).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the system and method, as disclosed by Okamoto, Perkins, and Jinno, and further incorporate wherein the instruction is received from a head-mounted controller, and wherein the head-mounted controller controls the at least one camera and the at least one three-dimensional camera, as taught by Taylor, for the user to control the camera position conveniently based on the user's head position to see a desired view (Taylor [0051]-[0053]).
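For illustration only (not part of the examination record), the head-mounted control described above can be sketched as a mapping from HMD orientation to camera-mount pan/tilt setpoints; the function names and travel limits are hypothetical assumptions, not values from Taylor:

```python
# Minimal sketch: map head orientation readings to camera-mount commands,
# clamped to the mount's assumed travel limits.
def clamp(value: float, low: float, high: float) -> float:
    return max(low, min(high, value))

def hmd_to_mount_command(hmd_yaw_deg: float, hmd_pitch_deg: float) -> dict:
    """Map the user's head orientation to camera-mount pan/tilt setpoints."""
    return {
        "pan": clamp(hmd_yaw_deg, -170.0, 170.0),   # assumed pan travel
        "tilt": clamp(hmd_pitch_deg, -45.0, 60.0),  # assumed tilt travel
    }

print(hmd_to_mount_command(hmd_yaw_deg=200.0, hmd_pitch_deg=-10.0))
# -> {'pan': 170.0, 'tilt': -10.0}
```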
Regarding claim 2, Okamoto, Perkins, Jinno, and Taylor disclose all limitations of claim 1.
Okamoto discloses wherein the remotely located robotic unit is positioned at a distal end of a boom assembly (Okamoto Fig. 8, [0144]: robot 50 having a main body 53, i.e. base, to mount the holding device 50, i.e. robot unit, at a distal end of an arm).
Regarding claim 3, Okamoto, Perkins, Jinno, and Taylor disclose all limitations of claim 1.
Okamoto does not explicitly disclose wherein the three-dimensional representation is a point cloud of the object.
However, Perkins discloses wherein the three-dimensional representation is a point cloud of the object (Perkins Fig. 1A, [0043]: robot system having sensor 170 mounted on arm 160; [0044]: the sensor 170 comprises camera 172 and depth sensor 172b which data is used to generate three-dimensional volumetric point cloud; [0066]: computer can be used to implement the system and image processing).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the system and method, as disclosed by Okamoto, Perkins, Jinno, and Taylor, and further incorporate wherein the three-dimensional representation is a point cloud of the object, as taught by Perkins, to accurately detect objects in the environment (Perkins [0021]).
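For illustration only (not part of the examination record), a point cloud can be derived from a depth image by pinhole back-projection, in the spirit of the depth-sensor usage mapped above; the camera intrinsics below are hypothetical assumptions:

```python
# Minimal sketch: back-project an HxW depth image into an Nx3 point cloud.
import numpy as np

def depth_to_point_cloud(depth: np.ndarray, fx: float, fy: float,
                         cx: float, cy: float) -> np.ndarray:
    """Back-project a depth image (meters) into an Nx3 point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # keep valid (positive-depth) samples

depth = np.full((4, 4), 2.0)  # toy 4x4 depth image, everything 2 m away
cloud = depth_to_point_cloud(depth, fx=500.0, fy=500.0, cx=2.0, cy=2.0)
print(cloud.shape)  # -> (16, 3)
```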
Regarding claim 5, Okamoto, Perkins, Jinno, and Taylor disclose all limitations of claim 1.
Okamoto discloses wherein comparing the instruction to the three-dimensional representation comprises determining a distance the first robotic arm or the second robotic arm must move to perform the action (Okamoto [0149]: sensors can be used to determine distance to obstacles; [0165], [0117]: determine route that approaches the objects such as equipment up to a predetermined distance).
Regarding claim 7, Okamoto, Perkins, Jinno, and Taylor disclose all limitations of claim 1.
Okamoto discloses wherein the first robotic arm comprises a manipulator located at a distal end thereof, and wherein the method further comprises causing the first robotic arm to couple the manipulator to a tool (Okamoto Fig. 8, [0145]-[0146], [0152]: multi-joint arm 51 and a hand 52 arranged at the distal end of the arm 51 by joint P2 to hold an article; [0143]: a light source 503 is also attached to joint P2 of the robot arm).
Jinno discloses the first robotic arm and the second robotic arm comprise a manipulator located at a distal end thereof, and wherein the method further comprises causing the first robotic arm or the second robotic arm to couple the manipulator to a tool (Jinno Figs. 1, 3 and 9, [0042], [0045], [0049], [0069]: the first and second forceps arms 24a and 24b have manipulators 12a and 12b at their distal ends which have gripper 68 or distal-end working unit 56, and camera arm 24c has an endoscope 14 at its distal end, hence a tool).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the system and method, as disclosed by Okamoto, Perkins, Jinno, and Taylor, and further incorporate wherein the first robotic arm and the second robotic arm comprise a manipulator located at a distal end thereof, and wherein the method further comprises causing the first robotic arm or the second robotic arm to couple the manipulator to a tool, as taught by Jinno, to perform different tasks using the robotic arms (Jinno [0049]).
Claim 4 is rejected under AIA 35 U.S.C. 103 as being unpatentable over Okamoto et al. (US 2007/0124024) hereinafter Okamoto, in view of Perkins et al. (US 2020/0302207) hereinafter Perkins, in view of Jinno (US 2011/0245844), further in view of Taylor et al. (US 2015/0312468) hereinafter Taylor, further in view of Chestnutt et al. (US 2021/0323153) hereinafter Chestnutt.
Regarding claim 4, Okamoto, Perkins, Jinno, and Taylor disclose all limitations of claim 1.
Okamoto does not explicitly disclose wherein the at least one camera includes a plurality of cameras positioned to capture video of a field of view greater than 220 degrees.
However, Chestnutt discloses wherein the at least one camera includes a plurality of cameras positioned to capture video of a field of view greater than 220 degrees (Chestnutt Fig. 1, [0031]-[0032]: robot 10 having arm 20 and vision system 30 including multiple cameras 31 to capture a 360-degree field of view around the robot 10).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the system and method, as disclosed by Okamoto, Perkins, Jinno, and Taylor, and further incorporate wherein the at least one camera includes a plurality of cameras positioned to capture video of a field of view greater than 220 degrees, as taught by Chestnutt, to view all around the robot (Chestnutt [0032]).
Claim 6 is rejected under AIA 35 U.S.C. 103 as being unpatentable over Okamoto et al. (US 2007/0124024) hereinafter Okamoto, in view of Perkins et al. (US 2020/0302207) hereinafter Perkins, in view of Jinno (US 2011/0245844), further in view of Taylor et al. (US 2015/0312468) hereinafter Taylor, further in view of Budu et al. (US 2023/0399136) hereinafter Budu.
Regarding claim 6, Okamoto, Perkins, Jinno, and Taylor disclose all limitations of claim 1.
Okamoto does not explicitly disclose wherein the remotely located robotic unit further comprises a weight estimator sensor and the method further comprises: causing the weight estimator sensor to capture at least one image of the object; generating a point cloud based on the at least one image of the object; estimating a weight of the object using the point cloud; and communicating the weight of the object to the user.
However, Budu discloses wherein the remotely located robotic unit further comprises a weight estimator sensor and the method further comprises: causing the weight estimator sensor to capture at least one image of the object; generating a point cloud based on the at least one image of the object; estimating a weight of the object using the point cloud; and communicating the weight of the object to the user (Budu [0009], [0018], [0118], [0175], [0179]-[0181]: determine the weight of an object based on its determined type/species and estimated size, wherein the size is estimated using point cloud analysis; [0125], [0127], [0140]-[0141]: a depth-sensing sensor such as a LiDAR sensor or RGB-D camera can be used; [0153]: estimate the weight of an object from 3D point cloud data using depth sensors; [0111], [0114]: if the weight determined by the controller by visual inspection with the camera differs from the weight determined by the weight sensor, the controller flags an error condition and requests intervention from a user of the system, hence communicating the weight to the user).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the system and method, as disclosed by Okamoto, Perkins, Jinno, and Taylor, and further incorporate a weight estimator sensor, wherein the method further comprises: causing the weight estimator sensor to capture at least one image of the object; generating a point cloud based on the at least one image of the object; estimating a weight of the object using the point cloud; and communicating the weight of the object to the user, as taught by Budu, to improve object detection and classification (Budu [0003], [0105]).
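For illustration only (not part of the examination record), the weight-estimation flow described above can be sketched as: estimate size from the point cloud, derive weight from an assumed density, and flag a mismatch with a weight sensor for user intervention. The density table and the 20% tolerance are hypothetical assumptions, not values from Budu:

```python
# Minimal sketch: weight from point-cloud size, checked against a weight sensor.
import numpy as np

DENSITY_KG_PER_M3 = {"wood": 600.0, "steel": 7850.0}  # assumed lookup table

def estimate_weight(cloud: np.ndarray, material: str) -> float:
    """Approximate volume by the cloud's axis-aligned bounding box."""
    extent = cloud.max(axis=0) - cloud.min(axis=0)
    return float(np.prod(extent)) * DENSITY_KG_PER_M3[material]

def check_weight(cloud: np.ndarray, material: str, sensed_kg: float) -> str:
    estimated = estimate_weight(cloud, material)
    if abs(estimated - sensed_kg) > 0.2 * sensed_kg:  # assumed tolerance
        return f"mismatch: estimated {estimated:.1f} kg vs sensed {sensed_kg:.1f} kg"
    return f"ok: {estimated:.1f} kg"

box = np.array([[0, 0, 0], [0.5, 0.4, 0.3]])  # toy two-point cloud of a crate
print(check_weight(box, "wood", sensed_kg=36.0))  # -> ok: 36.0 kg
```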
Claims 8 and 12 are rejected under AIA 35 U.S.C. 103 as being unpatentable over Okamoto et al. (US 2007/0124024) hereinafter Okamoto, in view of Perkins et al. (US 2020/0302207) hereinafter Perkins, in view of Jinno (US 2011/0245844).
Regarding claim 8, Okamoto discloses a method for operating a remotely located robotic unit comprising:
providing the remotely located robotic unit, comprising: a camera unit comprising at least one three-dimensional depth camera for capturing sensory information, wherein the sensory information comprises visual information; and at least one robotic arm (Okamoto [0144], [0147]: robot 50 having a main body 53, i.e. base, to mount the holding device 50, i.e. robot unit, at a distal end of an arm, and four wheels 506 for moving the robot; Fig. 8, [0148], [0150]: a pair of cameras 502 mounted on the robot unit to recognize objects; Figs. 2J-2L, [0129]: three-dimensional model of the environment can be generated, [0149]: sensors can be used to determine distance to obstacles);
positioning the remotely located robotic unit proximate to an object; capturing, by the at least one three-dimensional depth camera, the visual information (Okamoto Fig. 2A, [0121], [0143], [0145]: the robot moves in the environment and proximate to an object to perform picking and transporting of objects; Fig. 8, [0148], [0150]: a pair of cameras 502 mounted on the robot unit to recognize objects);
responsive to receiving, at a control system, the visual information, creating a three-dimensional representation of the object based at least in part on the visual information (Okamoto Figs. 2J-2L, [0129]: three-dimensional model of the environment can be generated);
causing display of the visual information on a display associated with the control system (Okamoto Fig. 2C, [0178], [0180]-[0181]: the operation terminal 403 includes a display device for displaying an operation screen based on data of the image of the environment taken by the camera 502);
receiving, from a user associated with the control system, an instruction for the remotely located robotic unit to perform an action (Okamoto [0161], [0170], [0177]-[0181], [0199], [0216]: operation terminal 403 is a user interface to receive instruction from the user to operate the robot);
responsive to receiving the instruction, comparing the instruction to the three-dimensional representation; responsive to comparing the instruction to the three-dimensional representation, determining that there is an obstacle to performing the action; responsive to determining that there is the obstacle to performing the action, modifying the instruction; and causing the remotely located robotic unit to perform a modified action (Okamoto [0161], [0170], [0177]-[0181], [0199], [0216]: an instruction of the user transmitted from the operation terminal 403 is converted to a control command executable by the robot 50 with reference to the transporting information database 104, which stores the three-dimensional environment map, and the robot moves to a destination and moves an object as instructed as in [0170]-[0176]; Figs. 2K-2L, [0129], [0143], [0148]-[0151], [0161]: the three-dimensional model of the environment is used to determine movement of the robot to a destination B, to determine where the robot cannot move, such as at the obstacles Tb, Ts, and Bs, and to determine a movement route that avoids the obstacles. Hence, the robot performs the action based on the instruction and a comparison of the instruction to the three-dimensional representation, and modifies the instruction based on obstacles so that the robot performs a modified action that avoids the obstacles).
Okamoto broadly discloses a camera unit comprising at least one three-dimensional depth camera for capturing sensory information, wherein the sensory information comprises visual information, as discussed above.
Furthermore, Perkins discloses a camera unit comprising at least one three-dimensional depth camera for capturing sensory information, wherein the sensory information comprises visual information (Perkins Fig. 1A, [0043]: robot system having sensor 170 mounted on arm 160; [0044]: the sensor 170 comprises camera 172 and depth sensor 172b, whose data is used to generate three-dimensional depth information).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the system and method, as disclosed by Okamoto, and further incorporate a camera unit comprising at least one three-dimensional depth camera for capturing sensory information, wherein the sensory information comprises visual information, as taught by Perkins, to accurately detect objects in the environment (Perkins [0021]).
Okamoto does not explicitly disclose wherein the instruction is received from a head-mounted controller or at least one hand-held controller capturing movement data of the user, and wherein the head-mounted controller controls the at least one three-dimensional depth camera and the at least one hand-held controller controls the at least one robotic arm.
However, Jinno discloses the instruction is received from at least one hand-held controller capturing movement data of the user, and wherein the at least one hand-held controller controls the at least one robotic arm (Jinno Fig. 1, [0043]: a robot system having arms 24a, 24b, 24c, wherein the arms can have six degrees of freedom as in [0064], [0068]; [0067]-[0070]: left and right joysticks 38a, 38b to control the arms 24a and 24b).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the system and method, as disclosed by Okamoto and Perkins, and further incorporate having the instruction is received from at least one hand-held controller capturing movement data of the user, and wherein the at least one hand-held controller controls the at least one robotic arm, as taught by Jinno, to control the arms individually for multiple tasks at a time (Jinno [0067]).
Regarding claim 12, Okamoto, Perkins, and Jinno disclose all limitations of claim 8.
Okamoto discloses wherein the remotely located robotic unit further comprises at least one sensor selected from a group consisting of a gyroscope, an accelerometer, a thermometer, a barometer, a light emitter, a voltage detector, a weight-detection sensor, a QR reader, a magnetometer, a pose sensor, and a rotary encoder (Okamoto Figs. 2G and 8, [0152]: a hand 52 is connected to the robot arm at a distal end by contacting joint 52, i.e. adapter; [0143]: a light source 503 is also attached to joint P2 of the robot arm. Hence, the adapter is configured to equip multiple tools to perform a plurality of functions including grasping an object and emitting light).
Claims 9 and 11 are rejected under AIA 35 U.S.C. 103 as being unpatentable over Okamoto et al. (US 2007/0124024) hereinafter Okamoto, in view of Perkins et al. (US 2020/0302207) hereinafter Perkins, in view of Jinno (US 2011/0245844), further in view of Robertson (US 2022/0032476).
Regarding claim 9, Okamoto and Perkins and Jinno disclose all limitations of claim 8.
Okamoto discloses wherein the at least one robotic arm comprises a first robotic arm, and wherein the method further comprises: causing a first tool to be coupled to a first manipulator located at a first distal end of the first robotic arm (Okamoto Fig. 8, [0145]-[0146], [0152]: multi-joint arm 51 and a hand 52 arranged at the distal end of the arm 51 by joint P2 to hold an article; [0143]: light source 503 is also attached to joint P2 of the robot arm).
Okamoto does not explicitly disclose a second robotic arm, and a tool holder, and wherein the method further comprises: selecting, by the at least one hand-held controller, from a plurality of tools stored in the tool holder, a first tool; and causing the first tool to be coupled to a first manipulator located at a first distal end of the first robotic arm.
Robertson discloses a second robotic arm, and a tool holder, and wherein the method further comprises: selecting, from a plurality of tools stored in the tool holder, a first tool; and causing the first tool to be coupled to a first manipulator located at a first distal end of the first robotic arm (Robertson Figs. 1-6, [0065]-[0082], [0012], [0035]: a tool changer 500 having a tooling basket 502 comprising a plurality of mounts to receive tool attachments 200, which can be engaged by the gripper module 106 of the robotic manipulator 100 of two robotic arms for tool change; [0016]: the tool attachment can be engaged, and disengaged, by an end effector of the robotic arm, which is controlled remotely by a user).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the system, as disclosed by Okamoto and Perkins and Jinno, and further incorporate having a second robotic arm, and a tool holder, and wherein the method further comprises: selecting, from a plurality of tools stored in the tool holder, a first tool; and causing the first tool to be coupled to a first manipulator located at a first distal end of the first robotic arm, as taught by Robertson, for convenient storage, release, and faster and remote changing of a plurality of tools when needed (Robertson [0033], [0014]).
Okamoto does not explicitly disclose selecting the tool by the at least one hand-held controller.
However, Jinno discloses wherein control instruction is received from a first hand-held controller, and a second hand-held controller, and wherein the first hand-held controller controls the first robotic arm, and the second hand-held controller controls the second robotic arm (Jinno Fig. 1, [0068]; [0067]-[0070]: left and right joysticks 38a, 38b to control the arms 24a and 24b).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the system and method, as disclosed by Okamoto and Perkins and Jinno and Robertson, and further incorporate having the control instruction received from a first hand-held controller and a second hand-held controller, wherein the first hand-held controller controls the first robotic arm and the second hand-held controller controls the second robotic arm, as taught by Jinno, wherein the control instruction includes selecting the tool for changing, as disclosed by Robertson, to control the arms individually and remotely for different tasks (Jinno [0067]).
Regarding claim 11, Okamoto and Perkins and Jinno and Robertson disclose all limitations of claim 9.
Okamoto does not explicitly disclose selecting, by the at least one hand-held controller, from the plurality of tools stored in the tool holder, a second tool; causing the second tool to be coupled to a second manipulator located at a second distal end of the second robotic arm.
Robertson discloses selecting, from the plurality of tools stored in the tool holder, a second tool; causing the second tool to be coupled to a second manipulator located at a second distal end of the second robotic arm (Robertson Figs. 1-6, [0065]-[0082], [0012], [0035]: a tool changer 500 having a tooling basket 502 comprising a plurality of mounts to receive tool attachments 200, which can be engaged by the gripper module 106 of the robotic manipulator 100 of two robotic arms for tool change; [0016]: the tool attachment can be engaged, and disengaged, by an end effector of the robotic arm, which is controlled remotely by a user).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the system, as disclosed by Okamoto and Perkins and Jinno, and further incorporate selecting, from the plurality of tools stored in the tool holder, a second tool; causing the second tool to be coupled to a second manipulator located at a second distal end of the second robotic arm, as taught by Robertson, for convenient storage, release, and faster and remote changing of a plurality of tools when needed (Robertson [0033], [0014]).
Okamoto does not explicitly disclose selecting the second tool by the at least one hand-held controller.
However, Jinno discloses wherein control instruction is received from a first hand-held controller, and a second hand-held controller, and wherein the first hand-held controller controls the first robotic arm, and the second hand-held controller controls the second robotic arm (Jinno Fig. 1, [0068]; [0067]-[0070]: left and right joysticks 38a, 38b to control the arms 24a and 24b).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the system and method, as disclosed by Okamoto and Perkins and Jinno and Robertson, and further incorporate having the control instruction received from a first hand-held controller and a second hand-held controller, wherein the first hand-held controller controls the first robotic arm and the second hand-held controller controls the second robotic arm, as taught by Jinno, wherein the control instruction includes selecting the second tool for changing, as disclosed by Robertson, to control the arms individually and remotely for different tasks (Jinno [0067]).
Claim 10 is rejected under AIA 35 U.S.C. 103 as being unpatentable over Okamoto et al. (US 2007/0124024) hereinafter Okamoto, in view of Perkins et al. (US 2020/0302207) hereinafter Perkins, in view of Jinno (US 2011/0245844), in view of Robertson (US 2022/0032476), further in view of Virk (US 2020/0278660).
Regarding claim 10, Okamoto and Perkins and Jinno and Robertson disclose all limitations of claim 9.
Okamoto does not explicitly disclose removing the first tool from the first manipulator; accessing a database of locations of each of the plurality of tools on the tool holder; locating, from the database of the locations, a second tool; causing the second tool to be coupled to the first manipulator located at the first distal end of the first robotic arm.
Robertson discloses removing the first tool from the first manipulator; locating a second tool; causing the second tool to be coupled to the first manipulator located at the first distal end of the first robotic arm (Robertson Figs. 1-6, [0065]-[0082], [0012], [0035]: a tool changer 500 having a tooling basket 502 comprising a plurality of mounts to receive tool attachments 200, which can be engaged by the gripper module 106 of the robotic manipulator 100 of two robotic arms for tool change; [0016]: the tool attachment can be engaged, and disengaged, by an end effector of the robotic arm, which is controlled remotely by a user).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the system, as disclosed by Okamoto and Perkins and Jinno, and further incorporate removing the first tool from the first manipulator; locating a second tool; causing the second tool to be coupled to the first manipulator located at the first distal end of the first robotic arm, as taught by Robertson, for convenient storage, release, and faster and remote changing of a plurality of tools when needed (Robertson [0033], [0014]).
Okamoto does not explicitly disclose accessing a database of locations of each of the plurality of tools on the tool holder; locating, from the database of the locations, a second tool.
However, Virk discloses accessing a database of locations of each of the plurality of tools on the tool holder; locating, from the database of the locations, a second tool (Virk [0033]: a tooling transport device, which can be a robotic material handling device that can move between any positions needed to acquire, transport, and deposit tools in order to place or position a tool on a tooling positioning device in a workspace, and hence can deposit tools to a device such as a robot. The tooling transport devices can incorporate end effectors and grippers to move tools; [0084]: the controller queries database records to determine the current location and availability of each requested tool).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the system, as disclosed by Okamoto and Perkins and Jinno and Robertson, and further incorporate accessing a database of locations of each of the plurality of tools on the tool holder; locating, from the database of the locations, a second tool, as taught by Virk, for optimal tool arrangement and the locating, transporting, and depositing of desired tools (Virk [0002], [0033], [0084]).
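Purely as an illustration of the kind of tool-location lookup described above, the short Python sketch below models the database as an in-memory mapping; the records and names are the editor's hypotheticals, not Virk's.

    tool_db = {
        "gripper": {"slot": 3, "available": True},
        "cutter": {"slot": 7, "available": True},
        "welder": {"slot": 1, "available": False},
    }

    def locate_tool(name, db=tool_db):
        # Query the database for the holder slot of an available tool.
        record = db.get(name)
        if record is None or not record["available"]:
            return None
        return record["slot"]

    assert locate_tool("cutter") == 7  # the second tool is found via the database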
Claim 13 is rejected under AIA 35 U.S.C. 103 as being unpatentable over Okamoto et al. (US 2007/0124024) hereinafter Okamoto, in view of Perkins et al. (US 2020/0302207) hereinafter Perkins, in view of Jinno (US 2011/0245844), in view of Saroha et al. (US 2019/0076092) hereinafter Saroha, further in view of Prior et al. (US 2021/0282871) hereinafter Prior.
Regarding claim 13, Okamoto and Perkins and Jinno disclose all limitations of claim 8.
Okamoto does not explicitly disclose wherein the at least one hand-held controller comprises a selectable button that is used to selectively switch to an observer mode, and wherein the method further comprises: responsive to receiving input via the selectable button, switching into the observer mode; providing the sensory information to the user; receiving a second instruction; and preventing the second instruction from being sent to the control system.
Saroha discloses the at least one hand-held controller comprises a selectable button that is used to selectively switch to an observer mode, and wherein the method further comprises: responsive to receiving input via the selectable button, switching into the observer mode; providing the sensory information to the user; receiving a second instruction; and preventing the second instruction from being sent to the control system (Saroha [0043]: control system to control handheld devices or robots; Figs. 1 and 6B, [0044]-[0045]: control of steering device 110 by a steering controller 118, i.e. a handheld controller, which can be a joystick having buttons that receive user inputs, i.e. from the first user, indicating desired movement of the distal portion 104 of the device 102; [0034]: the steering device can also be controlled by a remote user, i.e. a second user, in a separate room or across the world with multi-level actuation controls; [0051], [0086]: user interface for display; Fig. 7, [0057]-[0058], [0061], [0086]-[0089], [0065], [0068], [0077]: when a console override is activated, handheld operation of the steering device 110 is disabled or deactivated and the steering device is controlled by the second user using the console, hence preventing the second instruction from being sent to the control system).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the system and method, as disclosed by Okamoto and Perkins and Jinno, and further incorporate having the at least one hand-held controller comprise a selectable button that is used to selectively switch to an observer mode, and wherein the method further comprises: responsive to receiving input via the selectable button, switching into the observer mode; providing the sensory information to the user; receiving a second instruction; and preventing the second instruction from being sent to the control system, as taught by Saroha, for multi-level control of the device to improve control and functionality of the device (Saroha [0043]-[0044], [0060]).
Prior also discloses switching into the observer mode; providing the sensory information to the user; receiving a second instruction; and preventing the second instruction from being sent to the control system (Prior [0047]: the computer stops transmitting movement commands from the user interface device to the robotic arm if certain movement limits or other thresholds are exceeded, in essence acting like a virtual clutch mechanism, hence preventing the second instruction from being sent to the control system; [0058], [0056]: display images on a display).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the system and method, as disclosed by Okamoto and Perkins and Jinno and Saroha, and further incorporate switching into the observer mode; providing the sensory information to the user; receiving a second instruction; and preventing the second instruction from being sent to the control system, as taught by Prior, to avoid undesired movement during control when needed (Prior [0047]).
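The observer-mode behavior discussed for claim 13 amounts to gating operator commands behind a mode flag while sensory display continues. The Python sketch below is the editor's minimal illustration of that pattern, not Saroha's or Prior's code; all identifiers are invented.

    class TeleopSession:
        def __init__(self):
            self.observer_mode = False
            self.sent = []  # commands actually forwarded to the control system

        def toggle_observer(self):
            # Selectable-button handler: flip observer mode on or off.
            self.observer_mode = not self.observer_mode

        def handle_instruction(self, command):
            # Commands are forwarded only when not observing; otherwise the
            # instruction is received but dropped before reaching the
            # control system, akin to a virtual clutch.
            if self.observer_mode:
                return False
            self.sent.append(command)
            return True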
Claim 14 is rejected under AIA 35 U.S.C. 103 as being unpatentable over Okamoto et al. (US 2007/0124024) hereinafter Okamoto, in view of Perkins et al. (US 2020/0302207) hereinafter Perkins, in view of Jinno (US 2011/0245844), further in view of Kobayashi et al. (US 2024/0061632) hereinafter Kobayashi.
Regarding claim 14, Okamoto and Perkins and Jinno disclose all limitations of claim 8.
Okamoto does not explicitly disclose wherein the head-mounted controller comprises a sensor for recording a first viewing angle of the user, and wherein the method further comprises: adjusting a second viewing angle of the camera unit based at least in part on the first viewing angle.
However, Kobayashi discloses wherein the head-mounted controller comprises a sensor for recording a first viewing angle of the user, and wherein the method further comprises: adjusting a second viewing angle of the camera unit based at least in part on the first viewing angle (Kobayashi [0064]-[0065]: a head-mounted display having a head posture detection unit, wherein a viewing angle of the user is calculated from the output of the head posture detection unit, and an angular field of the camera is changed so that the viewing angle of the user matches the angular field of the camera).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the system and method, as disclosed by Okamoto and Perkins and Jinno, and further incorporate having the head-mounted controller comprise a sensor for recording a first viewing angle of the user, the method further comprising adjusting a second viewing angle of the camera unit based at least in part on the first viewing angle, as taught by Kobayashi, for the camera angle of view to follow the angle of view of the user and capture the desired view (Kobayashi [0065]).
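Kobayashi's angle-matching step can be pictured as deriving a target viewing angle and clamping the camera's angular field to it. The Python sketch below is the editor's geometric reading only (the display width and eye distance stand in for the head-posture-derived angle, which is abstracted away); all names and limits are invented.

    import math

    def user_viewing_angle(display_width_m, eye_distance_m):
        # Horizontal angle the display subtends at the user's eye, in degrees.
        return 2 * math.degrees(math.atan2(display_width_m / 2, eye_distance_m))

    def match_camera_to_user(camera, display_width_m, eye_distance_m):
        # Set the camera's angular field to the user's viewing angle,
        # clamped to the camera's supported zoom range.
        target = user_viewing_angle(display_width_m, eye_distance_m)
        camera["fov_deg"] = max(camera["min_fov_deg"],
                                min(camera["max_fov_deg"], target))
        return camera["fov_deg"]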
Claims 15-16 are rejected under AIA 35 U.S.C. 103 as being unpatentable over Okamoto et al. (US 2007/0124024) hereinafter Okamoto, in view of Perkins et al. (US 2020/0302207) hereinafter Perkins, in view of Jinno (US 2011/0245844), further in view of Saroha et al. (US 2019/0076092) hereinafter Saroha.
Regarding claim 15, Okamoto discloses a method for operating a remotely located robotic unit comprising:
providing the remotely located robotic unit, comprising: a movable camera mount; at least one camera to capture visual information and at least one three-dimensional camera for capturing three-dimensional depth information disposed on the movable camera mount; and at least one robotic arm (Okamoto Fig. 8, [0150], [0148]: robot 50 having cameras 502 attached to joint part 51a at the middle of arm 51, attached to main body 53 in a position-changeable manner, hence a camera with a movable camera mount. A pair of cameras 502 is mounted on the robot unit to recognize objects; Figs. 2J-2L, [0129]: a three-dimensional model of the environment can be generated; [0149]: sensors can be used to determine distance to obstacles);
positioning the remotely located robotic unit proximate to an object; capturing sensory information from the at least one camera and the at least one three-dimensional camera, wherein the sensory information comprises the visual information and the three-dimensional depth information; receiving, at a control system, the sensory information; responsive to receiving the sensory information, creating a three-dimensional representation of the object based at least in part on the three-dimensional depth information (Okamoto Fig. 2A, [0121], [0143], [0145]: the robot moves in the environment and proximate to the object to pick and transport objects; [0148]: pair of cameras 502 for a visual sensor; Figs. 2J-2L, [0129]: a three-dimensional model of the environment can be generated);
causing display of the visual information captured from the at least one camera on a display associated with the control system (Okamoto Fig. 2C, [0178], [0180]-[0181]: the operation terminal 403 includes a display device for displaying an operation screen based on image data of the environment taken by the camera 502);
receiving, from a first user associated with the control system, an instruction for the remotely located robotic unit to perform an action (Okamoto [0161], [0170], [0177]-[0181], [0199], [0216]: operation terminal 403 is a user interface to receive instruction from the user to operate the robot).
Okamoto broadly discloses at least one depth camera disposed on the camera mount to capture three-dimensional depth information, and receive the three-dimensional depth information and create a three-dimensional representation of an object based at least in part on the three-dimensional depth information, as discussed above.
Furthermore, Perkins discloses at least one depth camera disposed on the camera mount to capture three-dimensional depth information, and receive the three-dimensional depth information and create a three-dimensional representation of an object based at least in part on the three-dimensional depth information (Perkins Fig. 1A, [0043]: robot system having sensor 170 mounted on arm 160; [0044]: the sensor 170 comprises camera 172 and depth sensor 172b, whose data are used to generate three-dimensional depth information).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the system and method, as disclosed by Okamoto, and further incorporate having at least one depth camera disposed on the camera mount to capture three-dimensional depth information, and receive the three-dimensional depth information and create a three-dimensional representation of an object based at least in part on the three-dimensional depth information, as taught by Perkins, to accurately detect objects in the environment (Perkins [0021]).
Okamoto does not explicitly disclose receiving, from a first user associated with the control system, via a head-mounted controller or at least one hand-held controller capturing movement data of the first user, an instruction for the remotely located robotic unit to perform an action; wherein the head-mounted controller controls the movable camera mount and the at least one hand-held controller controls the at least one robotic arm action.
However, Jinno discloses receiving, from a first user associated with the control system, via a head-mounted controller or at least one hand-held controller capturing movement data of the first user, an instruction for the remotely located robotic unit to perform an action; wherein the head-mounted controller controls the movable camera mount and the at least one hand-held controller controls the at least one robotic arm action (Jinno Fig. 1, [0043]: a robot system having arms 24a, 24b, 24c, wherein the arms can have six degrees of freedom as in [0064], [0068]; [0067]-[0070]: left and right joysticks 38a, 38b to control the arms 24a and 24b).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the system and method, as disclosed by Okamoto and Perkins, and further incorporate receiving, from a first user associated with the control system, via a head-mounted controller or at least one hand-held controller capturing movement data of the first user, an instruction for the remotely located robotic unit to perform an action; wherein the head-mounted controller controls the movable camera mount and the at least one hand-held controller controls the at least one robotic arm action, as taught by Jinno, to control the arms individually for multiple tasks at a time (Jinno [0067]).
Okamoto does not explicitly disclose responsive to receiving an override instruction from a second user associated with the control system, overriding the instruction from the first user.
However, Saroha discloses responsive to receiving an override instruction from a second user associated with the control system, overriding the instruction from the first user (Saroha [0043]: control system to control handheld devices or robots; Figs. 1 and 6B, [0044]: control of steering device 110 by a steering controller 118, i.e. a handheld controller, which can be a joystick that receives user inputs, i.e. from the first user, indicating desired movement of the distal portion 104 of the device 102; [0034]: the steering device can also be controlled by a remote user, i.e. a second user, in a separate room or across the world with multi-level actuation controls; Fig. 7, [0057]-[0058], [0061], [0086]-[0089], [0065], [0068], [0077]: when a console override is activated, handheld operation of the steering device 110 is disabled or deactivated and the steering device is controlled by the second user using the console).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the system and method, as disclosed by Okamoto and Perkins and Jinno, and further incorporate having responsive to receiving an override instruction from a second user associated with the control system, overriding the instruction from the first user, as taught by Saroha, for multi-level control of the device to improve control and functionality of the device (Saroha [0043]-[0044], [0060]).
Regarding claim 16, Okamoto and Perkins and Jinno and Saroha disclose all limitations of claim 15.
Okamoto does not explicitly disclose further responsive to receiving the override instruction from the second user, disabling further instructions from the first user.
However, Saroha discloses further responsive to receiving the override instruction from the second user, disabling further instructions from the first user (Saroha [0043]: control system to control handheld devices or robots; Figs. 1 and 6B, [0044]: control of steering device 110 by a steering controller 118, i.e. a handheld controller, which can be a joystick that receives user inputs, i.e. from the first user, indicating desired movement of the distal portion 104 of the device 102; [0034]: the steering device can also be controlled by a remote user, i.e. a second user, in a separate room or across the world with multi-level actuation controls; Fig. 7, [0057]-[0058], [0061], [0086]-[0089], [0065], [0068], [0077]: when a console override is activated, handheld operation of the steering device 110 is disabled or deactivated and the steering device is controlled by the second user using the console, hence disabling further instructions from the first user).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the system and method, as disclosed by Okamoto and Perkins and Jinno, and further incorporate having further responsive to receiving the override instruction from the second user, disabling further instructions from the first user, as taught by Saroha, for multi-level control of the device to improve control and functionality of the device (Saroha [0043]-[0044], [0060]).
Claims 17-18 are rejected under AIA 35 U.S.C. 103 as being unpatentable over Okamoto et al. (US 2007/0124024) hereinafter Okamoto, in view of Perkins et al. (US 2020/0302207) hereinafter Perkins, in view of Jinno (US 2011/0245844), further in view of Saroha et al. (US 2019/0076092) hereinafter Saroha, further in view of Taylor et al. (US 2015/0312468) hereinafter Taylor.
Regarding claim 17, Okamoto and Perkins and Jinno and Saroha disclose all limitations of claim 15.
Okamoto does not explicitly disclose wherein the movable camera mount is a six degree-of-freedom camera mount and further comprising: capturing, by the head-mounted controller, movement data of the first user; and replicating, by the six degree-of-freedom camera mount, the movement data.
However, Jinno discloses the camera mount is configured for moving with six degrees of freedom (Jinno [0061]: a camera mounted on the camera arm which is configured for moving with six degrees of freedom as in [0064]).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the system, as disclosed by Okamoto and Perkins and Jinno, and further incorporate having the camera mount is configured for moving with six degrees of freedom, as taught by Jinno, to capture images from different angles (Jinno [0064]).
Furthermore, Taylor discloses capturing, by the head-mounted controller, movement data of the first user; and replicating, by the camera mount, the movement data (Taylor Figs. 2-3, [0051]-[0052]: a head-mounted display HMD 205 includes a display to display video from a camera as in [0064]; [0051]-[0059]: the HMD 205 also includes a sensor; [0059]: the sensor on the HMD determines the orientation of the HMD, including head angle and movement. The HMD can transmit and receive signals, including orientation information of the HMD, to a controller to control the camera mounting arm based on the position of the HMD, so that the camera mount is rotated or tilted to the same angle as the user’s head, hence replicating the movement data. Controlling a remote camera system having first and second cameras 320A and 320B by head rotation based on the tilt angle of the head detected by the sensor. The camera system includes camera mount arms 305 and 310 as in Fig. 3A, [0053]).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the system and method having the six degree-of-freedom camera mount, as disclosed by Okamoto and Perkins and Jinno and Saroha, and further incorporate capturing, by the head-mounted controller, movement data of the first user; and replicating, by the camera mount, the movement data, as taught by Taylor, for the user to conveniently control the camera position based on the user’s head position to see the desired view (Taylor [0051]-[0053]).
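Taylor's replication step reduces, in the simplest reading, to copying the HMD's yaw and pitch onto the mount's pan and tilt within mechanical limits. The Python sketch below is the editor's illustration only; the limit values and field names are invented.

    def replicate_head_motion(hmd_orientation, mount):
        # Command the mount to the same pan/tilt as the user's head,
        # clamped to hypothetical mechanical limits.
        mount["pan_deg"] = max(-170.0, min(170.0, hmd_orientation["yaw_deg"]))
        mount["tilt_deg"] = max(-90.0, min(90.0, hmd_orientation["pitch_deg"]))
        return mount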
Regarding claim 18, Okamoto and Perkins and Jinno and Saroha disclose all limitations of claim 15.
Okamoto does not explicitly disclose causing display of a heads-up display to the first user, the heads-up display configured to display at least one of: machine diagnostic information, a timer, a clock, a measured voltage, or a warning.
However, Jinno discloses causing display of a display to the first user, the display configured to display at least one of: machine diagnostic information, a timer, a clock, a measured voltage, or a warning (Jinno [0081]: displaying an alarm of danger of an interference avoiding motion on the monitor, hence a warning).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the system and method, as disclosed by Okamoto and Perkins and Jinno and Saroha, and further incorporate causing display of a display to the first user, the display configured to display at least one of: machine diagnostic information, a timer, a clock, a measured voltage, or a warning, as taught by Jinno, to notify the user of a situation that needs attention (Jinno [0081]).
Okamoto does not explicitly disclose that the display is a heads-up display.
However, Taylor discloses a heads-up display (Taylor [0051]: a head-mounted display HMD having a display to display images and various other information).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the system and method, as disclosed by Okamoto and Perkins and Jinno and Saroha, and further incorporate having the display be a heads-up display, as taught by Taylor, for convenient and remote viewing (Taylor [0051], [0014]).
Claim 19 is rejected under AIA 35 U.S.C. 103 as being unpatentable over Okamoto et al. (US 2007/0124024) hereinafter Okamoto, in view of Perkins et al. (US 2020/0302207) hereinafter Perkins, in view of Jinno (US 2011/0245844), further in view of Saroha et al. (US 2019/0076092) hereinafter Saroha, further in view of Smith et al. (US 2022/0212345) hereinafter Smith.
Regarding claim 19, Okamoto and Perkins and Jinno and Saroha disclose all limitations of claim 15.
Okamoto discloses control of the movable camera mount and the at least one robotic arm.
Jinno discloses the handheld controller as discussed in claim 15 above.
Okamoto does not explicitly disclose wherein the at least one hand-held controller comprises a selectable button to toggle control between the movable camera mount and the at least one robotic arm.
However, Smith discloses controller comprises a selectable button to toggle control between the movable camera mount and the at least one robotic arm (Smith Fig. 1, [0084]-[0086]: first robotic system 104a having first manipulator arm 116a and second robotic system 104b having second manipulator 116b and other devices such as cameras; [0099]: switch inputs operable by the user to selectively switch between modes for control over the first or second robotic system 104a or 104b, hence a selectable button to toggle control between robotic arms or camera mount).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the system and method having the handheld controller, as disclosed by Okamoto and Perkins and Jinno and Saroha, and further incorporate having the handheld controller comprises a selectable button to toggle control between the movable camera mount and the at least one robotic arm, as taught by Smith, for the user to selectively operate each of the robotic arm or the camera mount when desired (Smith [0099]).
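Smith's mode switch amounts to a button that cycles which device consumes the controller's input. The Python sketch below is the editor's hypothetical illustration of that pattern, not Smith's implementation; all identifiers are invented.

    TARGETS = ("camera_mount", "robotic_arm")

    class HandheldController:
        def __init__(self):
            self.index = 0  # start by controlling the camera mount

        def toggle(self):
            # Selectable-button handler: cycle to the next control target.
            self.index = (self.index + 1) % len(TARGETS)
            return TARGETS[self.index]

        def send(self, command, devices):
            # Route the command to whichever device is currently selected.
            devices[TARGETS[self.index]].append(command)

    devices = {"camera_mount": [], "robotic_arm": []}
    pad = HandheldController()
    pad.send("pan_left", devices)   # goes to the camera mount
    pad.toggle()                    # now controlling the robotic arm
    pad.send("open_gripper", devices)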
Claim 20 is rejected under AIA 35 U.S.C. 103 as being unpatentable over Okamoto et al. (US 2007/0124024) hereinafter Okamoto, in view of Perkins et al. (US 2020/0302207) hereinafter Perkins, in view of Jinno (US 2011/0245844), in view of Saroha et al. (US 2019/0076092) hereinafter Saroha, in view of Smith et al. (US 2022/0212345) hereinafter Smith, in view of Wang et al. (US 2021/0055744) hereinafter Wang, further in view of Taylor et al. (US 2015/0312468) hereinafter Taylor.
Regarding claim 20, Okamoto and Perkins and Jinno and Saroha and Smith disclose all limitations of claim 19.
Okamoto does not explicitly disclose causing display of a heads-up display configured to display which of the movable camera mount and the at least one robotic arm the first user is currently controlling via the heads-up display.
However, Wang discloses causing display of a display configured to display which of the movable camera mount and the at least one robot the first user is currently controlling via the display (Wang Figs. 1-3, [0060]-[0061]: plurality of drones 1-3 each equipped with camera can be controlled by a single controller 10; [0074]-[0082]: icon [drone 2] is brightly displayed on a display section to indicate that the drone 2 is set as the current control target device of the controller 10. Icons for switching a control target drone are also displayed).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the system and method having the movable camera mount and the robotic arm, as disclosed by Okamoto and Perkins and Jinno and Saroha and Smith, and further incorporate causing display of the display configured to display which of the movable camera mount and the at least one robotic arm the first user is currently controlling via the display, as taught by Wang, to notify the user of the device being controlled by the controller (Wang [0079]).
Okamoto does not explicitly disclose that the display is a heads-up display.
However, Taylor discloses a heads-up display (Taylor [0051]: a head-mounted display HMD having a display to display images and various other information).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the system and method, as disclosed by Okamoto and Perkins and Jinno and Saroha and Smith and Wang, and further incorporate having the display be a heads-up display, as taught by Taylor, for convenient and remote viewing (Taylor [0051], [0014]).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KATHLEEN V NGUYEN whose telephone number is (571)270-0626.
The examiner can normally be reached on M-F 9:00am-6:00pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jamie Atala can be reached on 571-272-7384. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KATHLEEN V NGUYEN/Primary Examiner, Art Unit 2486