DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA. Pursuant to communications filed on 17 December 2024, this is a First Action Non-Final Rejection on the Merits. Claims 1-20 are currently pending in the instant application.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 14 March 2025 and 01 July 2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements have been considered by the Examiner.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-17 and 19-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Hasegawa et al. (US 2009/0312867 A1, hereinafter Hasegawa).
Regarding claim 1, Hasegawa teaches a method comprising:
obtaining, by data processing hardware (Figures 1-3, control unit 60; at least paragraphs 0035, 0039 and 0041-0043) of a legged robot (Figure 1, robot 1), first sensor data associated with an environment of the legged robot (Figures 1-5; at least as in paragraphs 0049-0051, specifically “where the robot 1 carries out an operation of pushing an object 120 while walking with the hands 44R, 44L of both arms 5, 5 engaged with predetermined portions of the object 120 (a carriage in the illustrated example) as an example, as shown in, for instance, FIG. 4. A force received by the robot 1 from the object 120 is an object reaction force.”);
identifying, by the data processing hardware, an object in the environment based on the first sensor data (Figures 1-5; at least as in paragraphs 0049-0051, specifically wherein object 120 is identified relative/proximate to said robot 1 at least through an object reaction force);
instructing, by the data processing hardware, a first movement of a leg (Figure 1, leg(s) 2; at least as in paragraphs 0034-0036) of the legged robot to a location within the environment associated with the object, wherein a body (Figure 1, body 3; at least as in paragraphs 0034-0038) of the legged robot experiences a first force based on the first movement of the leg (Figures 1-5; at least as in paragraphs 0049-0051, 0075-0081 and 0257-0259, specifically wherein a desired ZMP is controlled/maintained based on a floor reaction force experienced by the legs (i.e., feet) of the robot). The Examiner notes that the claimed first force corresponds to a floor reaction force experienced by the body from a first one of the two legs (i.e., feet) of the robot;
instructing, by the data processing hardware, an interaction of a movable end-effector (Figure 1, hands 44R, 44L; at least as in paragraphs 0037, 0049) of the legged robot with the object based on identifying the object and instructing the first movement of the leg, wherein the body experiences a second force based on the interaction of the movable end-effector with the object (Figures 1-5; at least as in paragraphs 0049-0051, 0075-0081 and 0257-0259, specifically wherein a desired ZMP is controlled/maintained based on an object reaction force experienced by the arms (i.e., hands) of the robot). The Examiner notes that the claimed second force corresponds to an object reaction force experienced by the body from a first one of the two arms (i.e., hands) of the robot; and
balancing, by the data processing hardware, the body based on the first force and the second force (Figures 1-5; at least as in paragraphs 0049-0051, 0075-0081 and 0257-0259, specifically wherein a resultant force is determined, such that a ZMP of the robot is corrected and/or maintained, such that balance of the robot is achieved).
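For explanatory purposes only, the ZMP-based balancing relied upon above may be illustrated with a minimal one-dimensional sketch. This sketch is not taken from the Hasegawa reference; all function names and numeric values below are hypothetical. The idea is that the floor reaction force (the claimed first force) and the object reaction force (the claimed second force) are combined, and balance holds when the resulting zero moment point (ZMP) lies within the foot support region:

```python
# Hypothetical 1-D ZMP balance check. Each contact is an
# (x_position, vertical_force) pair acting on the robot body.

def zmp_x(contacts):
    """x-coordinate of the ZMP: force-weighted average of contact positions."""
    total_fz = sum(fz for _, fz in contacts)
    return sum(x * fz for x, fz in contacts) / total_fz

def is_balanced(contacts, support_x_min, support_x_max):
    """Balanced (in this simplified sketch) when the ZMP lies
    inside the foot support interval."""
    return support_x_min <= zmp_x(contacts) <= support_x_max

# Floor reaction at the stance foot plus an object reaction
# transmitted through the arms from a pushed object (made-up values):
forces = [(0.05, 600.0), (0.10, 80.0)]
print(is_balanced(forces, 0.0, 0.15))  # True: ZMP lies within support
```

A real biped controller of the kind Hasegawa describes would perform this check in two dimensions over the full support polygon and would correct the desired ZMP when the resultant drifts toward its boundary; the sketch only conveys the resultant-force concept.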
Regarding claim 2, Hasegawa further teaches wherein instructing the first movement of the leg comprises: providing a first set of instructions to the legged robot, and wherein instructing the interaction of the movable end-effector with the object comprises: providing a second set of instructions to the legged robot, the method further comprising: moving the leg based on the first set of instructions; and interacting the movable end-effector with the object based on the second set of instructions (Figures 1-5; at least as in paragraphs 0049-0051, 0075-0081 and 0257-0259).
Regarding claim 3, Hasegawa further teaches wherein balancing the body comprises: instructing a second movement of the leg to balance the body based on the first force and the second force (Figures 1-5; at least as in paragraphs 0049-0051, 0075-0081 and 0257-0259).
Regarding claim 4, Hasegawa further teaches wherein balancing the body comprises: instructing a second movement of the body to balance the body based on the first force and the second force (Figures 1-5; at least as in paragraphs 0049-0051, 0075-0081 and 0257-0259).
Regarding claim 5, Hasegawa further teaches wherein balancing the body comprises: instructing adjustment of a center of mass of the legged robot to balance the body based on the first force and the second force (Figures 1-5; at least as in paragraphs 0049-0051, 0075-0081 and 0257-0259).
Regarding claim 6, Hasegawa teaches the method further comprising: determining a third force to balance the second force, wherein balancing the body comprises: balancing the body further based on the third force (Figures 1-5; at least as in paragraphs 0049-0051, 0075-0081 and 0257-0259).
Regarding claim 7, Hasegawa teaches the method further comprising: determining a third force to balance the body, wherein one or more of: instructing the first movement of the leg is based on determining the third force; or instructing the interaction of the movable end-effector with the object is based on determining the third force (Figures 1-5; at least as in paragraphs 0049-0051, 0075-0081 and 0257-0259).
Regarding claim 8, Hasegawa further teaches wherein the interaction of the movable end-effector with the object comprises one or more of: gripping the object; moving the object; turning the object; carrying the object; pushing the object; or pulling the object (Figures 1-5; at least as in paragraphs 0049-0051, 0075-0081 and 0257-0259).
Regarding claim 9, Hasegawa further teaches wherein instructing the interaction of the movable end-effector with the object comprises: instructing the legged robot with the leg located at the location to extend the movable end-effector to a position within the environment associated with the object (Figures 1-5; at least as in paragraphs 0049-0051, 0075-0081 and 0257-0259).
Regarding claim 10, Hasegawa further teaches wherein the movable end-effector experiences a third force based on the interaction of the movable end-effector with the object, and wherein the third force is different from the second force (Figures 1-5; at least as in paragraphs 0049-0051, 0075-0081 and 0257-0259).
Regarding claim 11, Hasegawa further teaches wherein the leg experiences a third force based on the first movement of the leg, wherein the third force is different from the first force, wherein the movable end-effector experiences a fourth force based on the interaction of the movable end-effector with the object, and wherein the fourth force is different from the second force (Figures 1-5; at least as in paragraphs 0049-0051, 0075-0081 and 0257-0259).
Regarding claim 12, Hasegawa teaches the method further comprising: obtaining second sensor data; and identifying one or more of the first force or the second force based on the first sensor data (Figures 1-5; at least as in paragraphs 0049-0051, 0075-0081 and 0257-0259).
Regarding claim 13, Hasegawa teaches the method further comprising: positioning the body according to one or more first degrees of freedom based on the first movement of the leg; and positioning the body according to one or more second degrees of freedom based on the interaction of the movable end-effector with the object (Figures 1-5; at least as in paragraphs 0049-0051, 0075-0081 and 0257-0259).
Regarding claim 14, Hasegawa further teaches wherein instructing the first movement of the leg comprises: instructing a foot of the leg to contact a surface at the location (Figures 1-5; at least as in paragraphs 0049-0051, 0075-0081 and 0257-0259).
Regarding claim 15, Hasegawa further teaches wherein the first sensor data is based on movement by the legged robot within the environment according to a gait (Figures 1-5; at least as in paragraphs 0049-0051, 0075-0081 and 0257-0259).
Regarding claim 16, Hasegawa teaches a system comprising:
data processing hardware (Figures 1-3, control unit 60; at least paragraphs 0035, 0039 and 0041-0043); and
memory (Figures 1-3, RAM 84, ROM 94) in communication with the data processing hardware (Figures 1-3; at least as in paragraphs 0041-0043), the memory storing instructions, wherein execution of the instructions by the data processing hardware causes the data processing hardware to:
obtain first sensor data associated with an environment of a legged robot (Figures 1-5; at least as in paragraphs 0049-0051, specifically “where the robot 1 carries out an operation of pushing an object 120 while walking with the hands 44R, 44L of both arms 5, 5 engaged with predetermined portions of the object 120 (a carriage in the illustrated example) as an example, as shown in, for instance, FIG. 4. A force received by the robot 1 from the object 120 is an object reaction force.”);
identify an object in the environment based on the first sensor data (Figures 1-5; at least as in paragraphs 0049-0051, specifically wherein object 120 is identified relative/proximate to said robot 1 at least through an object reaction force);
instruct a first movement of a leg of the legged robot to a location within the environment associated with the object, wherein a body of the legged robot experiences a first force based on the first movement of the leg (Figures 1-5; at least as in paragraphs 0049-0051, 0075-0081 and 0257-0259, specifically wherein a desired ZMP is controlled/maintained based on a floor reaction force experienced by the legs (i.e., feet) of the robot). The Examiner notes that the claimed first force corresponds to a floor reaction force experienced by the body from a first one of the two legs (i.e., feet) of the robot;
instruct an interaction of a movable end-effector of the legged robot with the object based on identifying the object and instructing the first movement of the leg, wherein the body experiences a second force based on the interaction of the movable end-effector with the object (Figures 1-5; at least as in paragraphs 0049-0051, 0075-0081 and 0257-0259, specifically wherein a desired ZMP is controlled/maintained based on an object reaction force experienced by the arms (i.e., hands) of the robot). The Examiner notes that the claimed second force corresponds to an object reaction force experienced by the body from a first one of the two arms (i.e., hands) of the robot; and
balance the body based on the first force and the second force (Figures 1-5; at least as in paragraphs 0049-0051, 0075-0081 and 0257-0259, specifically wherein a resultant force is determined, such that a ZMP of the robot is corrected and/or maintained, such that balance of the robot is achieved).
Regarding claim 17, Hasegawa further teaches the system further comprising: a first control system (Figure 3, leg main controller 104); and a second control system (Figure 3, arm main controller 106), wherein the first control system and the second control system are implemented with the data processing hardware, and wherein the first control system is configured to operate the leg and the second control system is configured to operate the movable end-effector (Figures 1-5; at least as in paragraphs 0049-0051, 0075-0081 and 0257-0259, specifically as shown in at least Figure 3, and as disclosed in at least paragraphs 0258-0259).
Regarding claim 19, Hasegawa teaches a robot (Figure 1, robot 1) comprising:
a body (Figure 1, body 3; at least as in paragraphs 0034-0038);
at least two legs coupled to the body (Figure 1, leg(s) 2; at least as in paragraphs 0034-0036);
a movable end-effector coupled to the body (Figure 1, hands 44R, 44L; at least as in paragraphs 0037, 0049);
data processing hardware (Figures 1-3, control unit 60; at least paragraphs 0035, 0039 and 0041-0043); and
memory (Figures 1-3, RAM 84, ROM 94) in communication with the data processing hardware (Figures 1-3; at least as in paragraphs 0041-0043), the memory storing instructions, wherein execution of the instructions by the data processing hardware causes the data processing hardware to:
obtain first sensor data associated with an environment of the robot (Figures 1-5; at least as in paragraphs 0049-0051, specifically “where the robot 1 carries out an operation of pushing an object 120 while walking with the hands 44R, 44L of both arms 5, 5 engaged with predetermined portions of the object 120 (a carriage in the illustrated example) as an example, as shown in, for instance, FIG. 4. A force received by the robot 1 from the object 120 is an object reaction force.”);
identify an object in the environment based on the first sensor data (Figures 1-5; at least as in paragraphs 0049-0051, specifically wherein object 120 is identified relative/proximate to said robot 1 at least through an object reaction force);
instruct a first movement of a leg of the at least two legs to a location within the environment associated with the object, wherein the body experiences a first force based on the first movement of the leg (Figures 1-5; at least as in paragraphs 0049-0051, 0075-0081 and 0257-0259, specifically wherein a desired ZMP is controlled/maintained based on a floor reaction force experienced by the legs (i.e., feet) of the robot). The Examiner notes that the claimed first force corresponds to a floor reaction force experienced by the body from a first one of the two legs (i.e., feet) of the robot;
instruct an interaction of the movable end-effector with the object based on identifying the object and instructing the first movement of the leg, wherein the body experiences a second force based on the interaction of the movable end-effector with the object (Figures 1-5; at least as in paragraphs 0049-0051, 0075-0081 and 0257-0259, specifically wherein a desired ZMP is controlled/maintained based on an object reaction force experienced by the arms (i.e., hands) of the robot). The Examiner notes that the claimed second force corresponds to an object reaction force experienced by the body from a first one of the two arms (i.e., hands) of the robot; and
balance the body based on the first force and the second force (Figures 1-5; at least as in paragraphs 0049-0051, 0075-0081 and 0257-0259, specifically wherein a resultant force is determined, such that a ZMP of the robot is corrected and/or maintained, such that balance of the robot is achieved).
Regarding claim 20, Hasegawa further teaches wherein the leg experiences a third force based on the first movement of the leg, wherein the third force is different from the first force, wherein the movable end-effector experiences a fourth force based on the interaction of the movable end-effector with the object, and wherein the fourth force is different from the second force (Figures 1-5; at least as in paragraphs 0049-0051, 0075-0081 and 0257-0259, specifically wherein a desired ZMP is controlled/maintained based on one or more of an object reaction force experienced by the arms (i.e., hands) of the robot and a floor reaction force experienced by the legs (i.e., feet) of the robot). The Examiner notes that the claimed third force corresponds to a floor reaction force experienced by at least one of the two legs (i.e., feet) of the robot as it moves/traverses with the object within the environment, and further that the claimed fourth force corresponds to an object reaction force experienced by at least one of the two arms (i.e., hands) of the robot as it moves/pushes the object within the environment.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Hasegawa et al. (US 2009/0312867 A1, hereinafter Hasegawa) in view of Gillett (US 10,625,593 B2).
The teachings of Hasegawa have been discussed above.
Regarding claim 18, Hasegawa is silent regarding wherein the first sensor data comprises vision sensor data. Gillett, in the same field of endeavor, teaches a self-balancing robot system that includes an array of sensors and cameras to scan the surrounding environment, wherein the resulting image data is utilized to balance the robot (Figures 8 & 9; at least as in column 15, line 58-column 16, line 24, column 17, lines 11-35 and column 20, lines 1-51). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the instant invention to modify the teachings of Hasegawa to include Gillett’s teaching of utilizing image data to maintain the balance of a robot, since Gillett teaches that utilizing imaging data enhances/improves the balancing of the robot as it traverses an environment, thereby providing a more robust and dynamic robotic system for transporting one or more objects within said environment.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See the attached PTO-892 (Notice of References Cited) form. The Examiner additionally notes the following prior art references, in the same field of endeavor as the instant invention, which also appear to teach several of the claim limitations addressed above:
US 2011/0040407 A1 to Lim et al., which is directed to an apparatus and method for stabilizing a humanoid robot, and specifically relates to lifting and holding a heavy object while maintaining the balance of said humanoid robot.
US 2007/0156283 A1 to Takenaka, which is directed to a gait generator for a bipedal robot.
US 2003/0173926 A1 to Hattori et al., which is directed to a humanoid robot including upper limbs, lower limbs and a trunk (i.e., body), which is controlled to move such that the balance of the robot is maintained.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JONATHAN L SAMPLE whose telephone number is (571)270-5925. The examiner can normally be reached Monday-Friday 7:00am-4:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Adam Mott can be reached at (571)270-5376. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JONATHAN L SAMPLE/Primary Examiner, Art Unit 3657