DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The applicant filed an information disclosure statement (IDS) on 4/25/2023. Each cited reference has been annotated and considered.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the claims at issue are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the reference application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The USPTO internet Web site contains terminal disclaimer forms which may be used. Please visit http://www.uspto.gov/forms/. The filing date of the application will determine what form should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to http://www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.

The claims of the instant application are rejected on the ground of nonstatutory double patenting as being unpatentable over the claims of US Patent 11597078. Although the claims at issue are not identical, they are not patentably distinct from each other because the scope of the claims in the instant application is encompassed by the claims of US Patent 11597078, as mapped below:
Instant Application 18116118    US Patent 11597078
1, 8 and 15                     1, 7
2                               1, 9
4                               1, 7
7                               14, 21
12                              18
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Regarding claim 1 (and similarly claims 8 and 15), the Applicant claims “a robot to grasp an object”. However, it is not clear whether the robot grasps any object or an object specifically related to the hand. The Specification and drawings teach the robot grasping an object from a hand; accordingly, for purposes of examination, the limitation is interpreted as reciting a robot to grasp an object from a hand.
Regarding claim 11, the limitation “the interference” is unclear because there is insufficient antecedent basis for this limitation in the claim.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 3, 5-12 and 14-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Hinkle (US Patent 10471591, hereinafter “Hinkle”).
Regarding claim 1 (and similarly 8 and 15), Hinkle teaches a processor comprising one or more circuits to:
generate a plurality of grasp poses that allow a robot to grasp an object (See at least: Figs. 9A-9E; Col. 25 lines 37-50 via “FIG. 9B illustrates actor 700 initiating hand-over of cup 702 by moving cup 702 from the position illustrated in FIG. 9A, indicated by the arm in dashed lines, to a position closer to gripper 710, as indicated by line 914. Moving cup 702 close to gripper 710 decreases the distance between the palm of gripper 710 and cup 702 (i.e., brings positions 904 and 906 closer together), thereby moving position 906 to the left of second threshold position 902. The decreased distance between the palm and cup 702 may be determined by robotic device 701 by repeatedly scanning cup 702 using the depth sensor disposed within the palm, as indicated by field of view 912. With the distance between the palm and cup 702 now being less than the second threshold distance, arm 708 may proceed to move gripper 710 closer to cup 702.”); and select, based at least in part on a pose of a hand, a target grasp pose from the plurality of grasp poses that does not interfere with the hand (See at least: Fig. 9D-9E).
Regarding claim 3, Hinkle teaches wherein the one or more circuits are to: use one or more images to perform classification of the hand grasping the object and to determine a position of the hand; and generate the plurality of grasp poses based, at least in part, on the classification and the position of the hand (See at least: Figs. 9A-9E; Note: Hinkle’s robot distinguishes between the cup and the hand, so teaches classification according to broadest reasonable interpretation.).
Regarding claim 5, Hinkle teaches wherein the one or more circuits are to: generate a motion plan that comprises information to cause the robot to avoid contact between the robot and the hand; and use the motion plan and the target grasp pose to grasp the object (See at least: Figs. 9A-9E).
Regarding claim 6, Hinkle teaches wherein the one or more circuits are to: use the selected target grasp pose to grasp the object; and generate a motion plan to cause the robot to drop the object, wherein the motion plan includes information to cause the robot to move to a position to drop the object that avoids a collision (See at least: Figs. 12A-12C).
Regarding claim 7, Hinkle teaches wherein the one or more circuits are to use one or more neural networks to generate the plurality of grasp poses, wherein the one or more neural networks are trained by one or more images comprising one or more hands gripping the object (See at least: Col. 23 lines 6-11 via “Robotic device 701 may, for example, use computer vision, machine learning, or artificial intelligence algorithms (e.g., artificial neural networks) to identify and classify objects within the sensor data. Robotic device 701 may thus identify cup 702 and the shoes worn by actor 700 as candidate objects for hand-over from actor 700 to robotic device 701.”; Col. 34 lines 6-16 via “Additionally, in some implementations, robotic device 701 may also detect hand 1200 underneath cup 702 by determining that the shape of the grayscale pixel pattern in area 1310 matches at least one of a predetermined number of patterns corresponding to a hand. Further, in some implementations, the task or recognizing hand 1200 in image 1304 or 1306 may be performed by one or more machine learning algorithms. For example, hand 1200 may be detected by an artificial neural network trained to detect hands using images captured from the perspective of the palm of gripper 710.”).
Regarding claim 9, Hinkle teaches wherein the one or more processors are to: use one or more images to generate a human grasp data set; and train one or more neural networks using the generated human grasp data set to estimate the pose of the hand (See at least: Col. 23 lines 6-11 via “Robotic device 701 may, for example, use computer vision, machine learning, or artificial intelligence algorithms (e.g., artificial neural networks) to identify and classify objects within the sensor data. Robotic device 701 may thus identify cup 702 and the shoes worn by actor 700 as candidate objects for hand-over from actor 700 to robotic device 701.”; Col. 34 lines 6-16 via “Additionally, in some implementations, robotic device 701 may also detect hand 1200 underneath cup 702 by determining that the shape of the grayscale pixel pattern in area 1310 matches at least one of a predetermined number of patterns corresponding to a hand. Further, in some implementations, the task or recognizing hand 1200 in image 1304 or 1306 may be performed by one or more machine learning algorithms. For example, hand 1200 may be detected by an artificial neural network trained to detect hands using images captured from the perspective of the palm of gripper 710.”).
Regarding claim 10, Hinkle teaches wherein the one or more processors are to: use one or more neural networks to classify the pose of the hand grasping the object; and generate a plan for the robot to grasp the object from the hand based, at least in part, on the classification (See at least: Col. 23 lines 6-11 via “Robotic device 701 may, for example, use computer vision, machine learning, or artificial intelligence algorithms (e.g., artificial neural networks) to identify and classify objects within the sensor data. Robotic device 701 may thus identify cup 702 and the shoes worn by actor 700 as candidate objects for hand-over from actor 700 to robotic device 701.”; Col. 34 lines 6-16 via “Additionally, in some implementations, robotic device 701 may also detect hand 1200 underneath cup 702 by determining that the shape of the grayscale pixel pattern in area 1310 matches at least one of a predetermined number of patterns corresponding to a hand. Further, in some implementations, the task or recognizing hand 1200 in image 1304 or 1306 may be performed by one or more machine learning algorithms. For example, hand 1200 may be detected by an artificial neural network trained to detect hands using images captured from the perspective of the palm of gripper 710.”).
Regarding claim 11, Hinkle teaches wherein the interference comprises the robot being in contact with the hand (See at least: Figs. 9A-9E and 12A-12C).
Regarding claim 12, Hinkle teaches wherein the one or more processors are to adjust the target grasp pose based, at least in part, on a motion of the hand (See at least: Figs. 9A-9E; Col. 25 lines 37-50 via “FIG. 9B illustrates actor 700 initiating hand-over of cup 702 by moving cup 702 from the position illustrated in FIG. 9A, indicated by the arm in dashed lines, to a position closer to gripper 710, as indicated by line 914. Moving cup 702 close to gripper 710 decreases the distance between the palm of gripper 710 and cup 702 (i.e., brings positions 904 and 906 closer together), thereby moving position 906 to the left of second threshold position 902. The decreased distance between the palm and cup 702 may be determined by robotic device 701 by repeatedly scanning cup 702 using the depth sensor disposed within the palm, as indicated by field of view 912. With the distance between the palm and cup 702 now being less than the second threshold distance, arm 708 may proceed to move gripper 710 closer to cup 702.”).
Regarding claim 14, Hinkle teaches wherein the one or more processors are to move the robot to a position to avoid colliding with the hand if a distance between the robot and hand exceeds a threshold (See at least: Col. 8 lines 49-67 via “Additionally, in some implementations, the speed with which the robot's arm advances towards the object may be dependent on the distance between the object and the palm of the gripper. When the gripper is far away from the object, the gripper may move quickly, but may gradually slow down as it approaches the object. Further, the speed trajectory with which the gripper moves towards the object may depend on the second threshold distance, which may be modifiable based on actor preferences. For example, when the second threshold distance is large, the gripper may initially move with a high speed to quickly traverse the larger initial distance between the palm and the actor. The speed of the gripper may nevertheless decrease as the gripper approaches the object. When the second threshold distance is small, the gripper may initially move with a lower speed since it already close to the object. The speed trajectory may be configurable based on actor preferences to generate movement that is not so slow so as to annoy actors and not so fast so as to startle actors.”; Col. 9 lines 1-8 via “When the palm of the gripper moves to within the first threshold distance, the gripper may close around the object to grasp the object. The gripper may be positioned along the object to avoid contact with the actor's hand or fingers. Additionally, in some implementations, the gripper may be designed to come apart when the actor impacts the gripper with sufficient force along certain directions to thereby prevent any injury to the actor.”).
Regarding claim 16, Hinkle teaches wherein the instructions, if performed by one or more processors, cause the one or more processors to adjust the motion of the robot to avoid colliding with the hand after grasping the object from the hand (See at least: Figs. 9A-9E and 12A-12C; Note: After grasping the cup from the person, the robot can place the cup back in the person’s hand without a collision.).
Regarding claim 17, Hinkle teaches wherein the object is being held in an open palm of the hand or held by two or more fingers of the hand (See at least: Figs. 9A-9E).
Regarding claim 18, Hinkle teaches wherein the instructions, if performed by one or more processors, cause the one or more processors to generate the plurality of grasp poses by one or more neural networks, wherein the one or more neural networks are trained by one or more segmented images comprising the hand gripping the object (See at least: Col. 23 lines 6-11 via “Robotic device 701 may, for example, use computer vision, machine learning, or artificial intelligence algorithms (e.g., artificial neural networks) to identify and classify objects within the sensor data. Robotic device 701 may thus identify cup 702 and the shoes worn by actor 700 as candidate objects for hand-over from actor 700 to robotic device 701.”; Col. 34 lines 6-16 via “Additionally, in some implementations, robotic device 701 may also detect hand 1200 underneath cup 702 by determining that the shape of the grayscale pixel pattern in area 1310 matches at least one of a predetermined number of patterns corresponding to a hand. Further, in some implementations, the task or recognizing hand 1200 in image 1304 or 1306 may be performed by one or more machine learning algorithms. For example, hand 1200 may be detected by an artificial neural network trained to detect hands using images captured from the perspective of the palm of gripper 710.”).
Regarding claim 19, Hinkle teaches wherein the instructions, if performed by one or more processors, cause the one or more processors to: calculate one or more distances between the robot and the hand; and use the calculated one or more distances to cause the robot to move to a position to avoid a collision with the hand (See at least: Col. 8 lines 49-67 via “Additionally, in some implementations, the speed with which the robot's arm advances towards the object may be dependent on the distance between the object and the palm of the gripper. When the gripper is far away from the object, the gripper may move quickly, but may gradually slow down as it approaches the object. Further, the speed trajectory with which the gripper moves towards the object may depend on the second threshold distance, which may be modifiable based on actor preferences. For example, when the second threshold distance is large, the gripper may initially move with a high speed to quickly traverse the larger initial distance between the palm and the actor. The speed of the gripper may nevertheless decrease as the gripper approaches the object. When the second threshold distance is small, the gripper may initially move with a lower speed since it already close to the object. The speed trajectory may be configurable based on actor preferences to generate movement that is not so slow so as to annoy actors and not so fast so as to startle actors.”; Col. 9 lines 1-8 via “When the palm of the gripper moves to within the first threshold distance, the gripper may close around the object to grasp the object. The gripper may be positioned along the object to avoid contact with the actor's hand or fingers. Additionally, in some implementations, the gripper may be designed to come apart when the actor impacts the gripper with sufficient force along certain directions to thereby prevent any injury to the actor.”).
Regarding claim 20, Hinkle teaches wherein the instructions, if performed by one or more processors, cause the one or more processors to cause the robot to use the selected target grasp pose to grasp the object from an appendage, wherein the appendage comprises a body part of a human, an animal, or another robot (See at least: Figs. 9A-9E).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Hinkle in view of Handa et al. (US 20210122045 hereinafter Handa).
Regarding claim 2, Hinkle fails to teach the following limitation but Handa teaches wherein the one or more circuits are to: obtain an image comprising the hand gripping the object; segment the image to identify a portion of the image that represents the pose of the hand and a second portion of the image that represents a pose of the object; and generate the plurality of grasp poses based, at least in part, on the segmented image (See at least: “…estimation. Tactile perception can identify object properties such as materials and pose, as well as provide feedback during object manipulation.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Hinkle in view of Handa to teach wherein the one or more circuits are to: obtain an image comprising the hand gripping the object; segment the image to identify a portion of the image that represents the pose of the hand and a second portion of the image that represents a pose of the object; and generate the plurality of grasp poses based, at least in part, on the segmented image so that the robot can more accurately define the hand and object to successfully grab the object without colliding with the hand.
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Hinkle in view of Shafer (US Patent 11331799 hereinafter Shafer).
Regarding claim 4, Hinkle fails to teach the following limitation but Shafer teaches wherein the one or more circuits are to: use one or more images to identify the pose of the hand gripping the object; generate the plurality of grasp poses based, at least in part, on the pose of the hand gripping the object; use the selected target grasp pose from the plurality of grasp poses to adjust a position of the robot to grasp the object; and cause the robot to use the target grasp pose that does not interfere with the hand to grasp the object (See at least: “…initial grasp pose can be selected, or one can be randomly (truly random or pseudo-random) selected. Also, for instance the one with the best predicted grasp success measure can be selected. A predicted grasp success measure for each candidate grasp pose can be generated based on processing the candidate grasp pose, and a corresponding instance of end effector vision data (or visual features determined based thereon), using a machine learning model trained as described herein. As yet another example, multiple candidate grasp poses can be determined, and the final grasp pose determined as a function of the multiple grasp poses.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Hinkle in view of Shafer to teach wherein the one or more circuits are to: use one or more images to identify the pose of the hand gripping the object; generate the plurality of grasp poses based, at least in part, on the pose of the hand gripping the object; use the selected target grasp pose from the plurality of grasp poses to adjust a position of the robot to grasp the object; and cause the robot to use the target grasp pose that does not interfere with the hand to grasp the object so that the best available grasp pose to grasp the object without interfering with the hand can be used to increase the chance of a successful grasp.
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Hinkle in view of Furuya (US Publication 20130218324 hereinafter Furuya).
Regarding claim 13, Hinkle fails to teach the following limitation, but Furuya teaches wherein the one or more processors are to use the target grasp pose to cause the robot to grasp the object from a hand of another robot (See at least: [0033] via “In the next step S23, an image of first article 30 is captured by second camera 40 so that the position and attitude of first article 30 are detected. Then, second robot hand 32 approaches first article 30 to be assembled, the hand is opened if robot hand 32 is a gripping-type hand (step S24), and then first article 30 is taken out by second robot hand 32 (step S25).”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Hinkle in view of Furuya to teach wherein the one or more processors are to use the target grasp pose to cause the robot to grasp the object from a hand of another robot so that the robot can be used to identify robot hands and grasp objects from them in addition to human hands to be scalable in completing more tasks.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Harry Oh whose telephone number is (571)270-5912. The examiner can normally be reached on Monday-Thursday, 9:00-3:00.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Abby Lin can be reached on (571) 270-3976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HARRY Y OH/Primary Examiner, Art Unit 3657