DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 as originally filed are pending and have been considered as follows.
Priority
1. Applicant's claim of priority to U.S. Provisional Application No. 63/533,378, filed on 08/18/2023, is acknowledged.
Information Disclosure Statement
2. The information disclosure statement (IDS) filed on 12/16/2024 has been considered by the examiner.
Claim Interpretation
3. The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
4. The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
5. This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:
“user interaction device” in claims 1, 3, 4, 15, 16, and 17.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
A review of the specification shows the following appears to be the corresponding structure described in the specification for the 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph limitations:
“user interaction device” in claims 1, 3, 4, 15, 16, and 17 corresponds to “user interaction device presents visual and audio feedback to the user and accepts user input and feedback for human-in-the-loop grasp synthesis and execution. User input may include input from a touch display, augmented reality glasses, body-worn sensor networks (for example, and not by way of limitation, augmented reality or virtual reality gloves), speakers/microphones, body-worn cameras, computer mouse and keyboard, joysticks, etc” [0017] and Fig. 1.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112
6. The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
7. Claims 8-14 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Regarding claim 8, it is unclear what is meant by the term “high-level.” It is unclear what constitutes a “high-level” path planner. For examination purposes, the examiner interprets “high-level path planner” as any path planner.
In the prior art rejections below, the claims have been treated as best understood by the examiner. Any claim not specifically addressed under this heading is rejected based on its dependency from an indefinite claim.
Claim Rejections - 35 USC § 102
8. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
9. Claims 1-3 and 6-7 are rejected under 35 U.S.C. 102(a)(2)/(a)(1) as being anticipated by Barry et al. (US 20220193894, hereinafter Barry).
Regarding claim 1, Barry teaches a system for grasp synthesis of non-occluded and occluded objects with a camera-equipped robot manipulator (see at least Fig. 1A and abstract: “The method includes receiving a selection input indicating a user-selection of a target object represented in an image corresponding to the space. The target object is for grasping by an end-effector of a robotic manipulator of the robot.”; [0024]: “The sensors 132 may include vision/image sensors, inertial sensors (e.g., an inertial measurement unit (IMU)), force sensors, and/or kinematic sensors. Some examples of sensors 132 include a camera such as a stereo camera, a time-of-flight (TOF) sensor, a scanning light-detection and ranging (LIDAR) sensor, or a scanning laser-detection and ranging (LADAR) sensor.”), said system comprising:
a robot manipulator with a gripper and gripper camera (see at least Fig. 1A and [0024]: “In order to maneuver about the environment 10 or to perform tasks using the arm 126, the robot 100 includes a sensor system 130 with one or more sensors 132, 132a-n (e.g., shown as a first sensor 132, 132a and a second sensor 132, 132b). The sensors 132 may include vision/image sensors, inertial sensors (e.g., an inertial measurement unit (IMU)), force sensors, and/or kinematic sensors. Some examples of sensors 132 include a camera such as a stereo camera, a time-of-flight (TOF) sensor, a scanning light-detection and ranging (LIDAR) sensor, or a scanning laser-detection and ranging (LADAR) sensor.”), wherein said robot manipulator is configured to execute a grasp of a target object (see at least [0022]: “In some examples, such as FIG. 1A, the hand member 128.sub.H or end-effector 150 is a mechanical gripper that includes a moveable jaw and a fixed jaw configured to perform different types of grasping of elements within the environment 10.”; [0035]: “In some implementations, the target object selected by the user corresponds to a respective object for an end-effector 150 of a robotic manipulator of the robot 100 to grasp.”);
a user interaction device configured to present visual and audio feedback to a user and accept user input and feedback (see at least Fig. 1B and [0033]: “A user 12 may interact with the robot 100 via the remote controller 20 that communicates with the robot 100 to perform actions. Additionally, the robot 100 may communicate with the remote controller 20 to display an image on a user interface 300 (e.g., UI 300) of the remote controller 20.”; [0034]: “The image displayed on the UI 300 may include one or more objects that are present in the environment 10 (e.g., within a field of view F.sub.V for a sensor 132 of the robot 100)…The UI 300 allows the user 12 to select an object displayed in the two-dimensional image as a target object in order to instruct the robot 100 to perform an action upon the selected target object in the three-dimensional environment 10.”; [0067]: “To provide for interaction with a user, one or more aspects of the disclosure can be implemented on a computer having a display device, e.g., a CRT (cathode ray tube), LCD (liquid crystal display) monitor, or touch screen for displaying information to the user and optionally a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer. Other kinds of devices can be used to provide interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input.”);
at least one processor in communication with said robot manipulator and said user interaction device (see at least Figs. 1B, 4, and [0030]: “In some implementations, as shown in FIGS. 1A and 1B, the robot 100 includes a control system 170. The control system 170 may be configured to communicate with systems of the robot 100, such as the at least one sensor system 130. The control system 170 may perform operations and other functions using hardware 140.”; [0033]: “A user 12 may interact with the robot 100 via the remote controller 20 that communicates with the robot 100 to perform actions.”; [0059]: “The computing device 400 includes a processor 410 (e.g., data processing hardware), memory 420 (e.g., memory hardware), a storage device 430, a high-speed interface/controller 440 connecting to the memory 420 and high-speed expansion ports 450, and a low speed interface/controller 460 connecting to a low speed bus 470 and a storage device 430.”); and
at least one memory in communication with said at least one processor, configured to receive and store data from said robot manipulator and said user interaction device (see at least Figs. 1B, 4, and [0059]: “The computing device 400 includes a processor 410 (e.g., data processing hardware), memory 420 (e.g., memory hardware), a storage device 430, a high-speed interface/controller 440 connecting to the memory 420 and high-speed expansion ports 450, and a low speed interface/controller 460 connecting to a low speed bus 470 and a storage device 430….The processor 410 can process instructions for execution within the computing device 400, including instructions stored in the memory 420 or on the storage device 430 to display graphical information for a graphical user interface (GUI) on an external input/output device, such as display 480 coupled to high speed interface 440.”; [0060]: “The memory 420 stores information non-transitorily within the computing device 400. The memory 420 may be a computer-readable medium, a volatile memory unit(s), or non-volatile memory unit(s). The non-transitory memory 420 may be physical devices used to store programs (e.g., sequences of instructions) or data (e.g., program state information) on a temporary or permanent basis for use by the computing device 400.”).
Regarding claim 2, Barry teaches the limitations of claim 1. Barry further teaches wherein said robot manipulator is a robotic arm comprising a plurality of arm base actuators, an arm first link, an arm second link connected to said arm first link via an elbow joint, wrist actuators (see at least Fig. 1A and [0022]: “To illustrate an example, FIG. 1A depicts the arm 126 with three members 128 corresponding to a lower member 128.sub.L, an upper member 128.sub.U, and a hand member 128.sub.H (e.g., shown as an end-effector 150). Here, the lower member 128.sub.L may rotate or pivot about a first arm joint J.sub.A1 located adjacent to the body 110 (e.g., where the arm 126 connects to the body 110 of the robot 100). The lower member 128.sub.L is coupled to the upper member 128.sub.U at a second arm joint J.sub.A2 and the upper member 128.sub.U is coupled to the hand member 128.sub.H at a third arm joint J.sub.A3. In some examples, such as FIG. 1A, the hand member 128.sub.H or end-effector 150 is a mechanical gripper that includes a moveable jaw and a fixed jaw configured to perform different types of grasping of elements within the environment 10….In other words, the fourth joint J.sub.A4 may function as a twist joint similarly to the third joint J.sub.A3 or wrist joint of the arm 126 adjacent the hand member 128.sub.H. For instance, as a twist joint, one member coupled at the joint J may move or rotate relative to another member coupled at the joint J (e.g., a first member coupled at the twist joint is fixed while the second member coupled at the twist joint rotates). In some implementations, the arm 126 connects to the robot 100 at a socket on the body 110 of the robot 100.”), and at least one gripper including gripper jaws, a gripper camera (see at least Fig. 1A and [0022]: “In some examples, such as FIG. 1A, the hand member 128.sub.H or end-effector 150 is a mechanical gripper that includes a moveable jaw and a fixed jaw configured to perform different types of grasping of elements within the environment 10. The moveable jaw is configured to move relative to the fixed jaw in order to move between an open position for the gripper and a closed position for the gripper (e.g., closed around an object).”; [0024]: “In order to maneuver about the environment 10 or to perform tasks using the arm 126, the robot 100 includes a sensor system 130 with one or more sensors 132, 132a-n (e.g., shown as a first sensor 132, 132a and a second sensor 132, 132b). The sensors 132 may include vision/image sensors, inertial sensors (e.g., an inertial measurement unit (IMU)), force sensors, and/or kinematic sensors. Some examples of sensors 132 include a camera such as a stereo camera, a time-of-flight (TOF) sensor, a scanning light-detection and ranging (LIDAR) sensor, or a scanning laser-detection and ranging (LADAR) sensor.”).
Regarding claim 3, Barry teaches the limitations of claim 1. Barry further teaches wherein said user interaction device comprises a touch display (see at least Fig. 1B and [0034]: “The UI 300 allows the user 12 to select an object displayed in the two-dimensional image as a target object in order to instruct the robot 100 to perform an action upon the selected target object in the three-dimensional environment 10.”; [0053]: “The grasp geometry generator 210 receives the user-selected target object from the UI 300 and sensor data 134 (e.g., three-dimensional point cloud). The user 12 selects the target object on the two-dimensional image on the UI 300 that corresponds to the three-dimensional point cloud of data 134 for the field of view F.sub.V of the robot 100.”; [0067]: “To provide for interaction with a user, one or more aspects of the disclosure can be implemented on a computer having a display device, e.g., a CRT (cathode ray tube), LCD (liquid crystal display) monitor, or touch screen for displaying information to the user and optionally a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer.”).
Regarding claim 6, Barry teaches the limitations of claim 1. Barry further teaches wherein the robot manipulator is mounted on a mobile base (see at least Fig. 1A and [0020]: “Referring to FIGS. 1A and 1B, the robot 100 includes a body 110 with locomotion based structures such as legs 120a-d coupled to the body 110 that enable the robot 100 to move about the environment 10. In some examples, each leg 120 is an articulable structure such that one or more joints J permit members 122 of the leg 120 to move.”).
Regarding claim 7, Barry teaches the limitations of claim 6. Barry further teaches wherein said mobile base is a legged robot (see at least Fig. 1A and [0020]: “Referring to FIGS. 1A and 1B, the robot 100 includes a body 110 with locomotion based structures such as legs 120a-d coupled to the body 110 that enable the robot 100 to move about the environment 10. In some examples, each leg 120 is an articulable structure such that one or more joints J permit members 122 of the leg 120 to move.”).
Claim Rejections - 35 USC § 103
10. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
11. Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Barry et al. (US 20220193894, hereinafter Barry) in view of Hoffman et al. (US 20170217021, hereinafter Hoffman).
Regarding claim 4, Barry teaches the limitations of claim 1.
Barry fails to explicitly teach wherein said user interaction device comprises at least one joystick.
However, Hoffman teaches a method and apparatus for remotely operating a mobile robot with a user interaction device that comprises at least one joystick (see at least Figs. 1, 4A, and [0076]: “Referring to FIGS. 4A-4M, the OCU 100 may include a display (e.g., LCD or touch screen) 110, a keyboard, and one or more auxiliary user inputs, such as a joystick or gaming unit in communication with the computing device 102. As shown, the OCU 100 is a touch screen tablet. The OCU 100 provides a user interface of the teleoperation software application 101 that is rendered on the display 110 of the OCU 100 and allows an operator or user 10 to control the robot 200 from a distance.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Barry to incorporate the teachings of Hoffman and provide a user interaction device that comprises at least one joystick, with a reasonable expectation of success, in order to provide an alternative means of controlling the robot.
12. Claims 5, 15, 16, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Barry et al. (US 20220193894, hereinafter Barry) in view of Ku et al. (US 20220016767, hereinafter Ku).
Regarding claim 5, Barry teaches the limitations of claim 1. Barry further teaches wherein said robot manipulator is further configured to clear a plurality of objects (see at least [0022]: “In the examples shown, the robot 100 includes an arm 126 that functions as a robotic manipulator. The arm 126 may be configured to move about multiple degrees of freedom in order to engage elements of the environment 10 (e.g., objects within the environment 10).”).
Barry fails to explicitly teach clearing a plurality of objects occluding said target object.
However, Ku teaches a method and system for object grasping with a robot that comprises a robot manipulator to clear a plurality of objects occluding a target object (see at least Figs. 2, 8, and [0025]: “First, variants of the system and method enable object grasping from a bin of objects, wherein the objects can be overlapping with other objects and in any random pose. Such variants can improve the grasp success rate for highly occluded objects and/or object scenes (e.g., an example is shown in FIG. 8) by avoiding difficult-to-grasp features of an object and/or by avoiding overlapping objects.”; [0029]: “The method is preferably performed using a system, an example of which is shown in FIG. 2, including: an end effector 110, a robot arm 120, a sensor suite 130, a computing system 140, and/or any other suitable components. The system functions to enable selection of a candidate grasp location and/or articulate the robot arm to grasp an object 104 associated with the grasp location.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Barry to incorporate the teachings of Ku and provide a means to clear a plurality of objects occluding a target object, with a reasonable expectation of success, in order to clear a path for the robot to access the target object.
Regarding claim 15, Barry teaches a system for grasp synthesis of non-occluded and occluded objects with a camera-equipped robot manipulator (see at least Fig. 1A and abstract: “The method includes receiving a selection input indicating a user-selection of a target object represented in an image corresponding to the space. The target object is for grasping by an end-effector of a robotic manipulator of the robot.”; [0024]: “The sensors 132 may include vision/image sensors, inertial sensors (e.g., an inertial measurement unit (IMU)), force sensors, and/or kinematic sensors. Some examples of sensors 132 include a camera such as a stereo camera, a time-of-flight (TOF) sensor, a scanning light-detection and ranging (LIDAR) sensor, or a scanning laser-detection and ranging (LADAR) sensor.”), said system comprising:
a robotic arm comprising a plurality of arm base actuators, an arm first link, an arm second link connected to said arm first link via an elbow joint, wrist actuators (see at least Fig. 1A and [0022]: “To illustrate an example, FIG. 1A depicts the arm 126 with three members 128 corresponding to a lower member 128.sub.L, an upper member 128.sub.U, and a hand member 128.sub.H (e.g., shown as an end-effector 150). Here, the lower member 128.sub.L may rotate or pivot about a first arm joint J.sub.A1 located adjacent to the body 110 (e.g., where the arm 126 connects to the body 110 of the robot 100). The lower member 128.sub.L is coupled to the upper member 128.sub.U at a second arm joint J.sub.A2 and the upper member 128.sub.U is coupled to the hand member 128.sub.H at a third arm joint J.sub.A3. In some examples, such as FIG. 1A, the hand member 128.sub.H or end-effector 150 is a mechanical gripper that includes a moveable jaw and a fixed jaw configured to perform different types of grasping of elements within the environment 10….In other words, the fourth joint J.sub.A4 may function as a twist joint similarly to the third joint J.sub.A3 or wrist joint of the arm 126 adjacent the hand member 128.sub.H. For instance, as a twist joint, one member coupled at the joint J may move or rotate relative to another member coupled at the joint J (e.g., a first member coupled at the twist joint is fixed while the second member coupled at the twist joint rotates). In some implementations, the arm 126 connects to the robot 100 at a socket on the body 110 of the robot 100.”), and at least one gripper including gripper jaws, a gripper camera (see at least Fig. 1A and [0022]: “In some examples, such as FIG. 1A, the hand member 128.sub.H or end-effector 150 is a mechanical gripper that includes a moveable jaw and a fixed jaw configured to perform different types of grasping of elements within the environment 10. The moveable jaw is configured to move relative to the fixed jaw in order to move between an open position for the gripper and a closed position for the gripper (e.g., closed around an object).”; [0024]: “In order to maneuver about the environment 10 or to perform tasks using the arm 126, the robot 100 includes a sensor system 130 with one or more sensors 132, 132a-n (e.g., shown as a first sensor 132, 132a and a second sensor 132, 132b). The sensors 132 may include vision/image sensors, inertial sensors (e.g., an inertial measurement unit (IMU)), force sensors, and/or kinematic sensors. Some examples of sensors 132 include a camera such as a stereo camera, a time-of-flight (TOF) sensor, a scanning light-detection and ranging (LIDAR) sensor, or a scanning laser-detection and ranging (LADAR) sensor.”), wherein said robotic arm is configured to execute a grasp of a target object, and wherein said grasp of said target object is determined via a method for grasp synthesis (see at least [0022]: “In some examples, such as FIG. 1A, the hand member 128.sub.H or end-effector 150 is a mechanical gripper that includes a moveable jaw and a fixed jaw configured to perform different types of grasping of elements within the environment 10.”; [0035]: “In some implementations, the target object selected by the user corresponds to a respective object for an end-effector 150 of a robotic manipulator of the robot 100 to grasp.”), wherein said method comprises:
estimating surface of a target object (see at least [0032]: “Referring now to FIG. 1B, the sensor system 130 of the robot 100 generates a three-dimensional point cloud of sensor data 134 for an area or space or volume within the environment 10 about the robot 100. Although referred to as a three-dimensional point cloud of sensor data 134, it should be understood that the sensor data 134 may represent a three-dimensional portion of the environment 10 or a two-dimensional portion (such as a surface or plane) of the environment 10. In other words, the sensor data 134 may be a three-dimensional point cloud or a two-dimensional collection of points.”);
identifying an appropriate grasp proposal method (see at least Figs. 2A, 2B, and [0040]: “In some implementations, after the robot 100 begins to execute the initial grasp geometry 212I, the grasping system 200 may determine a new grasp geometry 212N to grasp the target object. The grasping system 200 may determine, after the robot 100 begins execution of the initial grasp geometry 212I, a new grasp geometry 212N that improves and/or refines the grasp geometry 212 being executed. Here, an improvement or refinement in the grasp geometry 212 may correspond to a grasp geometry 212 that is more efficient (e.g., a more cost effective grasp in terms of energy or motion), has a higher likelihood of success, has a more optimal execution time (e.g., faster or slower), etc., to grasp the target object when compared to the initial grasp geometry 212I.”) and grasp scoring method (see at least Figs. 2A, 2B, and [0045]: “Referring now to FIG. 2B, in some implementations, the grasp geometry generator 210 generates a plurality of candidate grasp geometries 212, 212a-n based on the selected target object within the grasp area. In particular, the grasp geometry generator 210 generates multiple candidate grasp geometries 212 and the grasping system 200 determines which of the multiple candidate grasp geometries 212 the robot 100 should use to grasp the target object. In these implementations, the grasping system 200 includes a scorer 240 that assigns a grasping score 242 to each of the plurality of candidate grasp geometries 212. The grasping score 242 indicates an estimated or projected likelihood of success that the candidate grasp geometry 212 will successfully grasp the target object.”);
proposing a series of grasp candidates according to a selected grasp proposal method (see at least Figs. 2A, 2B, and [0040]: “In some implementations, after the robot 100 begins to execute the initial grasp geometry 212I, the grasping system 200 may determine a new grasp geometry 212N to grasp the target object. The grasping system 200 may determine, after the robot 100 begins execution of the initial grasp geometry 212I, a new grasp geometry 212N that improves and/or refines the grasp geometry 212 being executed. Here, an improvement or refinement in the grasp geometry 212 may correspond to a grasp geometry 212 that is more efficient (e.g., a more cost effective grasp in terms of energy or motion), has a higher likelihood of success, has a more optimal execution time (e.g., faster or slower), etc., to grasp the target object when compared to the initial grasp geometry 212I.”; [0044]: “In some examples, the adjuster 230 determines to continue execution of the initial grasp geometry 212I. In other examples, the adjuster 230 determines to modify the initial grasp geometry 212I to generate a modified grasp geometry 212M. That is, after receiving the updated sensor data 134U, the adjuster 230 compares the initial grasp geometry 212I and the new candidate grasp geometry 212N and determines that it should modify the initial grasp geometry 212I.”; [0045]: “Referring now to FIG. 2B, in some implementations, the grasp geometry generator 210 generates a plurality of candidate grasp geometries 212, 212a-n based on the selected target object within the grasp area.”);
assigning a quality score to each of said grasp candidates proposed according to a selected grasp scoring method (see at least Figs. 2A, 2B, and [0045]: “In particular, the grasp geometry generator 210 generates multiple candidate grasp geometries 212 and the grasping system 200 determines which of the multiple candidate grasp geometries 212 the robot 100 should use to grasp the target object. In these implementations, the grasping system 200 includes a scorer 240 that assigns a grasping score 242 to each of the plurality of candidate grasp geometries 212. The grasping score 242 indicates an estimated or projected likelihood of success that the candidate grasp geometry 212 will successfully grasp the target object.”; [0047]: “The selector 220 is configured to select the respective candidate grasp geometry 212 with a greatest grasping score 242 as a grasp geometry 212 for the robot 100 to use to grasp the target object. The grasping score 242 may be generated by a scoring algorithm that accounts for different factors that identify an overall performance for a given grasping geometry 212…As an example, the selector 220 receives three candidate grasp geometries 212 that include grasping scores 242 of 0.6, 0.4, and 0.8. In this example, the selector 220 determines the candidate grasp geometry 212 with the grasping score 0.8 has the highest likelihood to successfully grasp the target object. The selector 220 sends the selected candidate grasp geometry 212 (e.g., initial grasp geometry 212I) from the plurality of candidate grasp geometries 212 to the control system 170. The control system 170 instructs the robot 100 to execute the candidate grasp geometry 212 with the grasping score 242 of 0.8 as initial grasp geometry 212I.”); and
applying a post-processing and filtering method to said grasp candidates (see at least [0047]: “The selector 220 is configured to select the respective candidate grasp geometry 212 with a greatest grasping score 242 as a grasp geometry 212 for the robot 100 to use to grasp the target object. The grasping score 242 may be generated by a scoring algorithm that accounts for different factors that identify an overall performance for a given grasping geometry 212…As an example, the selector 220 receives three candidate grasp geometries 212 that include grasping scores 242 of 0.6, 0.4, and 0.8. In this example, the selector 220 determines the candidate grasp geometry 212 with the grasping score 0.8 has the highest likelihood to successfully grasp the target object. The selector 220 sends the selected candidate grasp geometry 212 (e.g., initial grasp geometry 212I) from the plurality of candidate grasp geometries 212 to the control system 170. The control system 170 instructs the robot 100 to execute the candidate grasp geometry 212 with the grasping score 242 of 0.8 as initial grasp geometry 212I.”);
a user interaction device configured to present visual and audio feedback to a user and accept user input and feedback (see at least Fig. 1B and [0033]: “A user 12 may interact with the robot 100 via the remote controller 20 that communicates with the robot 100 to perform actions. Additionally, the robot 100 may communicate with the remote controller 20 to display an image on a user interface 300 (e.g., UI 300) of the remote controller 20.”; [0034]: “The image displayed on the UI 300 may include one or more objects that are present in the environment 10 (e.g., within a field of view F.sub.V for a sensor 132 of the robot 100)…The UI 300 allows the user 12 to select an object displayed in the two-dimensional image as a target object in order to instruct the robot 100 to perform an action upon the selected target object in the three-dimensional environment 10.”; [0067]: “To provide for interaction with a user, one or more aspects of the disclosure can be implemented on a computer having a display device, e.g., a CRT (cathode ray tube), LCD (liquid crystal display) monitor, or touch screen for displaying information to the user and optionally a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer. Other kinds of devices can be used to provide interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input.”);
at least one processor in communication with said robot manipulator and said user interaction device (see at least Figs. 1B, 4, and [0030]: “In some implementations, as shown in FIGS. 1A and 1B, the robot 100 includes a control system 170. The control system 170 may be configured to communicate with systems of the robot 100, such as the at least one sensor system 130. The control system 170 may perform operations and other functions using hardware 140.”; [0033]: “A user 12 may interact with the robot 100 via the remote controller 20 that communicates with the robot 100 to perform actions.”; [0059]: “The computing device 400 includes a processor 410 (e.g., data processing hardware), memory 420 (e.g., memory hardware), a storage device 430, a high-speed interface/controller 440 connecting to the memory 420 and high-speed expansion ports 450, and a low speed interface/controller 460 connecting to a low speed bus 470 and a storage device 430.”); and
at least one memory in communication with said at least one processor, configured to receive and store data from said robot manipulator and said user interaction device (see at least Figs. 1B, 4, and [0059]: “The computing device 400 includes a processor 410 (e.g., data processing hardware), memory 420 (e.g., memory hardware), a storage device 430, a high-speed interface/controller 440 connecting to the memory 420 and high-speed expansion ports 450, and a low speed interface/controller 460 connecting to a low speed bus 470 and a storage device 430….The processor 410 can process instructions for execution within the computing device 400, including instructions stored in the memory 420 or on the storage device 430 to display graphical information for a graphical user interface (GUI) on an external input/output device, such as display 480 coupled to high speed interface 440.”; [0060]: “The memory 420 stores information non-transitorily within the computing device 400. The memory 420 may be a computer-readable medium, a volatile memory unit(s), or non-volatile memory unit(s). The non-transitory memory 420 may be physical devices used to store programs (e.g., sequences of instructions) or data (e.g., program state information) on a temporary or permanent basis for use by the computing device 400.”).
Barry fails to explicitly teach estimating surface normals of a target object.
However, Ku teaches a method and system for object grasping with a robot that comprises a robot manipulator that estimates surface normals of a target object (see at least [0064]: “In a first variant, S100 can include capturing one or more images of the inference scene and/or retrieving one or more images of the inference scene stored during a previous cycle of the method; optionally determining a 3D point cloud and optionally surface normals for each point of the point cloud based on the one or more images and/or contemporaneously sampled depth information; processing the one or more images, the point cloud, and/or surface normals, using the detector to determine the key points and the occlusion score.”; [0085]: “The graspability score can be determined based on: a grasp probability score, an object (or keypoint) detection score (e.g., detected or not), the occlusion score, the candidate grasp location's corresponding 3D location (e.g., height of the 3D grasp location), proximity to the edge of a scene, proximity to edge of the object, grasp location's associated surface normal, whether the depths within a predetermined radius of the candidate grasp location are within a predetermined range of the grasp location's depth (e.g., the surface planarity), and/or any other suitable parameter.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Barry to incorporate the teachings of Ku and provide a means to estimate surface normals of a target object, with a reasonable expectation of success, in order to consider the surface information of the object when assigning a grasp score (Ku [0085]).
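For context in reviewing the passages cited above, the following is a minimal illustrative sketch, in Python, of the two operations relied upon for claim 15: estimation of a surface normal at a point of a three-dimensional point cloud (cf. Ku [0064], [0085]) and selection of the candidate grasp geometry having the greatest grasping score (cf. Barry [0045], [0047]). The sketch is hypothetical; the function names, data layout, and neighborhood size k are assumptions and are not reproduced from either reference.

    import numpy as np

    def estimate_surface_normal(points, query_index, k=20):
        # Fit a plane to the k nearest neighbors of the query point and take
        # the direction of least variance (smallest singular vector of the
        # centered neighborhood) as the estimated surface normal.
        diffs = points - points[query_index]
        nearest = np.argsort(np.einsum("ij,ij->i", diffs, diffs))[:k]
        centered = points[nearest] - points[nearest].mean(axis=0)
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        normal = vt[-1]
        return normal / np.linalg.norm(normal)

    def select_grasp(candidates):
        # Return the candidate grasp geometry with the greatest grasping
        # score, e.g. the 0.8-scored candidate among scores 0.6, 0.4, 0.8,
        # as in the example described at Barry [0047].
        return max(candidates, key=lambda c: c["score"])

    cloud = np.random.rand(500, 3)                      # stand-in point cloud
    normal = estimate_surface_normal(cloud, query_index=0)
    best = select_grasp([{"id": "A", "score": 0.6},
                         {"id": "B", "score": 0.4},
                         {"id": "C", "score": 0.8}])    # selects candidate "C"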
Regarding claim 16, modified Barry teaches the limitations of claim 15. Barry further teaches wherein said user interaction device comprises a touch display (see at least Fig. 1B and [0034]: “The UI 300 allows the user 12 to select an object displayed in the two-dimensional image as a target object in order to instruct the robot 100 to perform an action upon the selected target object in the three-dimensional environment 10.”; [0053]: “The grasp geometry generator 210 receives the user-selected target object from the UI 300 and sensor data 134 (e.g., three-dimensional point cloud). The user 12 selects the target object on the two-dimensional image on the UI 300 that corresponds to the three-dimensional point cloud of data 134 for the field of view F.sub.V of the robot 100.”; [0067]: “To provide for interaction with a user, one or more aspects of the disclosure can be implemented on a computer having a display device, e.g., a CRT (cathode ray tube), LCD (liquid crystal display) monitor, or touch screen for displaying information to the user and optionally a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer.”).
Regarding claim 18, modified Barry teaches the limitations of claim 15. Barry further teaches wherein said robot manipulator is further configured to clear a plurality of objects (see at least [0022]: “In the examples shown, the robot 100 includes an arm 126 that functions as a robotic manipulator. The arm 126 may be configured to move about multiple degrees of freedom in order to engage elements of the environment 10 (e.g., objects within the environment 10).”).
Barry fails to explicitly teach clearing a plurality of objects occluding said target object.
However, Ku teaches a method and system for object grasping with a robot that comprises a robot manipulator to clear a plurality of objects occluding a target object (see at least Figs. 2, 8, and [0025]: “First, variants of the system and method enable object grasping from a bin of objects, wherein the objects can be overlapping with other objects and in any random pose. Such variants can improve the grasp success rate for highly occluded objects and/or object scenes (e.g., an example is shown in FIG. 8) by avoiding difficult-to-grasp features of an object and/or by avoiding overlapping objects.”; [0029]: “The method is preferably performed using a system, an example of which is shown in FIG. 2, including: an end effector 110, a robot arm 120, a sensor suite 130, a computing system 140, and/or any other suitable components. The system functions to enable selection of a candidate grasp location and/or articulate the robot arm to grasp an object 104 associated with the grasp location.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Barry to incorporate the teachings of Ku and provide a means to clear a plurality of objects occluding a target object, with a reasonable expectation of success, in order to clear a path for the robot to access the target object.
Regarding claim 19, modified Barry teaches the limitations of claim 15. Barry further teaches wherein the robot manipulator is mounted on a mobile base (see at least Fig. 1A and [0020]: “Referring to FIGS. 1A and 1B, the robot 100 includes a body 110 with locomotion based structures such as legs 120a-d coupled to the body 110 that enable the robot 100 to move about the environment 10. In some examples, each leg 120 is an articulable structure such that one or more joints J permit members 122 of the leg 120 to move.”).
Regarding claim 20, modified Barry teaches the limitations of claim 19. Barry further teaches wherein said mobile base is a legged robot (see at least Fig. 1A and [0020]: “Referring to FIGS. 1A and 1B, the robot 100 includes a body 110 with locomotion based structures such as legs 120a-d coupled to the body 110 that enable the robot 100 to move about the environment 10. In some examples, each leg 120 is an articulable structure such that one or more joints J permit members 122 of the leg 120 to move.”).
13. Claims 8 and 13-14 are rejected under 35 U.S.C. 103 as being unpatentable over Barry et al. (US 20220193894, hereinafter Barry) in view of Ku et al. (US 20220016767, hereinafter Ku), and further in view of Prats (US 20170326728, hereinafter Prats).
Regarding claim 8, Barry teaches a method for grasp synthesis of non-occluded and occluded objects with a camera-equipped robot manipulator (see at least Fig. 1A and abstract: “The method includes receiving a selection input indicating a user-selection of a target object represented in an image corresponding to the space. The target object is for grasping by an end-effector of a robotic manipulator of the robot.”; [0024]: “The sensors 132 may include vision/image sensors, inertial sensors (e.g., an inertial measurement unit (IMU)), force sensors, and/or kinematic sensors. Some examples of sensors 132 include a camera such as a stereo camera, a time-of-flight (TOF) sensor, a scanning light-detection and ranging (LIDAR) sensor, or a scanning laser-detection and ranging (LADAR) sensor.”), said method comprising:
estimating surface of a target object (see at least [0032]: “Referring now to FIG. 1B, the sensor system 130 of the robot 100 generates a three-dimensional point cloud of sensor data 134 for an area or space or volume within the environment 10 about the robot 100. Although referred to as a three-dimensional point cloud of sensor data 134, it should be understood that the sensor data 134 may represent a three-dimensional portion of the environment 10 or a two-dimensional portion (such as a surface or plane) of the environment 10. In other words, the sensor data 134 may be a three-dimensional point cloud or a two-dimensional collection of points.”);
identifying an appropriate grasp proposal method (see at least Figs. 2A, 2B, and [0040]: “In some implementations, after the robot 100 begins to execute the initial grasp geometry 212I, the grasping system 200 may determine a new grasp geometry 212N to grasp the target object. The grasping system 200 may determine, after the robot 100 begins execution of the initial grasp geometry 212I, a new grasp geometry 212N that improves and/or refines the grasp geometry 212 being executed. Here, an improvement or refinement in the grasp geometry 212 may correspond to a grasp geometry 212 that is more efficient (e.g., a more cost effective grasp in terms of energy or motion), has a higher likelihood of success, has a more optimal execution time (e.g., faster or slower), etc., to grasp the target object when compared to the initial grasp geometry 212I.”) and grasp scoring method (see at least Figs. 2A, 2B, and [0045]: “Referring now to FIG. 2B, in some implementations, the grasp geometry generator 210 generates a plurality of candidate grasp geometries 212, 212a-n based on the selected target object within the grasp area. In particular, the grasp geometry generator 210 generates multiple candidate grasp geometries 212 and the grasping system 200 determines which of the multiple candidate grasp geometries 212 the robot 100 should use to grasp the target object. In these implementations, the grasping system 200 includes a scorer 240 that assigns a grasping score 242 to each of the plurality of candidate grasp geometries 212. The grasping score 242 indicates an estimated or projected likelihood of success that the candidate grasp geometry 212 will successfully grasp the target object.”);
proposing a series of grasp candidates according to a selected grasp proposal method (see at least Figs. 2A, 2B, and [0040]: “In some implementations, after the robot 100 begins to execute the initial grasp geometry 212I, the grasping system 200 may determine a new grasp geometry 212N to grasp the target object. The grasping system 200 may determine, after the robot 100 begins execution of the initial grasp geometry 212I, a new grasp geometry 212N that improves and/or refines the grasp geometry 212 being executed. Here, an improvement or refinement in the grasp geometry 212 may correspond to a grasp geometry 212 that is more efficient (e.g., a more cost effective grasp in terms of energy or motion), has a higher likelihood of success, has a more optimal execution time (e.g., faster or slower), etc., to grasp the target object when compared to the initial grasp geometry 212I.”; [0044]: “In some examples, the adjuster 230 determines to continue execution of the initial grasp geometry 212I. In other examples, the adjuster 230 determines to modify the initial grasp geometry 212I to generate a modified