Prosecution Insights
Last updated: April 19, 2026
Application No. 18/476,768

AUTOMATED WORKPIECE TRANSFER SYSTEMS AND METHODS OF IMPLEMENTING THEREOF

Status: Final Rejection (§103)
Filed: Sep 28, 2023
Examiner: VISCARRA, RICARDO I
Art Unit: 3657
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: ATS Corporation
OA Round: 2 (Final)

Grant Probability: 62% (Moderate)
OA Rounds: 3-4
To Grant: 3y 9m
With Interview: 90%

Examiner Intelligence

Career Allow Rate: 62% (21 granted / 34 resolved; +9.8% vs TC avg)
Interview Lift: +27.9% allowance lift in resolved cases with an interview vs. without (strong)
Avg Prosecution: 3y 9m (typical timeline)
Total Applications: 57 across all art units (23 currently pending)

Statute-Specific Performance

§101: 13.0% (-27.0% vs TC avg)
§103: 61.9% (+21.9% vs TC avg)
§102: 16.4% (-23.6% vs TC avg)
§112: 6.2% (-33.8% vs TC avg)
Tech Center averages are estimates • Based on career data from 34 resolved cases
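The headline examiner figures above are simple ratios over resolved cases. The sketch below shows how they could be reproduced from raw counts; the dashboard's exact methodology is not stated in the report, so the formulas, the assumed Tech Center baseline, and the with/without-interview split are illustrative assumptions rather than the report's actual computation.

```python
# Minimal sketch of how the headline examiner statistics above could be
# reproduced from raw counts. Only the totals (21 granted / 34 resolved)
# come from the report; the TC baseline and the interview split below are
# assumed placeholders, and the formulas are simple ratios.

granted = 21                 # from "21 granted / 34 resolved"
resolved = 34
tc_avg_allow_rate = 0.52     # assumed TC baseline implied by "+9.8% vs TC avg"

allow_rate = granted / resolved               # ~0.618 -> reported as 62%
vs_tc_avg = allow_rate - tc_avg_allow_rate    # ~+0.098 -> "+9.8% vs TC avg"

# Hypothetical split of the same 34 resolved cases by whether an examiner
# interview was held; chosen only to show the lift formula, not real data.
with_interview = {"granted": 11, "resolved": 14}
without_interview = {"granted": 10, "resolved": 20}
interview_lift = (with_interview["granted"] / with_interview["resolved"]
                  - without_interview["granted"] / without_interview["resolved"])  # ~+0.29

print(f"allow rate {allow_rate:.1%}, vs TC avg {vs_tc_avg:+.1%}, "
      f"interview lift {interview_lift:+.1%}")
```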

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant’s arguments, see Remarks, filed 09/10/2, with respect to the rejection(s) of claim(s) 1 and 19 under 35 USC 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of newly found prior art, as Applicant’s amendment changed the scope of the claims, thus necessitating the new grounds of rejection.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-5, 7-11, 13, 17, 19-23, 25-29, 31, and 35 is/are rejected under 35 U.S.C. 103 as being unpatentable over Humayun et al. (US 20220016766 A1, hereinafter Humayun) in view of Wellman et al. (US 20160167228 A, hereinafter Wellman), and further in view of Bradski et al. (US 20160221187 A1, hereinafter Bradski).

Regarding claim 1, Humayun teaches: A method of operating an autonomous pick-and-place robot to transfer a plurality of workpieces, the pick-and-place robot in communication with a processor and an imaging device (see Figs. 6-8; at least as in paragraph 0028, wherein “The method is preferably performed using the system, examples of which are shown in FIGS. 2A and 2B, including: an end effector 110, a robotic arm 120, a sensor suite 130, a computing system 140, and/or any other suitable components.
The system functions to enable selection of a grasp point 105 and/or articulate the robotic arm to grasp a target object associated with the grasp point 105”; at least as in paragraph 0031, wherein “The sensor suite can include an imaging system which preferably functions to capture images of the inference scene”; at least as in paragraph 0032, wherein “The computing system can include a control system, which can control the robotic arm, end effector, imaging systems, and/or any other system component”), the method comprising: capturing, by the imaging device, an initial image of one or more workpieces loaded onto a loading area (at least as in paragraph 0084, wherein the method begins with “capturing an image of a scene during runtime (e.g., same field of view as training image, difference field of view training image)”; at least as in paragraph 0031, wherein “The sensor suite can include an imaging system which preferably functions to capture images of the inference scene”; at least as in paragraph 0049, wherein “The scene 102 can include: a container 103, a surface, one or more objects 104 or no objects (e.g., the container or surface is empty), and/or any other components”); operating the processor to: apply a machine-learning model to the initial image to identify one or more pickable workpieces from the one or more workpieces, the machine-learning model being generated based on a set of training images in which one or more related workpieces were identified as (at least as in paragraph 0033, wherein “The object detector functions to detect objects and/or other information in images… the object detector can determine: individual instances of one or more object types, object parameters for each object (e.g., pose, principal axis, occlusion, etc.), total object count, and/or other object information”; at least as in paragraph 0034, wherein “The object detector can be a neural network”; at least as in paragraph 0023, wherein “The object detectors can be trained using synthetic data (and/or annotated real-world data) and subsequently used to guide real-world training data generation”; at least as in paragraph 0064, wherein “training images can be labelled with a plurality of grasp outcomes (e.g., a grasp outcome for each of a plurality of grasp points), and/or otherwise suitably labelled. The images can optionally be a labelled with: object parameters associated with the grasp point/pixel (e.g., as determined by the object detector and/or grasp planner), such as: a surface normal vector, a face tag, an object principal axis pose; an end effector pose (e.g., as determined by a grasp planner; an index associated therewith, such as an index along a kinematic branch for the robotic arm; in joint space, in cartesian space, etc.), and/or any other suitable label parameters”); identify a region of interest within the initial image, the region of interest comprising an engagement portion of the one or more pickable workpieces for an end-of-arm-tooling component of the pick-and-place robot to engage the one or more pickable workpieces (at least as in paragraph 0085, wherein “The trained graspability network receives the image (e.g., RGB, RGB-D) as an input, and can additionally or alternatively include object detector parameters as an additional input (an example is shown in FIG. 7). 
The trained graspability network can output: a graspability map (e.g., including a dense map of success probabilities, object parameter values, and/or robotic end effector parameter values) and/or a success probability for multiple objects' grasps (e.g., multiple points/pixels), and/or any other suitable outputs”; at least as in paragraph 0070 wherein “The graspability map preferably includes a grasp success probability for each image feature (e.g., pixel (i, j), superpixel, pixel block, pixel set, etc.), but can alternatively include a grasp failure probability, a grasp score, object parameters (e.g., wherein the network is trained based on the object parameter values for the grasp points; such as object surface normals), end effector parameters (e.g., wherein the network is trained based on the robotic manipulator parameters for the training grasps; such as gripper pose, gripper force, etc.), a confidence score (e.g., for the grasp score, grasp probability, object parameter, end effector parameter, etc.), and/or any other suitable information for any other suitable portion of the image (examples shown in FIG. 4 and FIG. 5). The image feature can depict a physical region: smaller than, larger than, substantially similar to, or otherwise related to the robotic effector's grasping area”); and based on the initial image, define a set of operating parameters for operating the end-of-arm-tooling component to retrieve the one or more pickable workpieces (at least as in paragraph 0043, wherein “The computing system can include a motion planner 148, which functions to determine control instructions for the robotic arm to execute a grasp attempt for a selected grasp point. The motion planner can employ any suitable control scheme (e.g., feedforward control, feedback control, etc.). The control instructions can include a trajectory for the robotic arm in joint (or cartesian) coordinate space, and/or can include any other suitable control instructions (e.g., CNC waypoints, etc.)”; at least as in paragraph 0058, wherein “The control instructions can be determined by a grasp planner, which can determine a robotic end effector path, robotic end effector pose, joint waypoints (e.g., in cartesian/sensor coordinate frame, in a joint coordinate frame, etc.), and/or any other suitable control instructions”; at least as in paragraph 0100, wherein “Planning the object grasp can include calculating a trajectory by performing motion planning (e.g., from a current end effector position to the pre-grasp pose and from the pre-grasp pose to the grasp pose; from a current end effector position to the grasp pose, etc.) for the grasp point and/or the grasp pose”); and operating the end-of-arm-tooling component to retrieve the one or more pickable workpieces from the loading area (at least as in paragraph 0096, wherein “Executing an object grasp at the grasp point S400 can function to grasp an object at the grasp point selected in S300. S400 can be performed for a predetermined number of grasp points selected in S300 and/or for a single grasp point selected in S300. S400 can be performed based on the output of the graspability network from S300, the grasp point selected in S300, and/or based on any other suitable information”). 
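For context on the passages quoted above, the following is a minimal, hypothetical sketch of the graspability-map flow Humayun's cited paragraphs 0070, 0085, and 0100 describe: a network scores each pixel with a grasp-success probability, the best-scoring pixel becomes the grasp point, and a coarse pre-grasp/grasp trajectory is planned toward it. None of this code comes from Humayun; the names, threshold, and calibration helper are assumptions.

```python
# Illustrative sketch (not code from Humayun) of the graspability-map flow
# described above: score every pixel, pick the highest-scoring pixel as the
# grasp point, then plan a pre-grasp/grasp trajectory. All names, thresholds,
# and helpers are hypothetical.
import numpy as np

def select_grasp_point(graspability_map: np.ndarray, min_prob: float = 0.5):
    """graspability_map: HxW array of per-pixel grasp success probabilities."""
    i, j = np.unravel_index(np.argmax(graspability_map), graspability_map.shape)
    if graspability_map[i, j] < min_prob:
        return None                          # nothing confidently pickable in this image
    return int(i), int(j)                    # pixel chosen as the grasp point

def plan_grasp(grasp_pixel, depth_image, pixel_to_robot_pose, approach_offset_m=0.10):
    """Return a coarse trajectory [pre-grasp pose, grasp pose] toward the grasp point.

    pixel_to_robot_pose is a hypothetical calibration helper that maps an image
    pixel plus depth to an [x, y, z, ...] pose array in the robot frame.
    """
    grasp_pose = np.asarray(pixel_to_robot_pose(grasp_pixel, depth_image), dtype=float)
    pre_grasp_pose = grasp_pose.copy()
    pre_grasp_pose[2] += approach_offset_m   # approach from above by an assumed offset
    return [pre_grasp_pose, grasp_pose]
```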
However, Humayun does not explicitly disclose “non-defective and pickable… and to transfer the one or more pickable workpieces… transfer the one or more pickable workpieces to a receiving area.” Wellman, in the same field of endeavor of robot manipulator control for grasping operations, specifically teaches “non-defective and pickable… and to transfer the one or more pickable workpieces” (at least as in paragraph 0085, “The grasping strategy selection module 730 can determine a grasping strategy for a particular item. For example, the grasping strategy selection module 730 may utilize information from any or all of the attribute detection module 710, the database query module 715, the human-based grasping strategy module 720, the constraints module 725, and the grasping strategy evaluation module 740 to determine a grasping strategy for a particular item and the environments in which the item is to be grasped, moved, and/or released. In addition to determining how an item is to be grasped, or as an alternative, the grasping strategy selection module 730 may be involved in determining whether to grasp something using a robotic arm 12. For example, if the attribute detection module 710 detects damage to an item 40, the grasping strategy selection module 730 may instruct an appropriate response, such as selecting a grasping strategy that includes refraining from grasping the damaged item and locating another item of the same type that is undamaged instead”; at least as in paragraph 0086, “the grasping strategy instruction module 735 may instruct movement of a mobile drive unit carrying an inventory holder to a station having a robotic arm, provide instructions to cause a shipping container to be placed in a receiving zone for the robotic arm, and instruct the robotic arm to perform a series of actions to carry out a grasping strategy that facilitates moving an inventory item from the inventory holder to the shipping container”; at least as in paragraph 0082, “the human-based grasping strategy module 720 can provide a virtual environment in which a human can perform or direct a grasping action for an item to facilitate machine learning of information for learning, developing, and/or determining a grasping strategy for the robotic arm 12 to grasp a target item”). Bradski discloses a robotic manipulator system configured to identify characteristics of an object within an environment and determine potential grasp points on the object based on the identified characteristics. Bradski specifically discloses “transfer the one or more pickable workpieces to a receiving area” (at least as in paragraph 0061, wherein “The robotic arm 102 may then be controlled to pick up the box 222 using gripper 104 and place the box 222 onto the conveyer belt 110 (e.g., to transport box 222 into a storage area)”; at least as in paragraph 0152, wherein “at block 510, method 500 involves providing instructions to cause the robotic manipulator to grip the physical object at the selected grasp point with the gripper and move the physical object through the determined motion path to the drop-off location”). 
Therefore, it would have been obvious to one of the ordinary skill in the art at the effective filing date of the instant invention to modify the teachings of Humayun, to include Wellman’s teaching of a manipulator control system determining a grasp strategy based on whether an object is damaged and should be grasped and Bradski's teaching of a manipulator system configured to determine a motion path for the gripper to follow so that the robotic arm can move the object to a particular drop-off location, since Wellman teaches wherein the grasping control system increases efficiency and throughput by improving the system’s capability to effectively move items by identifying target items to be grasped and Bradski teaches wherein the path planning algorithm improves the manipulator operation safety and efficiency by planning a collision-free path as well as determining the quickest movement path. Regarding claim 2, in view of the above combination of Humayun, Wellman, and Bradski, Humayun further discloses: The method of claim 1 comprising operating the processor to: identify a feature of a workpiece of the one or more workpieces shown in the initial image corresponding to a pre-determined position feature (at least as in paragraph 0052, wherein “The object parameters can be determined using an object detector… The object parameters can be: object keypoints (e.g., keypoints along the object surface, bounding box corners, side centroids, centroid, etc.), object axes (e.g., major axis, minor axis, a characteristic axis, etc.), object pose, surface normal vectors, and/or any other suitable object parameters”; at least as in paragraph 0064, wherein “The images can optionally be a labelled with: object parameters associated with the grasp point/pixel (e.g., as determined by the object detector and/or grasp planner), such as: a surface normal vector, a face tag, an object principal axis pose”); and extract position data for the workpiece based on the feature identified as corresponding to the pre-determined position feature, the position data being representative of the position of the workpiece (at least as in paragraph 0054, wherein “The images can be labelled based on grasp outcome (e.g., grasp success or grasp failure) of an object at a point associated with a selected pixel (x, y) of the image (e.g., the physical point on an object can be mapped to the pixel in the image, the image pixel can be selected and mapped to the physical point on an object, etc.), a region of pixels, a coordinate position (e.g., sensor frame, cartesian frame, joint frame, etc.), detected object region, and/or other suitable image features/coordinates. Additionally or alternatively, an object pose (and/or an image thereof) can be labelled with an outcome for a grasp point in the object coordinate frame”; at least as in paragraph 0055, wherein “The labelling can include labelling the image feature depicting the grasp point (e.g., selected grasp point, grasp point that was actually grasped, the physical point corresponding to the grasp point, etc.) 
and/or labelling a physical (3D) point in the scene (e.g., in a cartesian/sensor coordinate frame, joint coordinate frame, etc.)”; at least as in paragraph 0072, wherein “The graspability map is preferably related to the object detections (e.g., output by the object detector) via the image (e.g., via the image features of the image), but can alternatively be related to the object detections through the physical scene (e.g., wherein both the object detections and the grasp scores are mapped to a 3D representation of the scene to determine object parameter-grasp score associations)”). Regarding claim 3, in view of the above combination of Humayun, Wellman, and Bradski, Humayun further discloses: The method claim 1, comprising operating the processor to: identify a feature of a workpiece of the one or more workpieces shown in the initial image corresponding to a pre-determined orientation feature (at least as in paragraph 0052, wherein “The object parameters can be determined using an object detector… The object parameters can be: object keypoints (e.g., keypoints along the object surface, bounding box corners, side centroids, centroid, etc.), object axes (e.g., major axis, minor axis, a characteristic axis, etc.), object pose, surface normal vectors, and/or any other suitable object parameters”; at least as in paragraph 0064, wherein “The images can optionally be a labelled with: object parameters associated with the grasp point/pixel (e.g., as determined by the object detector and/or grasp planner), such as: a surface normal vector, a face tag, an object principal axis pose”); and extract orientation data for the workpiece based on the feature identified as corresponding to the pre-determined orientation feature, the orientation data being representative of the orientation of the workpiece (at least as in paragraph 0064, wherein “The images can optionally be a labelled with: object parameters associated with the grasp point/pixel (e.g., as determined by the object detector and/or grasp planner), such as: a surface normal vector, a face tag, an object principal axis pose”; at least as in paragraph 0085, wherein “The trained graspability network receives the image (e.g., RGB, RGB-D) as an input, and can additionally or alternatively include object detector parameters as an additional input (an example is shown in FIG. 7). The trained graspability network can output: a graspability map (e.g., including a dense map of success probabilities, object parameter values, and/or robotic end effector parameter values) and/or a success probability for multiple objects' grasps (e.g., multiple points/pixels), and/or any other suitable outputs”). Regarding claim 4, the above combination of Humayun, Wellman, and Bradski discloses the method of claim 1, but does not explicitly teach: wherein the set of operating parameters further comprise a first retract path defining a path along which the end-of-arm-tooling component moves away from the loading area while engaged with the one or more pickable workpieces. However, Bradski discloses a robotic manipulator system configured to identify characteristics of an object within an environment and determine potential grasp points on the object based on the identified characteristics. 
Bradski specifically discloses “wherein the set of operating parameters further comprise a first retract path defining a path along which the end-of-arm-tooling component moves away from the loading area while engaged with the one or more pickable workpieces” (at least as in paragraph 0153, wherein “FIG. 6D showing the robotic arm 602 moving object 608 through a determined motion path 614 to the drop-off location 612 and subsequently, the robotic arm 602 places the object at the drop-off location 612”; at least as in paragraph 0097, wherein “FIG. 6A showing a robotic arm 602 equipped with a sensor 604 and a gripping component 606 (“gripper 606”) for gripping an object 608 located inside a bin 610”). Therefore, it would have been obvious to one of the ordinary skill in the art at the effective filing date of the instant invention to modify the teachings of Humayun, to include Bradski's teaching of a manipulator system configured to determine a motion path for the gripper to follow so that the robotic arm can move the object to a particular drop-off location, since Bradski teaches wherein the path planning algorithm improves the manipulator operation safety and efficiency by planning a collision-free path as well as determining the quickest movement path. Regarding claim 5, in view of the above combination of Humayun, Wellman, and Bradski, Humayun further discloses: The method of claim 1, wherein the set of operating parameters further comprise a first approach angle defining an angle at which the end-of-arm-tooling component moves towards the engagement portion of the one or more pickable workpieces (at least as in paragraph 0055, wherein “The label can be a single class label per pixel, such as a binary label (e.g., 1 for grasp success, 0 for grasp fail, etc.), a percentage (e.g., grasp success likelihood, such as calculated from prior attempts to grasp points similar to the selected grasp point), and/or any other suitable label; a multi-class label per pixel, such as binary labels for different angles of arrival at a particular point on the object, grasp success score (e.g., calculated based on resultant in-hand pose, force feedback, insertion accuracy, etc.); and/or any other suitable label”; at least as in paragraph 0101, wherein “Executing the object grasp can optionally include labelling the grasp point based on the grasp outcome (e.g., label the point with a 0 for grasp fail and a 1 for grasp success, or any other suitable label), the angle of arrival, and/or otherwise labelling or not labelling the grasp point”). Regarding claim 7, in view of the above combination of Humayun, Wellman, and Bradski, Humayun further discloses: The method of claim 1, wherein: the end-of-arm-tooling component comprises a vacuum having a vacuum cup size; and the engagement portion of the one or more pickable workpieces comprises a surface area that can accommodate the vacuum cup size (at least as in paragraph 0029, wherein “In a first example, the end effector is a suction gripper”; at least as in paragraph 0061, wherein “when the end effector is a suction gripper, a pressure measurement device can measure the pressure. When the pressure change is above a threshold, the grasp point can be labelled as a grasp success and otherwise labelled as a grasp failure. 
If the pressure change is above a threshold for less than a predetermined period (e.g., before an instruction to drop the object), then the grasp point can be labelled as a grasp failure (e.g., the object was grasped and dropped)”; at least as in paragraph 0070, wherein “The image feature can depict a physical region: smaller than, larger than, substantially similar to, or otherwise related to the robotic effector's grasping area”; at least as in paragraph 0086, wherein “The grasp can be a pixel (or point associated therewith) and/or a set thereof (e.g., contiguous pixel set cooperatively representing a physical region substantially similar to the robotic manipulator's grasping area)”). Regarding claim 8, in view of the above combination of Humayun, Wellman, and Bradski, Humayun further discloses: The method of claim 1, wherein: the end-of-arm-tooling component comprises a gripper having a gripper size and a gripper stroke; and the engagement portion of the one or more pickable workpieces comprises edge portions that can accommodate the gripper size and gripper stroke (at least as in paragraph 0029, wherein “In a second example, the end effector is a claw gripper (e.g., dual prong, tri-prong, etc.)”; at least as in paragraph 0062, wherein “The grasp point can be labelled as a grasp failure when: … the finger gripper is open beyond a predetermined width… The grasp point can be labelled as a grasp success when the force between fingers is above a predetermined threshold, if the gripper is open to within a predetermined width (e.g., associated with the width of an object), and/or any other suitable condition”; at least as in paragraph 0070, wherein “The image feature can depict a physical region: smaller than, larger than, substantially similar to, or otherwise related to the robotic effector's grasping area”; at least as in paragraph 0086, wherein “The grasp can be a pixel (or point associated therewith) and/or a set thereof (e.g., contiguous pixel set cooperatively representing a physical region substantially similar to the robotic manipulator's grasping area)”). 
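The claim 7 and claim 8 limitations mapped above turn on simple geometric compatibility between the engagement portion and the end-of-arm tooling. A hedged sketch of that kind of check follows; the field names, units, and thresholds are hypothetical and are not drawn from the application or the cited art.

```python
# Hedged sketch of the geometric compatibility checks recited in claims 7-9:
# the engagement portion must accommodate the vacuum cup size, or its edges
# and surrounding clear space must accommodate the gripper size and stroke.
# Field names and units are hypothetical, not taken from the application.
import math
from dataclasses import dataclass

@dataclass
class VacuumTool:
    cup_diameter_mm: float

@dataclass
class GripperTool:
    jaw_width_mm: float   # "gripper size"
    stroke_mm: float      # maximum jaw opening

def vacuum_pickable(engagement_area_mm2: float, tool: VacuumTool) -> bool:
    cup_area_mm2 = math.pi * (tool.cup_diameter_mm / 2) ** 2
    return engagement_area_mm2 >= cup_area_mm2           # surface large enough for the cup

def gripper_pickable(edge_span_mm: float, clear_space_mm: float, tool: GripperTool) -> bool:
    fits_stroke = edge_span_mm <= tool.stroke_mm          # part fits within the jaw opening
    has_clearance = clear_space_mm >= tool.jaw_width_mm   # room around the part for the fingers
    return fits_stroke and has_clearance
```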
Regarding claim 9, in view of the above combination of Humayun, Wellman, and Bradski, Humayun further discloses: The method of claim 8, comprising operating the processor to: determine a clear space around a workpiece of the one or more workpieces; and identify pickable workpieces further based on the clear space around the workpiece, the gripper size, and the gripper stroke of the gripper (at least as in paragraph 0049, wherein “scenes can additionally or alternatively include sparse objects which are separated by at least a threshold distance, non-overlapping, non-occluded objects, or can include any other suitable object distribution”; at least as in paragraph 0033, wherein “the object detector can determine: individual instances of one or more object types, object parameters for each object (e.g., pose, principal axis, occlusion, etc.)”; at least as in paragraph 0053, wherein “The grasp point can be selected based on the object parameters determined by the object detector (e.g., using an object selector), using heuristics (e.g., proximity to an edge of the object container, amount of occlusion”; at least as in paragraph 0071, wherein “the graspability map can span pixels associated with and/or directed towards a plurality of objects of an object scene (e.g., overlapping objects, occluded objects, etc.)”; at least as in paragraph 0070, wherein “The image feature can depict a physical region: smaller than, larger than, substantially similar to, or otherwise related to the robotic effector's grasping area”; at least as in paragraph 0086, wherein “The grasp can be a pixel (or point associated therewith) and/or a set thereof (e.g., contiguous pixel set cooperatively representing a physical region substantially similar to the robotic manipulator's grasping area)”). Regarding claim 10, in view of the above combination of Humayun, Wellman, and Bradski, Humayun further discloses: The method of claim 1, comprising the set of operating parameters further comprise a second approach path defining a path along which the end-of-arm-tooling component with the one or more pickable workpieces engaged therein moves towards from the receiving area for placing the one or more pickable workpieces in the desired position and desired orientation (at least as in paragraph 0102, wherein “Executing the object grasp can optionally include determining a next trajectory for a next grasp point while executing the object grasp. The next grasp point can be the grasp point with the next best score, randomly selected, and/or otherwise selected based on the output of the graspability network from S300 (e.g., using the object selector)”; at least as in paragraph 0100, wherein “Planning the object grasp can include calculating a trajectory by performing motion planning (e.g., from a current end effector position to the pre-grasp pose and from the pre-grasp pose to the grasp pose; from a current end effector position to the grasp pose, etc.) for the grasp point and/or the grasp pose”). However, Humayun does not explicitly disclose “operating the end-of-arm-tooling component to place the one or more pickable workpieces in a desired position and a desired orientation on the receiving area.” Bradski discloses a robotic manipulator system configured to identify characteristics of an object within an environment and determine potential grasp points on the object based on the identified characteristics. 
Bradski specifically discloses “operating the end-of-arm-tooling component to place the one or more pickable workpieces in a desired position and a desired orientation on the receiving area” (at least as in paragraph 0065 & 0138, wherein “An optimizer may accept arbitrary constraints in the form of costs, such as to keep a certain distance away from objects or to approach a goal position from a given angle”; at least as in paragraph 0141, wherein “To set up the path optimization problem, the system may construct a set of waypoints in joint space which define the path, with the first waypoint being the start position, and the last waypoint the goal position”; at least as in paragraph 0139, wherein “the robotic arm may move along a path, from its initial pose to a grasp or viewpoint pose, and then to a drop-off pose or a second viewpoint pose… there may be two goal poses in Cartesian space such as the grasping pose and the drop-off pose”; at least as in paragraph 0153, wherein “the robotic arm 602 moving object 608 through a determined motion path 614 to the drop-off location 612 and subsequently, the robotic arm 602 places the object at the drop-off location 612”; at least as in paragraph 0126, wherein poses are defined in “six degree-of-freedom Cartesian pose (e.g., XYZ and three Euler angles)”). Therefore, it would have been obvious to one of the ordinary skill in the art at the effective filing date of the instant invention to modify the teachings of Humayun, to include Bradski's teaching of a manipulator system configured to determine a motion path for the gripper to follow so that the robotic arm can move the object to a particular drop-off location, since Bradski teaches wherein the path planning algorithm improves the manipulator operation safety and efficiency by planning a collision-free path as well as determining the quickest movement path. Regarding claim 11, the above combination of Humayun, Wellman, and Bradski discloses the method of claim 10, but does not explicitly teach: wherein the set of operating parameters further comprise a second approach angle defining an angle at which the end-of-arm-tooling component moves towards the receiving area while engaged with the one or more pickable workpieces for placing the one or more pickable workpieces in the desired position and desired orientation. However, Bradski discloses a robotic manipulator system configured to identify characteristics of an object within an environment and determine potential grasp points on the object based on the identified characteristics. 
Bradski specifically discloses “wherein the set of operating parameters further comprise a second approach angle defining an angle at which the end-of-arm-tooling component moves towards the receiving area while engaged with the one or more pickable workpieces for placing the one or more pickable workpieces in the desired position and desired orientation” (at least as in paragraph 0097, wherein “the robotic arm 602 may be configured to move the object 608 to drop-off location 612”; at least as in paragraph 0065 & 0138, wherein “An optimizer may accept arbitrary constraints in the form of costs, such as to keep a certain distance away from objects or to approach a goal position from a given angle”; at least as in paragraph 0141, wherein “To set up the path optimization problem, the system may construct a set of waypoints in joint space which define the path, with the first waypoint being the start position, and the last waypoint the goal position”; at least as in paragraph 0139, wherein “the robotic arm may move along a path, from its initial pose to a grasp or viewpoint pose, and then to a drop-off pose or a second viewpoint pose… there may be two goal poses in Cartesian space such as the grasping pose and the drop-off pose”; at least as in paragraph 0153, wherein “the robotic arm 602 moving object 608 through a determined motion path 614 to the drop-off location 612 and subsequently, the robotic arm 602 places the object at the drop-off location 612”; at least as in paragraph 0126, wherein poses are defined in “six degree-of-freedom Cartesian pose (e.g., XYZ and three Euler angles)”). Therefore, it would have been obvious to one of the ordinary skill in the art at the effective filing date of the instant invention to modify the teachings of Humayun, to include Bradski's teaching of a manipulator system configured to determine a motion path for the gripper to follow so that the robotic arm can move the object to a particular drop-off location, since Bradski teaches wherein the path planning algorithm improves the manipulator operation safety and efficiency by planning a collision-free path as well as determining the quickest movement path. Regarding claim 13, in view of the above combination of Humayun, Wellman, and Bradski, Humayun further discloses: The method of claim 1, wherein: the loading area comprises a first loading area and a second loading area; and the method comprises: capturing, by the imaging device, an initial image of a first set of the one or more workpieces loaded onto the first loading area and a second image of a second set of the one or more workpieces loaded onto the second loading area (at least as in paragraph 0059, wherein “The robot executing a grasp attempt for image labelling in S100 can be: the same robotic arm which will employ the robot during production (e.g., training data generated for an individual robot) or a duplicative instance thereof, a different robotic arm (e.g., using the same type of end effector; a dedicated training robot; etc.) 
and/or any other suitable robot”; at least as in paragraph 0083, wherein “S300 is preferably iteratively performed on new images of the scene (e.g., wherein the scene can change or be static), but can alternatively be performed on old images of the same or different scene”; at least as in paragraph 0084, wherein “S300 can include: capturing an image of a scene during runtime (e.g., same field of view as training image, difference field of view training image)”; at least as in paragraph 0047, wherein “The method can be performed once, iteratively (e.g., for identical instances of each method element; for with distinct variants of method elements, etc.), repeatedly, periodically, and/or occur with any other suitable timing”); and operating the end-of-arm-tooling component to retrieve the one or more pickable workpieces from the first set of the one or more workpieces loaded onto the first loading area while operating the processor to apply the machine learning to the second image to identify one or more pickable workpieces from the second set of one or more workpieces loaded onto the second loading area (at least as in paragraph 0083, wherein “S300 is preferably performed after training (e.g., during runtime or inference), but can be performed at any other suitable time (e.g., such as during active training). S300 is preferably iteratively performed on new images of the scene (e.g., wherein the scene can change or be static), but can alternatively be performed on old images of the same or different scene”; at least as in paragraph 0084, wherein “S300 can include: capturing an image of a scene during runtime (e.g., same field of view as training image, difference field of view training image)”; at least as in paragraph 0123, wherein “wherein one or more instances of the method and/or processes described herein can be performed asynchronously (e.g., sequentially), concurrently (e.g., in parallel), or in any other suitable order by and/or using one or more instances of the systems, elements, and/or entities described herein”). 
Regarding claim 17, in view of the above combination of Humayun, Wellman, and Bradski, Humayun further discloses: The method of claim 1 comprising repeatedly (at least as in paragraph 0047, wherein “The method can be performed once, iteratively (e.g., for identical instances of each method element; for with distinct variants of method elements, etc.), repeatedly, periodically, and/or occur with any other suitable timing”; in paragraph 0123, wherein “wherein one or more instances of the method and/or processes described herein can be performed asynchronously (e.g., sequentially), concurrently (e.g., in parallel), or in any other suitable order by and/or using one or more instances of the systems, elements, and/or entities described herein”): capturing an additional image of the one or more of workpieces loaded onto the loading area (at least as in paragraph 0031, wherein “The sensor suite can include an imaging system which preferably functions to capture images of the inference scene”; at least as in paragraph 0083, wherein “S300 is preferably iteratively performed on new images of the scene (e.g., wherein the scene can change or be static)”; at least as in paragraph 0084, wherein “S300 can include: capturing an image of a scene during runtime (e.g., same field of view as training image, difference field of view training image)”); operating the processor to: apply the machine-learning model to the additional image to identify one pickable workpiece from the one or more workpieces (at least as in paragraph 0033, wherein “The object detector functions to detect objects and/or other information in images… the object detector can determine: individual instances of one or more object types, object parameters for each object (e.g., pose, principal axis, occlusion, etc.), total object count, and/or other object information”); identify a region of interest within the additional image, the region of interest comprising an engagement portion of the one pickable workpiece (at least as in paragraph 0085, wherein “The trained graspability network receives the image (e.g., RGB, RGB-D) as an input, and can additionally or alternatively include object detector parameters as an additional input (an example is shown in FIG. 7). The trained graspability network can output: a graspability map (e.g., including a dense map of success probabilities, object parameter values, and/or robotic end effector parameter values) and/or a success probability for multiple objects' grasps (e.g., multiple points/pixels), and/or any other suitable outputs”); and based on the additional image, define a set of operating parameters for operating the end-of-arm-tooling component to retrieve the one pickable workpiece (at least as in paragraph 0058, wherein “The control instructions can be determined by a grasp planner, which can determine a robotic end effector path, robotic end effector pose, joint waypoints (e.g., in cartesian/sensor coordinate frame, in a joint coordinate frame, etc.), and/or any other suitable control instructions”); and operating the end-of-arm-tooling component to retrieve the one pickable workpiece from the loading area and transfer the one pickable workpiece to the receiving area according to the set of operating parameters (at least as in paragraph 0096, wherein “Executing an object grasp at the grasp point S400 can function to grasp an object at the grasp point selected in S300). 
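Claims 13 and 17, addressed above, describe a pipelined and repeated pick cycle: while the robot retrieves workpieces from one loading area, the processor applies the model to the image of the other area, and the cycle repeats. A minimal sketch of that pattern is shown below; the camera, model, and robot interfaces are hypothetical stand-ins, not APIs from the cited references.

```python
# Sketch of the pipelined, repeated cycle in claims 13 and 17: pick from one
# loading area while inference runs on the image of the other, then swap.
# The camera/model/robot objects and their methods are hypothetical stand-ins.
import concurrent.futures as cf

def run_pick_cycles(camera, model, robot, areas=("area_1", "area_2"), cycles=10):
    with cf.ThreadPoolExecutor(max_workers=1) as pool:
        pending = pool.submit(model.identify_pickable, camera.capture(areas[0]))
        for n in range(cycles):
            current, other = areas[n % 2], areas[(n + 1) % 2]
            pickable = pending.result()                        # results for the current area
            pending = pool.submit(model.identify_pickable,     # start inference for the other area
                                  camera.capture(other))
            for workpiece in pickable:                         # retrieve while inference runs
                robot.pick_and_place(workpiece, source=current)
```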
Regarding claim 19, Humayan teaches: A system to transfer a plurality of workpieces, the system comprising: an imaging device operable to capture an initial image of one or more workpieces loaded onto a loading area (at least as in paragraph 0084, wherein the method begins with “capturing an image of a scene during runtime (e.g., same field of view as training image, difference field of view training image)”; at least as in paragraph 0031, wherein “The sensor suite can include an imaging system which preferably functions to capture images of the inference scene”; at least as in paragraph 0049, wherein “The scene 102 can include: a container 103, a surface, one or more objects 104 or no objects (e.g., the container or surface is empty), and/or any other components”); an autonomous pick-and-place robot comprising an end-of-arm-tooling component operable to retrieve one or more pickable workpieces from the loading area (see Figs. 6-8; at least as in paragraph 0028, wherein “The method is preferably performed using the system, examples of which are shown in FIGS. 2A and 2B, including: an end effector 110, a robotic arm 120, a sensor suite 130, a computing system 140, and/or any other suitable components. The system functions to enable selection of a grasp point 105 and/or articulate the robotic arm to grasp a target object associated with the grasp point 105”); and a processor in communication with the imaging device and the pick-and-place robot (at least as in paragraph 0032, wherein “The computing system can include a control system, which can control the robotic arm, end effector, imaging systems, and/or any other system component”), the processor operable to: apply a machine-learning model to the initial image to identify the one or more pickable workpieces from the one or more workpieces, the machine-learning model being generated based on a set of training images in which one or more related workpieces were identified as (at least as in paragraph 0033, wherein “The object detector functions to detect objects and/or other information in images… the object detector can determine: individual instances of one or more object types, object parameters for each object (e.g., pose, principal axis, occlusion, etc.), total object count, and/or other object information”; at least as in paragraph 0034, wherein “The object detector can be a neural network”; at least as in paragraph 0023, wherein “The object detectors can be trained using synthetic data (and/or annotated real-world data) and subsequently used to guide real-world training data generation”; at least as in paragraph 0064, wherein “training images can be labelled with a plurality of grasp outcomes (e.g., a grasp outcome for each of a plurality of grasp points), and/or otherwise suitably labelled. 
The images can optionally be a labelled with: object parameters associated with the grasp point/pixel (e.g., as determined by the object detector and/or grasp planner), such as: a surface normal vector, a face tag, an object principal axis pose; an end effector pose (e.g., as determined by a grasp planner; an index associated therewith, such as an index along a kinematic branch for the robotic arm; in joint space, in cartesian space, etc.), and/or any other suitable label parameters”); identify a region of interest within the initial image, the region of interest comprising an engagement portion of the one or more pickable workpieces for the end-of-arm-tooling component of the pick-and-place robot to engage the one or more pickable workpieces (at least as in paragraph 0085, wherein “The trained graspability network receives the image (e.g., RGB, RGB-D) as an input, and can additionally or alternatively include object detector parameters as an additional input (an example is shown in FIG. 7). The trained graspability network can output: a graspability map (e.g., including a dense map of success probabilities, object parameter values, and/or robotic end effector parameter values) and/or a success probability for multiple objects' grasps (e.g., multiple points/pixels), and/or any other suitable outputs”; at least as in paragraph 0070 wherein “The graspability map preferably includes a grasp success probability for each image feature (e.g., pixel (i, j), superpixel, pixel block, pixel set, etc.), but can alternatively include a grasp failure probability, a grasp score, object parameters (e.g., wherein the network is trained based on the object parameter values for the grasp points; such as object surface normals), end effector parameters (e.g., wherein the network is trained based on the robotic manipulator parameters for the training grasps; such as gripper pose, gripper force, etc.), a confidence score (e.g., for the grasp score, grasp probability, object parameter, end effector parameter, etc.), and/or any other suitable information for any other suitable portion of the image (examples shown in FIG. 4 and FIG. 5). The image feature can depict a physical region: smaller than, larger than, substantially similar to, or otherwise related to the robotic effector's grasping area”); and based on the initial image, define the set of operating parameters for operating the end-of-arm-tooling component to retrieve the one or more pickable workpieces (at least as in paragraph 0043, wherein “The computing system can include a motion planner 148, which functions to determine control instructions for the robotic arm to execute a grasp attempt for a selected grasp point. The motion planner can employ any suitable control scheme (e.g., feedforward control, feedback control, etc.). 
The control instructions can include a trajectory for the robotic arm in joint (or cartesian) coordinate space, and/or can include any other suitable control instructions (e.g., CNC waypoints, etc.)”; at least as in paragraph 0058, wherein “The control instructions can be determined by a grasp planner, which can determine a robotic end effector path, robotic end effector pose, joint waypoints (e.g., in cartesian/sensor coordinate frame, in a joint coordinate frame, etc.), and/or any other suitable control instructions”; at least as in paragraph 0100, wherein “Planning the object grasp can include calculating a trajectory by performing motion planning (e.g., from a current end effector position to the pre-grasp pose and from the pre-grasp pose to the grasp pose; from a current end effector position to the grasp pose, etc.) for the grasp point and/or the grasp pose”). However, Humayun does not explicitly disclose “non-defective and pickable… and to transfer the one or more pickable workpieces… transfer the one or more pickable workpieces to a receiving area.” Wellman, in the same field of endeavor of robot manipulator control for grasping operations, specifically teaches “non-defective and pickable… and to transfer the one or more pickable workpieces” (at least as in paragraph 0085, “The grasping strategy selection module 730 can determine a grasping strategy for a particular item. For example, the grasping strategy selection module 730 may utilize information from any or all of the attribute detection module 710, the database query module 715, the human-based grasping strategy module 720, the constraints module 725, and the grasping strategy evaluation module 740 to determine a grasping strategy for a particular item and the environments in which the item is to be grasped, moved, and/or released. In addition to determining how an item is to be grasped, or as an alternative, the grasping strategy selection module 730 may be involved in determining whether to grasp something using a robotic arm 12. For example, if the attribute detection module 710 detects damage to an item 40, the grasping strategy selection module 730 may instruct an appropriate response, such as selecting a grasping strategy that includes refraining from grasping the damaged item and locating another item of the same type that is undamaged instead”; at least as in paragraph 0086, “the grasping strategy instruction module 735 may instruct movement of a mobile drive unit carrying an inventory holder to a station having a robotic arm, provide instructions to cause a shipping container to be placed in a receiving zone for the robotic arm, and instruct the robotic arm to perform a series of actions to carry out a grasping strategy that facilitates moving an inventory item from the inventory holder to the shipping container”; at least as in paragraph 0082, “the human-based grasping strategy module 720 can provide a virtual environment in which a human can perform or direct a grasping action for an item to facilitate machine learning of information for learning, developing, and/or determining a grasping strategy for the robotic arm 12 to grasp a target item”). Bradski discloses a robotic manipulator system configured to identify characteristics of an object within an environment and determine potential grasp points on the object based on the identified characteristics. 
Bradski specifically discloses “transfer the one or more pickable workpieces to a receiving area” (at least as in paragraph 0061, wherein “The robotic arm 102 may then be controlled to pick up the box 222 using gripper 104 and place the box 222 onto the conveyer belt 110 (e.g., to transport box 222 into a storage area)”; at least as in paragraph 0152, wherein “at block 510, method 500 involves providing instructions to cause the robotic manipulator to grip the physical object at the selected grasp point with the gripper and move the physical object through the determined motion path to the drop-off location”). Therefore, it would have been obvious to one of the ordinary skill in the art at the effective filing date of the instant invention to modify the teachings of Humayun, to include Wellman’s teaching of a manipulator control system determining a grasp strategy based on whether an object is damaged and should be grasped and Bradski's teaching of a manipulator system configured to determine a motion path for the gripper to follow so that the robotic arm can move the object to a particular drop-off location, since Wellman teaches wherein the grasping control system increases efficiency and throughput by improving the system’s capability to effectively move items by identifying target items to be grasped and Bradski teaches wherein the path planning algorithm improves the manipulator operation safety and efficiency by planning a collision-free path as well as determining the quickest movement path. Regarding claim 20, in view of the above combination of Humayun, Wellman, and Bradski, Humayun further discloses: The system of claim 19, wherein the processor is operable to: identify a feature of a workpiece of the one or more workpieces shown in the initial image corresponding to a pre-determined position feature (at least as in paragraph 0052, wherein “The object parameters can be determined using an object detector… The object parameters can be: object keypoints (e.g., keypoints along the object surface, bounding box corners, side centroids, centroid, etc.), object axes (e.g., major axis, minor axis, a characteristic axis, etc.), object pose, surface normal vectors, and/or any other suitable object parameters”; at least as in paragraph 0064, wherein “The images can optionally be a labelled with: object parameters associated with the grasp point/pixel (e.g., as determined by the object detector and/or grasp planner), such as: a surface normal vector, a face tag, an object principal axis pose”); and extract position data for the workpiece based on the feature identified as corresponding to the pre-determined position feature, the position data being representative of the position of the workpiece (at least as in paragraph 0054, wherein “The images can be labelled based on grasp outcome (e.g., grasp success or grasp failure) of an object at a point associated with a selected pixel (x, y) of the image (e.g., the physical point on an object can be mapped to the pixel in the image, the image pixel can be selected and mapped to the physical point on an object, etc.), a region of pixels, a coordinate position (e.g., sensor frame, cartesian frame, joint frame, etc.), detected object region, and/or other suitable image features/coordinates. 
Additionally or alternatively, an object pose (and/or an image thereof) can be labelled with an outcome for a grasp point in the object coordinate frame”; at least as in paragraph 0055, wherein “The labelling can include labelling the image feature depicting the grasp point (e.g., selected grasp point, grasp point that was actually grasped, the physical point corresponding to the grasp point, etc.) and/or labelling a physical (3D) point in the scene (e.g., in a cartesian/sensor coordinate frame, joint coordinate frame, etc.)”; at least as in paragraph 0072, wherein “The graspability map is preferably related to the object detections (e.g., output by the object detector) via the image (e.g., via the image features of the image), but can alternatively be related to the object detections through the physical scene (e.g., wherein both the object detections and the grasp scores are mapped to a 3D representation of the scene to determine object parameter-grasp score associations)”). Regarding claim 21, in view of the above combination of Humayun, Wellman, and Bradski, Humayun further discloses: The system of claim 19, wherein the processor is operable to: identify a feature of a workpiece of the one or more workpieces shown in the initial image corresponding to a pre-determined orientation feature (at least as in paragraph 0052, wherein “The object parameters can be determined using an object detector… The object parameters can be: object keypoints (e.g., keypoints along the object surface, bounding box corners, side centroids, centroid, etc.), object axes (e.g., major axis, minor axis, a characteristic axis, etc.), object pose, surface normal vectors, and/or any other suitable object parameters”; at least as in paragraph 0064, wherein “The images can optionally be a labelled with: object parameters associated with the grasp point/pixel (e.g., as determined by the object detector and/or grasp planner), such as: a surface normal vector, a face tag, an object principal axis pose”); and extract orientation data for the workpiece based on the feature identified as corresponding to the pre-determined orientation feature, the orientation data being representative of the orientation of the workpiece (at least as in paragraph 0064, wherein “The images can optionally be a labelled with: object parameters associated with the grasp point/pixel (e.g., as determined by the object detector and/or grasp planner), such as: a surface normal vector, a face tag, an object principal axis pose”; at least as in paragraph 0085, wherein “The trained graspability network receives the image (e.g., RGB, RGB-D) as an input, and can additionally or alternatively include object detector parameters as an additional input (an example is shown in FIG. 7). The trained graspability network can output: a graspability map (e.g., including a dense map of success probabilities, object parameter values, and/or robotic end effector parameter values) and/or a success probability for multiple objects' grasps (e.g., multiple points/pixels), and/or any other suitable outputs”). Regarding claim 22, the above combination of Humayun, Wellman, and Bradski discloses the method of claim 19, but does not explicitly teach: wherein the set of operating parameters further comprise a first retract path defining a path along which the end-of-arm-tooling component moves away from the loading area while engaged with the one or more pickable workpieces. 
Bradski discloses a robotic manipulator system configured to identify characteristics of an object within an environment and determine potential grasp points on the object based on the identified characteristics. Bradski specifically discloses “wherein the set of operating parameters further comprise a first retract path defining a path along which the end-of-arm-tooling component moves away from the loading area while engaged with the one or more pickable workpieces” (at least as in paragraph 0153, wherein “FIG. 6D showing the robotic arm 602 moving object 608 through a determined motion path 614 to the drop-off location 612 and subsequently, the robotic arm 602 places the object at the drop-off location 612”; at least as in paragraph 0097, wherein “FIG. 6A showing a robotic arm 602 equipped with a sensor 604 and a gripping component 606 (“gripper 606”) for gripping an object 608 located inside a bin 610”). Therefore, it would have been obvious to one of the ordinary skill in the art at the effective filing date of the instant invention to modify the teachings of Humayun, to include Bradski's teaching of a manipulator system configured to determine a motion path for the gripper to follow so that the robotic arm can move the object to a particular drop-off location, since Bradski teaches wherein the path planning algorithm improves the manipulator operation safety and efficiency by planning a collision-free path as well as determining the quickest movement path. Regarding claim 23, in view of the above combination of Humayun, Wellman, and Bradski, Humayun further discloses: The system of claim 19, wherein the set of operating parameters further comprise a first approach angle defining an angle at which the end-of-arm-tooling component moves towards the engagement portion of the one or more pickable workpieces (at least as in paragraph 0055, wherein “The label can be a single class label per pixel, such as a binary label (e.g., 1 for grasp success, 0 for grasp fail, etc.), a percentage (e.g., grasp success likelihood, such as calculated from prior attempts to grasp points similar to the selected grasp point), and/or any other suitable label; a multi-class label per pixel, such as binary labels for different angles of arrival at a particular point on the object, grasp success score (e.g., calculated based on resultant in-hand pose, force feedback, insertion accuracy, etc.); and/or any other suitable label”; at least as in paragraph 0101, wherein “Executing the object grasp can optionally include labelling the grasp point based on the grasp outcome (e.g., label the point with a 0 for grasp fail and a 1 for grasp success, or any other suitable label), the angle of arrival, and/or otherwise labelling or not labelling the grasp point”). Regarding claim 25, in view of the above combination of Humayun, Wellman, and Bradski, Humayun further discloses: The system of claim 19, wherein: the end-of-arm-tooling component comprises a vacuum having a vacuum cup size ; and the engagement portion of the one or more pickable workpieces comprises a surface area that can accommodate the vacuum cup size (at least as in paragraph 0029, wherein “In a first example, the end effector is a suction gripper”; at least as in paragraph 0061, wherein “when the end effector is a suction gripper, a pressure measurement device can measure the pressure. When the pressure change is above a threshold, the grasp point can be labelled as a grasp success and otherwise labelled as a grasp failure. 
If the pressure change is above a threshold for less than a predetermined period (e.g., before an instruction to drop the object), then the grasp point can be labelled as a grasp failure (e.g., the object was grasped and dropped)”; at least as in paragraph 0070, wherein “The image feature can depict a physical region: smaller than, larger than, substantially similar to, or otherwise related to the robotic effector's grasping area”; at least as in paragraph 0086, wherein “The grasp can be a pixel (or point associated therewith) and/or a set thereof (e.g., contiguous pixel set cooperatively representing a physical region substantially similar to the robotic manipulator's grasping area)”). Regarding claim 26, in view of the above combination of Humayun, Wellman, and Bradski, Humayun further discloses: The system of claim 19, wherein: the end-of-arm-tooling component comprises a gripper having a gripper size and a gripper stroke; and the engagement portion of the one or more pickable workpieces comprises edge portions that can accommodate the gripper size and gripper stroke (at least as in paragraph 0029, wherein “In a second example, the end effector is a claw gripper (e.g., dual prong, tri-prong, etc.)”; at least as in paragraph 0062, wherein “The grasp point can be labelled as a grasp failure when: … the finger gripper is open beyond a predetermined width… The grasp point can be labelled as a grasp success when the force between fingers is above a predetermined threshold, if the gripper is open to within a predetermined width (e.g., associated with the width of an object), and/or any other suitable condition”; at least as in paragraph 0070, wherein “The image feature can depict a physical region: smaller than, larger than, substantially similar to, or otherwise related to the robotic effector's grasping area”; at least as in paragraph 0086, wherein “The grasp can be a pixel (or point associated therewith) and/or a set thereof (e.g., contiguous pixel set cooperatively representing a physical region substantially similar to the robotic manipulator's grasping area)”). 
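To make the suction and gripper compatibility conditions quoted above for claims 25 and 26 concrete, here is a minimal Python sketch; the tool classes, field names, and numeric values are hypothetical illustrations, not taken from the application or the cited references.

```python
# Hypothetical sketch of the claims 25-26 compatibility checks: a suction cup
# needs enough reachable flat area on the engagement portion, and a claw
# gripper needs a graspable edge that fits within its jaw size and stroke.
from dataclasses import dataclass
import math


@dataclass
class VacuumTool:
    cup_diameter_mm: float  # "vacuum cup size"


@dataclass
class GripperTool:
    jaw_width_mm: float   # "gripper size"
    stroke_mm: float      # maximum jaw opening ("gripper stroke")


def vacuum_can_engage(tool: VacuumTool, flat_area_mm2: float) -> bool:
    """True if the flat surface area can seat the full cup footprint."""
    cup_footprint = math.pi * (tool.cup_diameter_mm / 2.0) ** 2
    return flat_area_mm2 >= cup_footprint


def gripper_can_engage(tool: GripperTool, edge_length_mm: float,
                       edge_thickness_mm: float) -> bool:
    """True if the edge is long enough for the jaws and thin enough to fit
    inside the maximum jaw opening."""
    return edge_length_mm >= tool.jaw_width_mm and edge_thickness_mm <= tool.stroke_mm


if __name__ == "__main__":
    print(vacuum_can_engage(VacuumTool(cup_diameter_mm=20.0), flat_area_mm2=500.0))
    print(gripper_can_engage(GripperTool(jaw_width_mm=15.0, stroke_mm=30.0),
                             edge_length_mm=25.0, edge_thickness_mm=8.0))
```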
Regarding claim 27, in view of the above combination of Humayun, Wellman, and Bradski, Humayun further discloses: The system of claim 26, wherein the processor is operable to: determine a clear space around a workpiece of the one or more workpieces; and identify pickable workpieces further based on the clear space around the workpiece, the gripper size, and the gripper stroke (at least as in paragraph 0049, wherein “scenes can additionally or alternatively include sparse objects which are separated by at least a threshold distance, non-overlapping, non-occluded objects, or can include any other suitable object distribution”; at least as in paragraph 0033, wherein “the object detector can determine: individual instances of one or more object types, object parameters for each object (e.g., pose, principal axis, occlusion, etc.)”; at least as in paragraph 0053, wherein “The grasp point can be selected based on the object parameters determined by the object detector (e.g., using an object selector), using heuristics (e.g., proximity to an edge of the object container, amount of occlusion”; at least as in paragraph 0071, wherein “the graspability map can span pixels associated with and/or directed towards a plurality of objects of an object scene (e.g., overlapping objects, occluded objects, etc.)”; at least as in paragraph 0070, wherein “The image feature can depict a physical region: smaller than, larger than, substantially similar to, or otherwise related to the robotic effector's grasping area”; at least as in paragraph 0086, wherein “The grasp can be a pixel (or point associated therewith) and/or a set thereof (e.g., contiguous pixel set cooperatively representing a physical region substantially similar to the robotic manipulator's grasping area)”). Regarding claim 28, in view of the above combination of Humayun, Wellman, and Bradski, Humayun further discloses: The system of claim 19, wherein: the set of operating parameters further comprise a second approach path defining a path along which the end-of-arm-tooling component with the one or more pickable workpieces engaged therein moves towards from the receiving area for placing the one or more pickable workpieces in the desired position and desired orientation (at least as in paragraph 0102, wherein “Executing the object grasp can optionally include determining a next trajectory for a next grasp point while executing the object grasp. The next grasp point can be the grasp point with the next best score, randomly selected, and/or otherwise selected based on the output of the graspability network from S300 (e.g., using the object selector)”; at least as in paragraph 0100, wherein “Planning the object grasp can include calculating a trajectory by performing motion planning (e.g., from a current end effector position to the pre-grasp pose and from the pre-grasp pose to the grasp pose; from a current end effector position to the grasp pose, etc.) for the grasp point and/or the grasp pose”). However, Humayun does not explicitly disclose “the end-of-arm-tooling component is operable to place the one or more pickable workpieces in a desired position and a desired orientation on the receiving area.” Bradski discloses a robotic manipulator system configured to identify characteristics of an object within an environment and determine potential grasp points on the object based on the identified characteristics. 
Bradski specifically discloses “the end-of-arm-tooling component is operable to place the one or more pickable workpieces in a desired position and a desired orientation on the receiving area” (at least as in paragraph 0097, wherein “the robotic arm 602 may be configured to move the object 608 to drop-off location 612”; at least as in paragraph 0065 & 0138, wherein “An optimizer may accept arbitrary constraints in the form of costs, such as to keep a certain distance away from objects or to approach a goal position from a given angle”; at least as in paragraph 0141, wherein “To set up the path optimization problem, the system may construct a set of waypoints in joint space which define the path, with the first waypoint being the start position, and the last waypoint the goal position”; at least as in paragraph 0139, wherein “the robotic arm may move along a path, from its initial pose to a grasp or viewpoint pose, and then to a drop-off pose or a second viewpoint pose… there may be two goal poses in Cartesian space such as the grasping pose and the drop-off pose”; at least as in paragraph 0153, wherein “the robotic arm 602 moving object 608 through a determined motion path 614 to the drop-off location 612 and subsequently, the robotic arm 602 places the object at the drop-off location 612”; at least as in paragraph 0126, wherein poses are defined in “six degree-of-freedom Cartesian pose (e.g., XYZ and three Euler angles)”). Therefore, it would have been obvious to one of the ordinary skill in the art at the effective filing date of the instant invention to modify the teachings of Humayun, to include Bradski's teaching of a manipulator system configured to determine a motion path for the gripper to follow so that the robotic arm can move the object to a particular drop-off location, since Bradski teaches wherein the path planning algorithm improves the manipulator operation safety and efficiency by planning a collision-free path as well as determining the quickest movement path. Regarding claim 29, the above combination of Humayun, Wellman, and Bradski discloses the method of claim 28, but does not explicitly teach: wherein the set of operating parameters further comprise a second approach angle defining an angle at which the end-of-arm-tooling component moves towards the receiving area while engaged with the one or more pickable workpieces for placing the one or more pickable workpieces in the desired position and desired orientation. However, Bradski discloses a robotic manipulator system configured to identify characteristics of an object within an environment and determine potential grasp points on the object based on the identified characteristics. 
Bradski specifically discloses “a second approach angle defining an angle at which the end-of-arm-tooling component moves towards the receiving area while engaged with the one or more pickable workpieces for placing the one or more pickable workpieces in the desired position and desired orientation” (at least as in paragraph 0097, wherein “the robotic arm 602 may be configured to move the object 608 to drop-off location 612”; at least as in paragraph 0065 & 0138, wherein “An optimizer may accept arbitrary constraints in the form of costs, such as to keep a certain distance away from objects or to approach a goal position from a given angle”; at least as in paragraph 0141, wherein “To set up the path optimization problem, the system may construct a set of waypoints in joint space which define the path, with the first waypoint being the start position, and the last waypoint the goal position”; at least as in paragraph 0139, wherein “the robotic arm may move along a path, from its initial pose to a grasp or viewpoint pose, and then to a drop-off pose or a second viewpoint pose… there may be two goal poses in Cartesian space such as the grasping pose and the drop-off pose”; at least as in paragraph 0153, wherein “the robotic arm 602 moving object 608 through a determined motion path 614 to the drop-off location 612 and subsequently, the robotic arm 602 places the object at the drop-off location 612”; at least as in paragraph 0126, wherein poses are defined in “six degree-of-freedom Cartesian pose (e.g., XYZ and three Euler angles)”). Therefore, it would have been obvious to one of the ordinary skill in the art at the effective filing date of the instant invention to modify the teachings of Humayun, to include Bradski's teaching of a manipulator system configured to determine a motion path for the gripper to follow so that the robotic arm can move the object to a particular drop-off location, since Bradski teaches wherein the path planning algorithm improves the manipulator operation safety and efficiency by planning a collision-free path as well as determining the quickest movement path. Regarding claim 31, in view of the above combination of Humayun, Wellman, and Bradski, Humayun further discloses: The system of claim 19, wherein: the loading area comprises a first loading area and a second loading area; the imaging device is operable to capture an initial image of a first set of the one or more workpieces loaded onto the first loading area and a second image of a second set of the one or more workpieces loaded onto the second loading area (at least as in paragraph 0059, wherein “The robot executing a grasp attempt for image labelling in S100 can be: the same robotic arm which will employ the robot during production (e.g., training data generated for an individual robot) or a duplicative instance thereof, a different robotic arm (e.g., using the same type of end effector; a dedicated training robot; etc.) 
and/or any other suitable robot”; at least as in paragraph 0083, wherein “S300 is preferably iteratively performed on new images of the scene (e.g., wherein the scene can change or be static), but can alternatively be performed on old images of the same or different scene”; at least as in paragraph 0084, wherein “S300 can include: capturing an image of a scene during runtime (e.g., same field of view as training image, difference field of view training image)”; at least as in paragraph 0047, wherein “The method can be performed once, iteratively (e.g., for identical instances of each method element; for with distinct variants of method elements, etc.), repeatedly, periodically, and/or occur with any other suitable timing”); and the end-of-arm-tooling component is operable to retrieve the one or more pickable workpieces from the first set of the one or more workpieces loaded onto the first loading area while the processor is operated to apply the machine learning to the second image to identify one or more pickable workpieces from the second set of one or more workpieces loaded onto the second loading area (at least as in paragraph 0083, wherein “S300 is preferably performed after training (e.g., during runtime or inference), but can be performed at any other suitable time (e.g., such as during active training). S300 is preferably iteratively performed on new images of the scene (e.g., wherein the scene can change or be static), but can alternatively be performed on old images of the same or different scene”; at least as in paragraph 0084, wherein “S300 can include: capturing an image of a scene during runtime (e.g., same field of view as training image, difference field of view training image)”; at least as in paragraph 0123, wherein “wherein one or more instances of the method and/or processes described herein can be performed asynchronously (e.g., sequentially), concurrently (e.g., in parallel), or in any other suitable order by and/or using one or more instances of the systems, elements, and/or entities described herein”). 
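The claim 31 passage above hinges on overlapping retrieval from the first loading area with inference on the second area's image. A minimal sketch of that overlap, assuming hypothetical function names and placeholder timings rather than anything from the cited references:

```python
# Hypothetical sketch of the claim 31 overlap: run model inference on the
# second loading area's image in a worker thread while the robot retrieves
# the already-identified workpieces from the first loading area.
from concurrent.futures import ThreadPoolExecutor
import time


def identify_pickable(image_name: str) -> list[str]:
    """Stand-in for applying the machine-learning model to an image."""
    time.sleep(0.2)  # pretend inference latency
    return [f"{image_name}:workpiece_1"]


def retrieve(workpieces: list[str]) -> None:
    """Stand-in for commanding the end-of-arm tooling through a pick."""
    time.sleep(0.3)  # pretend motion time
    print("picked", workpieces)


if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=1) as pool:
        picks_area_1 = identify_pickable("image_area_1")
        # Inference on area 2 proceeds while area 1 is being emptied.
        future_area_2 = pool.submit(identify_pickable, "image_area_2")
        retrieve(picks_area_1)
        retrieve(future_area_2.result())
```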
Regarding claim 35, in view of the above combination of Humayun, Wellman, and Bradski, Humayun further discloses: The system of claim 19, wherein: the imaging device is operable to capture additional images of the one or more of workpieces loaded onto the loading area (at least as in paragraph 0031, wherein “The sensor suite can include an imaging system which preferably functions to capture images of the inference scene”; at least as in paragraph 0083, wherein “S300 is preferably iteratively performed on new images of the scene (e.g., wherein the scene can change or be static)”; at least as in paragraph 0084, wherein “S300 can include: capturing an image of a scene during runtime (e.g., same field of view as training image, difference field of view training image)”); and for each additional image (at least as in paragraph 0047, wherein “The method can be performed once, iteratively (e.g., for identical instances of each method element; for with distinct variants of method elements, etc.), repeatedly, periodically, and/or occur with any other suitable timing”; in paragraph 0123, wherein “wherein one or more instances of the method and/or processes described herein can be performed asynchronously (e.g., sequentially), concurrently (e.g., in parallel), or in any other suitable order by and/or using one or more instances of the systems, elements, and/or entities described herein”), the processor is operable to: apply the machine-learning model to the additional image to identify one pickable workpiece from the one or more workpieces (at least as in paragraph 0033, wherein “The object detector functions to detect objects and/or other information in images… the object detector can determine: individual instances of one or more object types, object parameters for each object (e.g., pose, principal axis, occlusion, etc.), total object count, and/or other object information”); identify a region of interest within the additional image, the region of interest comprising an engagement portion of the one pickable workpiece (at least as in paragraph 0085, wherein “The trained graspability network receives the image (e.g., RGB, RGB-D) as an input, and can additionally or alternatively include object detector parameters as an additional input (an example is shown in FIG. 7). The trained graspability network can output: a graspability map (e.g., including a dense map of success probabilities, object parameter values, and/or robotic end effector parameter values) and/or a success probability for multiple objects' grasps (e.g., multiple points/pixels), and/or any other suitable outputs”); and based on the additional image, define a set of operating parameters for operating the end-of-arm-tooling component to retrieve the one pickable workpiece (at least as in paragraph 0058, wherein “The control instructions can be determined by a grasp planner, which can determine a robotic end effector path, robotic end effector pose, joint waypoints (e.g., in cartesian/sensor coordinate frame, in a joint coordinate frame, etc.), and/or any other suitable control instructions”); and the end-of-arm-tooling component is operable to retrieve the one pickable workpiece from the loading area and transfer the one pickable workpiece to the receiving area according to the set of operating parameters (at least as in paragraph 0096, wherein “Executing an object grasp at the grasp point S400 can function to grasp an object at the grasp point selected in S300). Claim(s) 6, 12, 24, and 30 is/are rejected under 35 U.S.C. 
103 as being unpatentable over Humayun et al. (US 20220016766 A1, hereinafter Humayun) in view of Wellman et al. (US 20160167228 A, hereinafter Wellman) and Bradski et al. (US 20160221187 A1, hereinafter Bradski), and further in view of Li (US 20230297068 A1). Regarding claim 6, the above combination of Humayun, Wellman, and Bradski discloses the method of claim 5, but does not explicitly teach: wherein the set of operating parameters further comprise a first retract angle defining an angle at which the end-of-arm-tooling component moves away from the loading area while engaged with the one or more pickable workpieces. However, Li discloses a robot system configured to, based on various pick-up conditions for grasping operations, grasp a plurality of objects, generate training data, and train a machine learning model. Li specifically teaches “wherein the set of operating parameters further comprise a first retract angle defining an angle at which the end-of-arm-tooling component moves away from the loading area while engaged with the one or more pickable workpieces” (at least as in paragraph 0055, wherein “The receiving unit 110 may receive the pick-up condition and store such a pick-up condition in the storage unit 14, the pick-up condition being input by the user via the input unit 12 and including the information on the movable range of the pick-up hand 31. Specifically, the receiving unit 110 may receive information and store such information in the storage unit 14, the information indicating a limit value of an operation parameter indicating the movable range of the pick-up hand 31, such as a limit range of a gripping width in an open/closed state in the case of the gripping pick-up hand 31, a limit range of an operation angle of each joint in a case where the pick-up hand 31 has an articulated structure, and a limit range of the angle of inclination of the pick-up hand 31 upon pick-up”; see also 0072). Therefore, it would have been obvious to one of the ordinary skill in the art at the effective filing date of the instant invention to modify the teachings of Humayun, to include Li’s teaching of a manipulator system configured to determine candidates for the pick-position of the workpiece based on the pick-up conditions, since Li teaches wherein the manipulator system with a pick-up condition limiting the range of an operation angle avoids collision with a surrounding obstacle thus improving operation safety and efficiency. Regarding claim 12, in view of the above combination of Humayun, Wellman, and Bradski, Humayun further discloses: The method of claim 1, comprising operating the processor to select the end-of-arm-tooling component (at least as in paragraph 0029, wherein “The end effector can be impactive, ingressive, astrictive, contigutive, and/or any other suitable type of end effector”; at least as in paragraph 0054, wherein “Additionally or alternatively, the images can be labelled with end effector parameters (e.g., gripper state, grasp pressure/force, etc.)”; at least as in paragraph 0059, wherein “The grasp outcome for the selected grasp pixel is preferably determined based on the type of end effector performing the grasp attempt”; at least as in paragraph 0097, wherein “The grasp is preferably executed by the computing system and/or the robot (e.g., with the same end effector used to generate the labelled images or a different end effector)”). 
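Li's pick-up conditions, as mapped onto the first-retract-angle limitation of claim 6 above, reduce to keeping a commanded angle inside a configured limit range. A minimal sketch under that reading, with hypothetical names and limits (the claim 12 analysis resumes below the sketch):

```python
# Hypothetical sketch of a first retract angle constrained to a limit range,
# in the spirit of Li's "limit range of the angle of inclination" condition.
from dataclasses import dataclass


@dataclass
class RetractAngleLimits:
    min_deg: float
    max_deg: float


def clamp_retract_angle(requested_deg: float, limits: RetractAngleLimits) -> float:
    """Keep the retract angle inside the allowed range (e.g., to avoid
    sweeping the tooling into nearby obstacles while still engaged)."""
    return max(limits.min_deg, min(limits.max_deg, requested_deg))


if __name__ == "__main__":
    limits = RetractAngleLimits(min_deg=30.0, max_deg=90.0)
    print(clamp_retract_angle(95.0, limits))  # 90.0
    print(clamp_retract_angle(45.0, limits))  # 45.0
```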
However, Humayun does not explicitly disclose “from amongst a plurality of end-of-arm-tooling components of the pick-and-place robot.” Li discloses a robot system configured to, based on various pick-up conditions for grasping operations, grasp a plurality of objects, generate training data, and train a machine learning model. Li specifically teaches “from amongst a plurality of end-of-arm-tooling components of the pick-and-place robot” (at least as in paragraph 0052, wherein “The receiving unit 110 may receive the pick-up condition, which includes the information on the type of pick-up hand 31, the shape and size of the portion contacting the workpiece 50, etc., input by the user via the input unit 12, and may store the pick-up condition in the later-described storage unit 14. That is, the receiving unit 110 may receive information and store such information in the storage unit 14, the information including information on whether the pick-up hand 31 is of the air suction type or the gripping type, information on the shape and size of a suction pad contact portion where the pick-up hand 31 contacts the workpiece 50, information on the number of suction pads, information on the interval and distribution of a plurality of pads in a case where the pick-up hand 31 has the plurality of suction pads, and information on the shape and size of a portion where a gripping finger of the pick-up hand 31 contacts the workpiece 50, the number of gripping fingers, and the interval and distribution of the gripping fingers in a case where the pick-up hand 31 is of the gripping type”). Therefore, it would have been obvious to one of the ordinary skill in the art at the effective filing date of the instant invention to modify the teachings of Humayun, to include Li’s teaching of a manipulator system configured to determine candidates for the pick-position of the workpiece based on the pick-up conditions, since Li teaches wherein the manipulator system with a pick-up condition including the type of pick-up hand avoids collision with a surrounding obstacle thus improving operation safety and efficiency. Regarding claim 24, the above combination of Humayun, Wellman, and Bradski discloses the system of claim 23, but does not explicitly teach: wherein the set of operating parameters further comprise a first retract angle defining an angle at which the end-of-arm-tooling component moves away from the loading area while engaged with the one or more pickable workpieces. However, Li discloses a robot system configured to, based on various pick-up conditions for grasping operations, grasp a plurality of objects, generate training data, and train a machine learning model. Li specifically teaches “wherein the set of operating parameters further comprise a first retract angle defining an angle at which the end-of-arm-tooling component moves away from the loading area while engaged with the one or more pickable workpieces” (at least as in paragraph 0055, wherein “The receiving unit 110 may receive the pick-up condition and store such a pick-up condition in the storage unit 14, the pick-up condition being input by the user via the input unit 12 and including the information on the movable range of the pick-up hand 31. 
Specifically, the receiving unit 110 may receive information and store such information in the storage unit 14, the information indicating a limit value of an operation parameter indicating the movable range of the pick-up hand 31, such as a limit range of a gripping width in an open/closed state in the case of the gripping pick-up hand 31, a limit range of an operation angle of each joint in a case where the pick-up hand 31 has an articulated structure, and a limit range of the angle of inclination of the pick-up hand 31 upon pick-up”; see also 0072). Therefore, it would have been obvious to one of the ordinary skill in the art at the effective filing date of the instant invention to modify the teachings of Humayun, to include Li’s teaching of a manipulator system configured to determine candidates for the pick-position of the workpiece based on the pick-up conditions, since Li teaches wherein the manipulator system with a pick-up condition limiting the range of an operation angle avoids collision with a surrounding obstacle thus improving operation safety and efficiency. Regarding claim 30, in view of the above combination of Humayun, Wellman, and Bradski, Humayun further discloses: The system of claim 19, wherein the processor is operable to select the end-of-arm-tooling component (at least as in paragraph 0029, wherein “The end effector can be impactive, ingressive, astrictive, contigutive, and/or any other suitable type of end effector”; at least as in paragraph 0054, wherein “Additionally or alternatively, the images can be labelled with end effector parameters (e.g., gripper state, grasp pressure/force, etc.)”; at least as in paragraph 0059, wherein “The grasp outcome for the selected grasp pixel is preferably determined based on the type of end effector performing the grasp attempt”; at least as in paragraph 0097, wherein “The grasp is preferably executed by the computing system and/or the robot (e.g., with the same end effector used to generate the labelled images or a different end effector)”). However, Humayun does not explicitly disclose “from amongst a plurality of end-of-arm-tooling components of the pick-and-place robot.” Li discloses a robot system configured to, based on various pick-up conditions for grasping operations, grasp a plurality of objects, generate training data, and train a machine learning model. Li specifically teaches “from amongst a plurality of end-of-arm-tooling components of the pick-and-place robot” (at least as in paragraph 0052, wherein “The receiving unit 110 may receive the pick-up condition, which includes the information on the type of pick-up hand 31, the shape and size of the portion contacting the workpiece 50, etc., input by the user via the input unit 12, and may store the pick-up condition in the later-described storage unit 14. 
That is, the receiving unit 110 may receive information and store such information in the storage unit 14, the information including information on whether the pick-up hand 31 is of the air suction type or the gripping type, information on the shape and size of a suction pad contact portion where the pick-up hand 31 contacts the workpiece 50, information on the number of suction pads, information on the interval and distribution of a plurality of pads in a case where the pick-up hand 31 has the plurality of suction pads, and information on the shape and size of a portion where a gripping finger of the pick-up hand 31 contacts the workpiece 50, the number of gripping fingers, and the interval and distribution of the gripping fingers in a case where the pick-up hand 31 is of the gripping type”). Therefore, it would have been obvious to one of the ordinary skill in the art at the effective filing date of the instant invention to modify the teachings of Humayun, to include Li’s teaching of a manipulator system configured to determine candidates for the pick-position of the workpiece based on the pick-up conditions, since Li teaches wherein the manipulator system with a pick-up condition including the type of pick-up hand avoids collision with a surrounding obstacle thus improving operation safety and efficiency. Claim(s) 14 and 32 is/are rejected under 35 U.S.C. 103 as being unpatentable over Humayun et al. (US 20220016766 A1, hereinafter Humayun) in view of Wellman et al. (US 20160167228 A, hereinafter Wellman) and Bradski et al. (US 20160221187 A1, hereinafter Bradski), and further in view of Skyum et al. (US 20230150777 A1, hereinafter Skyum). Regarding claim 14, the above combination of Humayun, Wellman, and Bradski discloses the method of claim 1, but does not explicitly disclose wherein: the loading area is moveable between an imaging location and a picking location; and the method comprises maintaining the loading area at the imaging location while the imaging device captures the initial image and moving the loading area to the picking location prior to operating the end-of-arm-tooling component to engage the one or more pickable workpieces. However, Skyum discloses a robot system with a pick and place robot arranged to pick an object from a continuously moving feeding conveyor transporting a stream of objects in bulk. Skyum specifically teaches wherein: “the loading area is moveable between an imaging location and a picking location; and the method comprises maintaining the loading area at the imaging location while the imaging device captures the initial image and moving the loading area to the picking location prior to operating the end-of-arm-tooling component to engage the one or more pickable workpieces” (at least as in Fig. 1 & paragraph 0106, wherein “the robotic actuator RA is a gantry type actuator, i.e. 
it has a fixed part with one set of elongated elements mounted on the ground in one end adjacent to the feeding conveyor FC and in the opposite end adjacent to the induction I1, so as to allow a movable part of the robotic actuator RA to move along the elongated elements to move a controllable gripper G between a gripping area GA where to pick up and object G_O on the feeding conveyor FC and to a target area TA on the induction I1”; at least as in paragraph 0109, wherein “The basic input to the pick and place robot RA, G, CS is a sensor system with a 3D camera CM mounted on a fixed position to provide a 3D image IM of an image area IMA upstream of the position of the pick and place robot RA, G”; at least as in paragraph 0110, wherein “The image area IMA is preferably located at least a minimum distance upstream of the gripping area GA”). Therefore, it would have been obvious to one of the ordinary skill in the art at the effective filing date of the instant invention to modify the teachings of Humayun, to include Skyum’s teaching of a robot system configured to pick and place objects from a gripping area separate from an image area, since Skyum teaches wherein the robot system picks and places objects from bulk with a high rate of success and at a high throughput. Regarding claim 32, the above combination of Humayun, Wellman, and Bradski discloses the system of claim 19, but does not explicitly disclose wherein: the loading area is moveable between an imaging location and a picking location; and the loading area is operable to remain at the imaging location while the imaging device captures the initial image and move to the picking location prior to operating the end-of-arm-tooling component to engage the one or more pickable workpieces. However, Skyum discloses a robot system with a pick and place robot arranged to pick an object from a continuously moving feeding conveyor transporting a stream of objects in bulk. Skyum specifically teaches wherein: “the loading area is moveable between an imaging location and a picking location; and the loading area is operable to remain at the imaging location while the imaging device captures the initial image and move to the picking location prior to operating the end-of-arm-tooling component to engage the one or more pickable workpieces” (at least as in Fig. 1 & paragraph 0106, wherein “the robotic actuator RA is a gantry type actuator, i.e. it has a fixed part with one set of elongated elements mounted on the ground in one end adjacent to the feeding conveyor FC and in the opposite end adjacent to the induction I1, so as to allow a movable part of the robotic actuator RA to move along the elongated elements to move a controllable gripper G between a gripping area GA where to pick up and object G_O on the feeding conveyor FC and to a target area TA on the induction I1”; at least as in paragraph 0109, wherein “The basic input to the pick and place robot RA, G, CS is a sensor system with a 3D camera CM mounted on a fixed position to provide a 3D image IM of an image area IMA upstream of the position of the pick and place robot RA, G”; at least as in paragraph 0110, wherein “The image area IMA is preferably located at least a minimum distance upstream of the gripping area GA”). 
Therefore, it would have been obvious to one of the ordinary skill in the art at the effective filing date of the instant invention to modify the teachings of Humayun, to include Skyum’s teaching of a robot system configured to pick and place objects from a gripping area separate from an image area, since Skyum teaches wherein the robot system picks and places objects from bulk with a high rate of success and at a high throughput. Claim(s) 18 and 36 is/are rejected under 35 U.S.C. 103 as being unpatentable over Humayun et al. (US 20220016766 A1, hereinafter Humayun) in view of Wellman et al. (US 20160167228 A, hereinafter Wellman) and Bradski et al. (US 20160221187 A1, hereinafter Bradski), and further in view of Ito (US 20170057092 A1). Regarding claim 18, in view of the above combination of Humayun, Wellman, and Bradski, Humayun further discloses: The method of claim 1 comprising: operating the processor to: apply the machine-learning model to the initial image to identify a plurality of pickable workpieces from the one or more workpieces (at least as in paragraph 0033, wherein “The object detector functions to detect objects and/or other information in images… the object detector can determine: individual instances of one or more object types, object parameters for each object (e.g., pose, principal axis, occlusion, etc.), total object count, and/or other object information”; at least as in paragraph 0034, wherein “The object detector can be a neural network”; at least as in paragraph 0023, wherein “The object detectors can be trained using synthetic data (and/or annotated real-world data) and subsequently used to guide real-world training data generation”; at least as in paragraph 0064, wherein “training images can be labelled with a plurality of grasp outcomes (e.g., a grasp outcome for each of a plurality of grasp points), and/or otherwise suitably labelled. The images can optionally be a labelled with: object parameters associated with the grasp point/pixel (e.g., as determined by the object detector and/or grasp planner), such as: a surface normal vector, a face tag, an object principal axis pose; an end effector pose (e.g., as determined by a grasp planner; an index associated therewith, such as an index along a kinematic branch for the robotic arm; in joint space, in cartesian space, etc.), and/or any other suitable label parameters”); identify a region of interest within the initial image, the region of interest comprising an engagement portion of the first pickable workpiece for the end-of-arm-tooling component to engage the first pickable workpiece (at least as in paragraph 0085, wherein “The trained graspability network receives the image (e.g., RGB, RGB-D) as an input, and can additionally or alternatively include object detector parameters as an additional input (an example is shown in FIG. 7). 
The trained graspability network can output: a graspability map (e.g., including a dense map of success probabilities, object parameter values, and/or robotic end effector parameter values) and/or a success probability for multiple objects' grasps (e.g., multiple points/pixels), and/or any other suitable outputs”; at least as in paragraph 0070 wherein “The graspability map preferably includes a grasp success probability for each image feature (e.g., pixel (i, j), superpixel, pixel block, pixel set, etc.), but can alternatively include a grasp failure probability, a grasp score, object parameters (e.g., wherein the network is trained based on the object parameter values for the grasp points; such as object surface normals), end effector parameters (e.g., wherein the network is trained based on the robotic manipulator parameters for the training grasps; such as gripper pose, gripper force, etc.), a confidence score (e.g., for the grasp score, grasp probability, object parameter, end effector parameter, etc.), and/or any other suitable information for any other suitable portion of the image (examples shown in FIG. 4 and FIG. 5). The image feature can depict a physical region: smaller than, larger than, substantially similar to, or otherwise related to the robotic effector's grasping area”); and based on the initial image, define a first set of operating parameters for operating the end-of-arm-tooling component to retrieve the first pickable workpiece (at least as in paragraph 0043, wherein “The computing system can include a motion planner 148, which functions to determine control instructions for the robotic arm to execute a grasp attempt for a selected grasp point. The motion planner can employ any suitable control scheme (e.g., feedforward control, feedback control, etc.). The control instructions can include a trajectory for the robotic arm in joint (or cartesian) coordinate space, and/or can include any other suitable control instructions (e.g., CNC waypoints, etc.)”; at least as in paragraph 0058, wherein “The control instructions can be determined by a grasp planner, which can determine a robotic end effector path, robotic end effector pose, joint waypoints (e.g., in cartesian/sensor coordinate frame, in a joint coordinate frame, etc.), and/or any other suitable control instructions”; at least as in paragraph 0100, wherein “Planning the object grasp can include calculating a trajectory by performing motion planning (e.g., from a current end effector position to the pre-grasp pose and from the pre-grasp pose to the grasp pose; from a current end effector position to the grasp pose, etc.) 
for the grasp point and/or the grasp pose”); and after operating the end-of-arm-tooling component to retrieve the first pickable workpiece from the loading area (at least as in paragraph 0102, wherein “Executing the object grasp can optionally include determining a next trajectory for a next grasp point while executing the object grasp”; at least as in paragraph 0047, wherein “The method can be performed once, iteratively (e.g., for identical instances of each method element; for with distinct variants of method elements, etc.), repeatedly, periodically, and/or occur with any other suitable timing”; in paragraph 0123, wherein “wherein one or more instances of the method and/or processes described herein can be performed asynchronously (e.g., sequentially), concurrently (e.g., in parallel), or in any other suitable order by and/or using one or more instances of the systems, elements, and/or entities described herein”), capturing a second image of the one or more of workpieces loaded onto the loading area (at least as in paragraph 0031, wherein “The sensor suite can include an imaging system which preferably functions to capture images of the inference scene”; at least as in paragraph 0083, wherein “S300 is preferably iteratively performed on new images of the scene (e.g., wherein the scene can change or be static)”; at least as in paragraph 0084, wherein “S300 can include: capturing an image of a scene during runtime (e.g., same field of view as training image, difference field of view training image)”); and operating the processor to: identify a region of interest within the second image, the region of interest comprising an engagement portion of the second pickable workpiece for the end-of-arm-tooling component to engage the second pickable workpiece (at least as in paragraph 0085, wherein “The trained graspability network receives the image (e.g., RGB, RGB-D) as an input, and can additionally or alternatively include object detector parameters as an additional input (an example is shown in FIG. 7). The trained graspability network can output: a graspability map (e.g., including a dense map of success probabilities, object parameter values, and/or robotic end effector parameter values) and/or a success probability for multiple objects' grasps (e.g., multiple points/pixels), and/or any other suitable outputs”); and based on the second image, define a second set of operating parameters for operating the end-of-arm-tooling component to retrieve the second pickable workpiece (at least as in paragraph 0058, wherein “The control instructions can be determined by a grasp planner, which can determine a robotic end effector path, robotic end effector pose, joint waypoints (e.g., in cartesian/sensor coordinate frame, in a joint coordinate frame, etc.), and/or any other suitable control instructions”); and operating the end-of-arm-tooling component to retrieve the second pickable workpiece from the loading area set of operating parameters (at least as in paragraph 0096, wherein “Executing an object grasp at the grasp point S400 can function to grasp an object at the grasp point selected in S300”). Bradski discloses a robotic manipulator system configured to identify characteristics of an object within an environment and determine potential grasp points on the object based on the identified characteristics. 
Bradski specifically discloses “transfer the second pickable workpiece to the receiving area” (at least as in paragraph 0061, wherein “The robotic arm 102 may then be controlled to pick up the box 222 using gripper 104 and place the box 222 onto the conveyer belt 110 (e.g., to transport box 222 into a storage area)”; at least as in paragraph 0152, wherein “at block 510, method 500 involves providing instructions to cause the robotic manipulator to grip the physical object at the selected grasp point with the gripper and move the physical object through the determined motion path to the drop-off location”; at least as in paragraph 0080, wherein “the process may be repeated until all the boxes have been placed or the target container can no longer fit in another box”). Ito discloses a processing apparatus for determining a work to be picked by a robot from a plurality of works using captured images of an area on which the works are placed. Ito specifically teaches “compare the second image to the initial image to identify a second pickable workpiece in the second image corresponding to a workpiece identified as being pickable in the initial image” (at least as in paragraph 0030, wherein “When the operation of assigning the priorities of the blocks starts, the processing apparatus 200 divides, in step S11, the area in the pallet 400 within an image capturing range into a plurality of blocks… In step S12, the processing apparatus 200 obtains the orientation of the robot 300, when viewed from the three-dimensional measurement apparatus 100, at the time of picking of a work in each block. The orientation of the robot 300 may be obtained by actual image capturing by the three-dimensional measurement apparatus 100”; at least as in paragraph 0033, wherein “At this time, in step S26, after the robot 300 completes picking of the target work and before the robot 300 completes a retreat, the image capture device 130 captures the pallet 400 again. This is done to confirm in step S27 whether the picking operation of the robot 300 has changed the positions of the pickable candidate works other than the picked target work, and determine whether the candidate works can be subsequently picked. If among the pickable candidate works captured in step S21 before picking of the target work, there is a candidate work whose position remains unchanged during the picking operation of the robot 300, the information of the three-dimensional measurement result obtained in step S21 can be used intact. Therefore, whether there is a pickable candidate work may be determined in step S27 by obtaining changes in the position and orientation before and after picking based on a result of comparison between the image before picking and an image after picking. In addition, coarse position measurement may be performed using the captured image of picking, thereby performing determination”). 
Therefore, it would have been obvious to one of the ordinary skill in the art at the effective filing date of the instant invention to modify the teachings of Humayun, to include Bradski's teaching of a manipulator system configured to determine a motion path for the gripper to follow so that the robotic arm can move the object to a particular drop-off location and Ito's teaching of comparing a second captured image to a first captured image to identify candidate works, since Bradski teaches wherein the path planning algorithm improves the manipulator operation safety and efficiency by planning a collision-free path as well as determining the quickest movement path and Ito teaches wherein the processing apparatus utilizing the two images shortens takt time thus improving operation efficiency. Regarding claim 36, in view of the above combination of Humayun, Wellman, and Bradski, Humayun further discloses: The system of claim 19, wherein: the processor is operable to: apply the machine-learning model to the initial image to identify a plurality of pickable workpieces from the one or more workpieces (at least as in paragraph 0033, wherein “The object detector functions to detect objects and/or other information in images… the object detector can determine: individual instances of one or more object types, object parameters for each object (e.g., pose, principal axis, occlusion, etc.), total object count, and/or other object information”; at least as in paragraph 0034, wherein “The object detector can be a neural network”; at least as in paragraph 0023, wherein “The object detectors can be trained using synthetic data (and/or annotated real-world data) and subsequently used to guide real-world training data generation”; at least as in paragraph 0064, wherein “training images can be labelled with a plurality of grasp outcomes (e.g., a grasp outcome for each of a plurality of grasp points), and/or otherwise suitably labelled. The images can optionally be a labelled with: object parameters associated with the grasp point/pixel (e.g., as determined by the object detector and/or grasp planner), such as: a surface normal vector, a face tag, an object principal axis pose; an end effector pose (e.g., as determined by a grasp planner; an index associated therewith, such as an index along a kinematic branch for the robotic arm; in joint space, in cartesian space, etc.), and/or any other suitable label parameters”); identify a region of interest within the initial image, the region of interest comprising an engagement portion of a first pickable workpiece for the end-of-arm-tooling component to engage the first pickable workpiece (at least as in paragraph 0085, wherein “The trained graspability network receives the image (e.g., RGB, RGB-D) as an input, and can additionally or alternatively include object detector parameters as an additional input (an example is shown in FIG. 7). 
The trained graspability network can output: a graspability map (e.g., including a dense map of success probabilities, object parameter values, and/or robotic end effector parameter values) and/or a success probability for multiple objects' grasps (e.g., multiple points/pixels), and/or any other suitable outputs”; at least as in paragraph 0070 wherein “The graspability map preferably includes a grasp success probability for each image feature (e.g., pixel (i, j), superpixel, pixel block, pixel set, etc.), but can alternatively include a grasp failure probability, a grasp score, object parameters (e.g., wherein the network is trained based on the object parameter values for the grasp points; such as object surface normals), end effector parameters (e.g., wherein the network is trained based on the robotic manipulator parameters for the training grasps; such as gripper pose, gripper force, etc.), a confidence score (e.g., for the grasp score, grasp probability, object parameter, end effector parameter, etc.), and/or any other suitable information for any other suitable portion of the image (examples shown in FIG. 4 and FIG. 5). The image feature can depict a physical region: smaller than, larger than, substantially similar to, or otherwise related to the robotic effector's grasping area”); and based on the initial image, define a first set of operating parameters for operating the end-of-arm-tooling component to retrieve the first pickable workpiece (at least as in paragraph 0043, wherein “The computing system can include a motion planner 148, which functions to determine control instructions for the robotic arm to execute a grasp attempt for a selected grasp point. The motion planner can employ any suitable control scheme (e.g., feedforward control, feedback control, etc.). The control instructions can include a trajectory for the robotic arm in joint (or cartesian) coordinate space, and/or can include any other suitable control instructions (e.g., CNC waypoints, etc.)”; at least as in paragraph 0058, wherein “The control instructions can be determined by a grasp planner, which can determine a robotic end effector path, robotic end effector pose, joint waypoints (e.g., in cartesian/sensor coordinate frame, in a joint coordinate frame, etc.), and/or any other suitable control instructions”; at least as in paragraph 0100, wherein “Planning the object grasp can include calculating a trajectory by performing motion planning (e.g., from a current end effector position to the pre-grasp pose and from the pre-grasp pose to the grasp pose; from a current end effector position to the grasp pose, etc.) 
for the grasp point and/or the grasp pose”); the imaging device is operable to capture a second image of the one or more of workpieces loaded onto the loading area after the end-of-arm-tooling component retrieves the first pickable workpiece from the loading area (at least as in paragraph 0102, wherein “Executing the object grasp can optionally include determining a next trajectory for a next grasp point while executing the object grasp”; at least as in paragraph 0047, wherein “The method can be performed once, iteratively (e.g., for identical instances of each method element; for with distinct variants of method elements, etc.), repeatedly, periodically, and/or occur with any other suitable timing”; in paragraph 0123, wherein “wherein one or more instances of the method and/or processes described herein can be performed asynchronously (e.g., sequentially), concurrently (e.g., in parallel), or in any other suitable order by and/or using one or more instances of the systems, elements, and/or entities described herein”; at least as in paragraph 0031, wherein “The sensor suite can include an imaging system which preferably functions to capture images of the inference scene”; at least as in paragraph 0083, wherein “S300 is preferably iteratively performed on new images of the scene (e.g., wherein the scene can change or be static)”; at least as in paragraph 0084, wherein “S300 can include: capturing an image of a scene during runtime (e.g., same field of view as training image, difference field of view training image)”); the processor is further operable to: identify a region of interest within the second image, the region of interest comprising an engagement portion of the second pickable workpiece for the end-of-arm-tooling component to engage the second pickable workpiece (at least as in paragraph 0085, wherein “The trained graspability network receives the image (e.g., RGB, RGB-D) as an input, and can additionally or alternatively include object detector parameters as an additional input (an example is shown in FIG. 7). The trained graspability network can output: a graspability map (e.g., including a dense map of success probabilities, object parameter values, and/or robotic end effector parameter values) and/or a success probability for multiple objects' grasps (e.g., multiple points/pixels), and/or any other suitable outputs”); based on the second image, define a second set of operating parameters for operating the end-of-arm-tooling component to retrieve the second pickable workpiece (at least as in paragraph 0058, wherein “The control instructions can be determined by a grasp planner, which can determine a robotic end effector path, robotic end effector pose, joint waypoints (e.g., in cartesian/sensor coordinate frame, in a joint coordinate frame, etc.), and/or any other suitable control instructions”); and the end-of-arm-tooling component is operable to retrieve the second pickable workpiece from the loading area (at least as in paragraph 0096, wherein “Executing an object grasp at the grasp point S400 can function to grasp an object at the grasp point selected in S300”). Bradski discloses a robotic manipulator system configured to identify characteristics of an object within an environment and determine potential grasp points on the object based on the identified characteristics. 
Bradski specifically discloses “transfer the second pickable workpiece to the receiving area” (at least as in paragraph 0061, wherein “The robotic arm 102 may then be controlled to pick up the box 222 using gripper 104 and place the box 222 onto the conveyer belt 110 (e.g., to transport box 222 into a storage area)”; at least as in paragraph 0152, wherein “at block 510, method 500 involves providing instructions to cause the robotic manipulator to grip the physical object at the selected grasp point with the gripper and move the physical object through the determined motion path to the drop-off location”; at least as in paragraph 0080, wherein “the process may be repeated until all the boxes have been placed or the target container can no longer fit in another box”). Ito discloses a processing apparatus for determining a work to be picked by a robot from a plurality of works using captured images of an area on which the works are placed. Ito specifically teaches “compare the second image to the initial image to identify a second pickable workpiece in the second image corresponding to a workpiece identified as being pickable in the initial image” (at least as in paragraph 0030, wherein “When the operation of assigning the priorities of the blocks starts, the processing apparatus 200 divides, in step S11, the area in the pallet 400 within an image capturing range into a plurality of blocks… In step S12, the processing apparatus 200 obtains the orientation of the robot 300, when viewed from the three-dimensional measurement apparatus 100, at the time of picking of a work in each block. The orientation of the robot 300 may be obtained by actual image capturing by the three-dimensional measurement apparatus 100”; at least as in paragraph 0033, wherein “At this time, in step S26, after the robot 300 completes picking of the target work and before the robot 300 completes a retreat, the image capture device 130 captures the pallet 400 again. This is done to confirm in step S27 whether the picking operation of the robot 300 has changed the positions of the pickable candidate works other than the picked target work, and determine whether the candidate works can be subsequently picked. If among the pickable candidate works captured in step S21 before picking of the target work, there is a candidate work whose position remains unchanged during the picking operation of the robot 300, the information of the three-dimensional measurement result obtained in step S21 can be used intact. Therefore, whether there is a pickable candidate work may be determined in step S27 by obtaining changes in the position and orientation before and after picking based on a result of comparison between the image before picking and an image after picking. In addition, coarse position measurement may be performed using the captured image of picking, thereby performing determination”). 
Therefore, it would have been obvious to one of the ordinary skill in the art at the effective filing date of the instant invention to modify the teachings of Humayun, to include Bradski's teaching of a manipulator system configured to determine a motion path for the gripper to follow so that the robotic arm can move the object to a particular drop-off location and Ito's teaching of comparing a second captured image to a first captured image to identify candidate works, since Bradski teaches wherein the path planning algorithm improves the manipulator operation safety and efficiency by planning a collision-free path as well as determining the quickest movement path and Ito teaches wherein the processing apparatus utilizing the two images shortens takt time thus improving operation efficiency. Conclusion Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to RICARDO ICHIKAWA VISCARRA whose telephone number is (571)270-0154. The examiner can normally be reached M-F 9-12 & 2-4 PST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Adam Mott can be reached on (571) 270-5376. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /RICARDO I VISCARRA/Examiner, Art Unit 3657 /ADAM R MOTT/Supervisory Patent Examiner, Art Unit 3657
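Read together, the claim 18, 35, and 36 discussions above describe an iterative pipeline: image the loading area, apply the model to identify pickable workpieces, locate an engagement region, derive operating parameters, execute the pick, and re-image before the next pick. A minimal sketch of that loop, using hypothetical names and placeholder logic rather than anything drawn from the application or the cited references:

```python
# Hypothetical sketch of the pick loop discussed for claims 18, 35, and 36:
# image the loading area, identify pickable workpieces with a model, pick one
# according to derived operating parameters, then re-image for the next pick.
from dataclasses import dataclass


@dataclass
class OperatingParameters:
    approach_angle_deg: float
    retract_angle_deg: float


def capture_image(area: str) -> str:
    return f"frame_of_{area}"            # placeholder for the imaging device


def identify_pickable(image: str) -> list[str]:
    # Placeholder for the machine-learning model; returns workpiece ids.
    return [f"{image}:wp1", f"{image}:wp2"]


def define_parameters(image: str, workpiece: str) -> OperatingParameters:
    return OperatingParameters(approach_angle_deg=90.0, retract_angle_deg=90.0)


def pick_and_place(workpiece: str, params: OperatingParameters) -> None:
    print(f"transferred {workpiece} (approach {params.approach_angle_deg} deg)")


def run(area: str, max_picks: int = 3) -> None:
    for _ in range(max_picks):
        image = capture_image(area)      # initial or additional image
        pickable = identify_pickable(image)
        if not pickable:
            break
        target = pickable[0]
        params = define_parameters(image, target)
        pick_and_place(target, params)   # loop re-images before the next pick


if __name__ == "__main__":
    run("loading_area_A")
```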

Prosecution Timeline

Sep 28, 2023 • Application Filed
Jun 11, 2025 • Non-Final Rejection — §103
Sep 10, 2025 • Response Filed
Dec 24, 2025 • Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner involving similar technology

Patent 12558719
BINDING DEVICE, BINDING SYSTEM, METHOD FOR CONTROLLING BINDING DEVICE, AND COMPUTER READABLE STORAGE MEDIUM STORING PROGRAM
2y 5m to grant • Granted Feb 24, 2026
Patent 12545356
MICROMOBILITY ELECTRIC VEHICLE WITH WALK-ASSIST MODE
2y 5m to grant • Granted Feb 10, 2026
Patent 12528400
MOBILE FULFILLMENT CONTAINER APPARATUS, SYSTEMS, AND RELATED METHODS
2y 5m to grant • Granted Jan 20, 2026
Patent 12502781
ROBOT OFFSET SIMULATION METHOD AND APPARATUS, ELECTRONIC DEVICE, AND STORAGE MEDIUM
2y 5m to grant • Granted Dec 23, 2025
Patent 12487602
IMPROVED NAVIGATION FOR A ROBOTIC WORK TOOL
2y 5m to grant • Granted Dec 02, 2025
Study what changed to get past this examiner, based on the 5 most recent grants.

Prosecution Projections

3-4
Expected OA Rounds
62%
Grant Probability
90%
With Interview (+27.9%)
3y 9m
Median Time to Grant
Moderate
PTA Risk
Based on 34 resolved cases by this examiner. Grant probability derived from career allow rate.
