Prosecution Insights
Last updated: April 19, 2026
Application No. 16/931,232

CONTROL OF MODULAR END-OF-ARM TOOLING FOR ROBOTIC MANIPULATORS

Final Rejection §103
Filed: Jul 16, 2020
Examiner: HOQUE, SHAHEDA SHABNAM
Art Unit: 3658
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Kindred Systems Inc.
OA Round: 4 (Final)
Grant Probability: 43% (Moderate)
OA Rounds: 5-6
To Grant: 3y 1m
With Interview: 81%

Examiner Intelligence

Career Allow Rate: 43% (grants 43% of resolved cases; 25 granted / 58 resolved; -8.9% vs TC avg)
Interview Lift: +37.9% (strong), comparing allowance among resolved cases with vs. without an interview
Typical Timeline: 3y 1m avg prosecution; 38 currently pending
Career History: 96 total applications across all art units

Statute-Specific Performance

§101: 10.5% (-29.5% vs TC avg)
§103: 61.8% (+21.8% vs TC avg)
§102: 16.9% (-23.1% vs TC avg)
§112: 10.2% (-29.8% vs TC avg)

Baseline is the Tech Center average estimate • Based on career data from 58 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant argues on pages 7 and 8 of the Applicant’s remarks that “But using any suitable end of arm tool and generating a grasping strategy based on the success of previous grasping strategies do not disclose or suggest determining a first optimal order in which to use each of a plurality of different tools, or scoring and ranking candidate picks to determine the first optimal order. The mere existence of multiple end of arm tools in Stubbs does not disclose or suggest determining a first optimal order in which to use each of a plurality of different tools as claimed. Additionally, Stubbs' use of data regarding past grasping strategies does not disclose or suggest scoring and ranking candidate picks as claimed. Accordingly, Stubbs fails to disclose or render obvious the elements of claim 1.” The Examiner respectfully disagrees. Bradski already teaches determining a first optimal order (See at least Para [0021], [0026], [0048]). The rejection relies on Stubbs for the teachings of using a plurality of different tools and scoring and ranking candidate picks (See at least Para [0070]). Scoring and ranking candidate picks can be done using data regarding successful and unsuccessful grasps in the past (See at least Para [0070]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Bradski with the teachings of Stubbs and include the feature of determining the first optimal order by identifying a plurality of candidate picks of the individual objects in the set of objects, wherein each candidate pick specifies an action of picking up a specific object at a specific location with a specific tool by changing tools accordingly, and scoring and ranking the candidate picks, thereby providing flexibility, precision, and accuracy in picking up and placing different types of items from various locations.

Applicant’s arguments filed on 10/29/2025 with respect to claims 1-6, 8, 12-18, 21, and 22 have been considered but are not persuasive or are moot in view of the new grounds of rejection provided below, which were necessitated by Applicant’s amendments to the independent claims. The new grounds of rejection for the independent claims are based on Bradski in combination with Stubbs, Holz, and Wade-McCue.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2.
Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim(s) 1-5, 8, 16, 17, 18, and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Bradski et al. (US 2016/0089791 A1) (Hereinafter Bradski) in view of Stubbs et al. (US 2017/0166399 A1) (Hereinafter Stubbs), Holz et al. (D. Holz, A. Topalidou-Kyniazopoulou, J. Stückler and S. Behnke, "Real-time object detection, localization and verification for fast robotic depalletizing," 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 2015, pp. 1459-1466) (Hereinafter Holz), and further in view of Wade-McCue et al. (Design of a Multi-Modal End-Effector and Grasping System: How Integrated Design helped win the Amazon Robotics Challenge) (Hereinafter Wade-McCue).

Regarding claim 1, Bradski teaches a method, comprising: receiving a set of objects within a workspace of a robotic arm (See at least Fig 4 section 402, Para [0084] “As shown by block 402 of FIG. 4, method 400 may initially involve determining a virtual environment representing a physical environment containing a plurality of physical objects. The physical environment may be any of the types of environments containing movable objects described above, such as a bin of heterogeneous parts, a stack of boxes, or a conveyer belt containing moving objects…”); collecting information regarding the set of objects, the information including characteristics of individual objects in the set of objects (See at least Para [0084] “As shown by block 402 of FIG. 4, method 400 may initially involve determining a virtual environment representing a physical environment containing a plurality of physical objects.
The physical environment may be any of the types of environments containing movable objects described above, such as a bin of heterogeneous parts, a stack of boxes, or a conveyer belt containing moving objects. The virtual environment may contain a 2D and/or 3D representation of the physical environment using any of the types of models or virtual representations described above. In particular, the virtual environment may contain information relating to geometric shapes and/or sizes of objects.”); determining, based on the collected information and the characteristics of the individual objects, a first optimal order in which to operate on the set of objects, the first optimal order specifying at least a first pick of the first optimal order and a second pick of the first optimal order to be performed after the first pick of the first optimal order (See at least Para [0048] “Within examples, a virtual environment including a model of the objects in 2D and/or 3D may be determined and used to develop a plan or strategy for picking up the boxes. In some examples, the robot may use one or more sensors to scan an environment containing objects, as shown in FIG. 2B. As the robotic arm 102 moves, a sensor 106 on the arm may capture sensor data about the stack of boxes 220 in order to determine shapes and/or positions of individual boxes. In additional examples, a larger picture of a 3D environment may be built up by integrating information from individual (e.g., 3D) scans. Sensors performing these scans may be placed in fixed positions, on a robotic arm, and/or in other locations. According to various embodiments, scans may be constructed and used in accordance with any or all of a number of different techniques.”, Para [0026] “The simulator may plan a first action for the robot based on the inputted model of the environment and then the robot arm may perform the action. 
After every action by the robot (e.g., unloading a box),…”, Para [0021] “Example embodiments provide for systems and methods that allow a robotic device to move objects within an environment, such as to load or unload boxes or to construct or deconstruct pallets (e.g., from a container or truck bed). Initially, a 2D or 3D virtual environment or model may be constructed based on sensor data from a physical environment containing physical objects (e.g., a stack of boxes). From that model, a control system may determine a plan for moving some of the objects (e.g., loading or unloading boxes) using a robotic device. For instance, the plan may identify a first box or several boxes for the robotic device to move to a drop-off location using a robotic arm with a gripper.”, discloses first box or several boxes for the robotic device to move to a drop-off location which is construed as after first pick, second pick is performed and so on); wherein determining the first optimal order includes: identifying a plurality of candidate picks of the individual objects in the set of objects, wherein each candidate pick specifies an action of picking up a specific object at a specific location (See at least Fig 4 Section 404 “Develop a plan, based on the virtual environment, to cause a robotic manipulator to move one or more of the physical objects in the physical environment”, Para [0021] “Example embodiments provide for systems and methods that allow a robotic device to move objects within an environment, such as to load or unload boxes or to construct or deconstruct pallets (e.g., from a container or truck bed). Initially, a 2D or 3D virtual environment or model may be constructed based on sensor data from a physical environment containing physical objects (e.g., a stack of boxes). From that model, a control system may determine a plan for moving some of the objects (e.g., loading or unloading boxes) using a robotic device. 
For instance, the plan may identify a first box or several boxes for the robotic device to move to a drop-off location using a robotic arm with a gripper.”, Para [0093] “In further examples, one or more sensor scans may be triggered by an unexpected measurement. For instance, according to the virtual environment, a robot arm may be commanded to pick and place a box with expected dimensions from a particular location…”) … wherein determining the first optimal order in which to operate on the set of objects includes determining a first optimal order (See at least Para [0048] “Within examples, a virtual environment including a model of the objects in 2D and/or 3D may be determined and used to develop a plan or strategy for picking up the boxes. In some examples, the robot may use one or more sensors to scan an environment containing objects, as shown in FIG. 2B. As the robotic arm 102 moves, a sensor 106 on the arm may capture sensor data about the stack of boxes 220 in order to determine shapes and/or positions of individual boxes. In additional examples, a larger picture of a 3D environment may be built up by integrating information from individual (e.g., 3D) scans. Sensors performing these scans may be placed in fixed positions, on a robotic arm, and/or in other locations. According to various embodiments, scans may be constructed and used in accordance with any or all of a number of different techniques.”, Para [0026] “The simulator may plan a first action for the robot based on the inputted model of the environment and then the robot arm may perform the action. After every action by the robot (e.g., unloading a box),…”, Para [0021] “Example embodiments provide for systems and methods that allow a robotic device to move objects within an environment, such as to load or unload boxes or to construct or deconstruct pallets (e.g., from a container or truck bed). 
Initially, a 2D or 3D virtual environment or model may be constructed based on sensor data from a physical environment containing physical objects (e.g., a stack of boxes). From that model, a control system may determine a plan for moving some of the objects (e.g., loading or unloading boxes) using a robotic device. For instance, the plan may identify a first box or several boxes for the robotic device to move to a drop-off location using a robotic arm with a gripper.”)… wherein determining the first optimal order in which to operate on the set of objects (See at least Para [0048] “Within examples, a virtual environment including a model of the objects in 2D and/or 3D may be determined and used to develop a plan or strategy for picking up the boxes. In some examples, the robot may use one or more sensors to scan an environment containing objects, as shown in FIG. 2B. As the robotic arm 102 moves, a sensor 106 on the arm may capture sensor data about the stack of boxes 220 in order to determine shapes and/or positions of individual boxes. In additional examples, a larger picture of a 3D environment may be built up by integrating information from individual (e.g., 3D) scans. Sensors performing these scans may be placed in fixed positions, on a robotic arm, and/or in other locations. According to various embodiments, scans may be constructed and used in accordance with any or all of a number of different techniques.”, Para [0026] “The simulator may plan a first action for the robot based on the inputted model of the environment and then the robot arm may perform the action. After every action by the robot (e.g., unloading a box),…”, Para [0021] “Example embodiments provide for systems and methods that allow a robotic device to move objects within an environment, such as to load or unload boxes or to construct or deconstruct pallets (e.g., from a container or truck bed). 
Initially, a 2D or 3D virtual environment or model may be constructed based on sensor data from a physical environment containing physical objects (e.g., a stack of boxes). From that model, a control system may determine a plan for moving some of the objects (e.g., loading or unloading boxes) using a robotic device. For instance, the plan may identify a first box or several boxes for the robotic device to move to a drop-off location using a robotic arm with a gripper.”)… performing the first pick of the first optimal order (See at least Para [0021] “Example embodiments provide for systems and methods that allow a robotic device to move objects within an environment, such as to load or unload boxes or to construct or deconstruct pallets (e.g., from a container or truck bed). Initially, a 2D or 3D virtual environment or model may be constructed based on sensor data from a physical environment containing physical objects (e.g., a stack of boxes). From that model, a control system may determine a plan for moving some of the objects (e.g., loading or unloading boxes) using a robotic device. For instance, the plan may identify a first box or several boxes for the robotic device to move to a drop-off location using a robotic arm with a gripper.”, Fig 4 section 406); after performing the first pick of the first optimal order and before performing the second pick of the first optimal order, collecting additional information regarding the set of objects, the additional information including updated characteristics of individual objects in the set of objects (See at least Para [0022] “In some cases, after the process has begun and one or more objects have been moved, the environment may have changed such that the original model may no longer be valid. For instance, new information may be learned about objects underneath a box after the box has been moved. 
In other examples, one or more objects may have changed position in an unpredicted manner after a certain object was moved by the robot. In such cases, actions dictated by the current plan based on the current model may no longer be feasible or desirable. Thus, in some examples, part or all of the environment may be scanned again and a revised 3D model may be reconstructed. From the newly reconstructed model, the plan for moving objects may be modified accordingly in order to take into account changes in the physical environment that were unaccounted for in the original model and plan.”, Para [0091] “As shown by block 408 of FIG. 4, method 400 may additionally involve receiving updated sensor data after the robotic manipulator performs the first action. The updated sensor data may be received from any of the types of sensors using any of the scanning methods previously described and may cover any portion of the physical environment or the entire environment. Additionally, the updated sensor data may include data captured from different viewpoints using one or more movable sensors, such as sensors mounted on a robotic arm.”); determining, based on the additional collected information and the updated characteristics of the individual objects, a second optimal order in which to operate on the set of objects, the second optimal order specifying at least a first pick of the second optimal order to be performed after the first pick of the first optimal order, wherein the second optimal order is different than a remaining portion of the first optimal order (See at least Para [0022] “In some cases, after the process has begun and one or more objects have been moved, the environment may have changed such that the original model may no longer be valid. For instance, new information may be learned about objects underneath a box after the box has been moved. 
In other examples, one or more objects may have changed position in an unpredicted manner after a certain object was moved by the robot. In such cases, actions dictated by the current plan based on the current model may no longer be feasible or desirable. Thus, in some examples, part or all of the environment may be scanned again and a revised 3D model may be reconstructed. From the newly reconstructed model, the plan for moving objects may be modified accordingly in order to take into account changes in the physical environment that were unaccounted for in the original model and plan.”, Para [0091] “As shown by block 408 of FIG. 4, method 400 may additionally involve receiving updated sensor data after the robotic manipulator performs the first action. The updated sensor data may be received from any of the types of sensors using any of the scanning methods previously described and may cover any portion of the physical environment or the entire environment. Additionally, the updated sensor data may include data captured from different viewpoints using one or more movable sensors, such as sensors mounted on a robotic arm.”, Para [0092], Fig 4 section 408, 410, 412, Para [0100] “…In other examples, additional modifications to the plan may result in one or more additional different robot actions. In further examples, additional sensor scans may performed after removing box 322, possibly resulting in additional modifications to the plan based on additional updated sensor data.”, Para [0088] “…In additional examples, an ordering of robot actions may be chosen in order to increase feasibility of current or future objects movements …”); and performing the first pick of the second optimal order (See at least Fig 4 section 414 (Cause the robotic manipulator to perform a second action according to the modified plan)). 
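As an illustration only (not part of the record, and not Bradski's or Applicant's code; all names are hypothetical), the plan, act, re-sense, re-plan flow that the rejection maps onto Bradski's Fig. 4 (blocks 402-414) can be sketched as a loop that performs only the first pick of each freshly computed order:

```python
# Hypothetical sketch of the Fig. 4 flow characterized above:
# determine the environment, plan an order, perform the first pick,
# then re-sense and re-plan with updated object characteristics.

def pick_all(sense_environment, plan_order, execute_pick):
    """Alternate between planning an order over the remaining objects
    and performing only the first pick of that order."""
    objects = sense_environment()        # initial virtual environment
    while objects:
        order = plan_order(objects)      # e.g., the "first optimal order"
        execute_pick(order[0])           # perform the first pick only
        objects = sense_environment()    # updated characteristics
```

Note that the second pick is never taken from the stale order; each rescan may yield a different remaining order, which is the re-planning behavior the claim recites.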
However, Bradski does not explicitly spell out … identifying, based on the characteristics of the individual objects, a plurality of groups of objects, wherein each group is associated with a respective tool capable of manipulating each of the objects in the group; … with a specific tool; … in which to use each of a plurality of different tools coupled to a distal end of the robotic arm based on a number of tool changes required to execute the first optimal order in which to use each of the plurality of different tools; … includes scoring and ranking the candidate picks; and wherein the scoring and ranking are based on a likelihood that each of the candidate picks will succeed; … Wade-McCue teaches … identifying, based on the characteristics of the individual objects, a plurality of groups of objects, wherein each group is associated with a respective tool capable of manipulating each of the objects in the group (See at least Page 4 Col 2 “2) Grasping Class: By combining the grasp synthesis with tool selection, it can be said that each item belongs to a grasping class, which defines the entire approach that the grasping system takes for any given item, including which grasp synthesis algorithm and which tool is used when manipulating it. Each item belongs primarily to one of the five grasping class listed below: • Surface-normals, suction • RGB-D-Centroid, suction • RGB-D-Centroid, grip • RGB-centroid, suction • RGB-centroid, grip”); … … based on a number of tool changes required to execute the first optimal order in which to use each of the plurality of different tools (Fig. 1. 
Top: Model of Wrist including (A) Suction tool (B) Tool change motor and (C) Parallel Jaw Gripper, Bottom: The end-effector in use during the Amazon Robotics Challenge (left) picking an item with the suction tool and another one with the gripper (right); Page 4 Col 2 Para 4 “There are typically two options for manipulators, one being an articulated arm [23] or a Cartesian based system [24]. When utilizing an articulated arm, often a hybrid end-effector is required where both attachment types are integrated into the one end-effector [25] or a tool change mechanism can be used similar to a CNC Machine.”)…

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Bradski with the teachings of Wade-McCue and include the feature of identifying, based on the characteristics of the individual objects, a plurality of groups of objects, wherein each group is associated with a respective tool capable of manipulating each of the objects in the group and the tool is changed accordingly, thereby improving performance and increasing redundancy (See at least Page 5 Col 2 Para “Where modularity aided in the design process, the integration of complimentary systems such as the sucker and gripper greatly improved performance and increased redundancy.”).

Stubbs teaches … with a specific tool (See at least Para [0069] FIG. 7 illustrates an example mobile manipulator unit and various end of arm tools that may be utilized by the mobile manipulator unit to transfer inventory within the inventory management system, in accordance with at least one embodiment. FIG. 7 includes an example mobile manipulator unit 700 with associated robotic arms 710 and a plurality of end of arm tools 720-760 for grasping inventory for transfer as described herein.
In embodiments, each robotic arm 710 may be configured to utilize a particular end of arm tool…”, Fig 7); … in which to use each of a plurality of different tools coupled to a distal end of the robotic arm based on a number of tool changes required to execute the first optimal order in which to use each of the plurality of different tools (See at least Para [0069] FIG. 7 illustrates an example mobile manipulator unit and various end of arm tools that may be utilized by the mobile manipulator unit to transfer inventory within the inventory management system, in accordance with at least one embodiment. FIG. 7 includes an example mobile manipulator unit 700 with associated robotic arms 710 and a plurality of end of arm tools 720-760 for grasping inventory for transfer as described herein. In embodiments, each robotic arm 710 may be configured to utilize a particular end of arm tool, a mix of end of arm tools, or a plurality of end of arm tools 720-760. The end of arm tools may allow the mobile manipulator unit to grasp inventory items that are stored in containers of the inventory holders 230 or to grasp inventory that is loosely stored without packaging in inventory holders 230. The robotic arms may facilitate the movement of inventory items and other features of the inventory management system 210 among and between components of the inventory management system 210. The end of arm tools illustrated in FIG. 7 include a mechanical pincher 720, an adaptive gripper tool 740, and a vacuum tool 760. In embodiments, various combinations of the end of arm tools 720-760 may be utilized to grasp and transfer inventory. In an embodiment, a particular end of arm tool may be combined with another end of arm tool to be utilized on a single robotic arm. 
For example, the mechanical pincher 720 and vacuum tool 760 may be configured to work in combination to grasp and transfer inventory.”, discloses different types of end of arm tools to grasp different items, and that tools can be combined, which is construed as tool changes according to the type of item, Fig 7);… …includes scoring and ranking the candidate picks (See at least Para [0070] “For example, a target item, or characteristics thereof, may be identified, such as by optical or other sensors, in order to determine a grasping strategy for the item. The grasping strategy may be generated by the management module 215 based at least in part upon a database containing information about the item, characteristics of the item, and/or similar items, such as information indicating grasping strategies that have been successful or unsuccessful for such items in the past. Entries or information in the database may be originated and/or updated based on human input for grasping strategies, determined characteristics of a particular item, and/or machine learning related to grasping attempts of other items sharing characteristics with the particular item. It should be noted that although some end of arm tools (720-760) are included in FIG.
7, any suitable end of arm tool or end effectors may be utilized for grasping items and transferring the items according to inventory transfer embodiments described herein…”, discloses information indicating grasping strategies that have been successful or unsuccessful for such items in the past, which is construed as scoring and ranking the candidate picks); and …

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Bradski with the teachings of Stubbs and include the feature of determining the first optimal order by identifying a plurality of candidate picks of the individual objects in the set of objects, wherein each candidate pick specifies an action of picking up a specific object at a specific location with a specific tool by changing tools accordingly, and scoring and ranking the candidate picks, thereby providing flexibility, precision, and accuracy in picking up and placing different types of items from various locations.

Holz teaches wherein the scoring and ranking are based on a likelihood that each of the candidate picks will succeed (See at least Fig 2 (Success State), Page 1463 IV. EXPERIMENTS AND RESULTS In order to assess robustness and performance of our approach, we conducted a series of experiments of both individual components and the integrated platform. As evaluation criteria, we focus on the success rates and the execution times of the individual components and the overall cycle times (for picking an object from the pallet), Fig 6 (Success Rate)); …

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the instant invention to modify the method of Bradski with the teachings of Holz, since Holz teaches that the scoring and ranking are based on a likelihood that each of the candidate picks will succeed, which will lead to precise picking selection.
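Purely as an illustration of the combined teachings discussed above (hypothetical names and scoring; this is not the claimed method or any cited reference's implementation), scoring candidate picks by historical grasp success and then ordering them to reduce end-of-arm tool changes might be sketched as:

```python
# Hypothetical sketch: rank candidate picks by likelihood of success,
# then order them so each tool's picks run consecutively, reducing
# the number of tool-change operations.
from collections import namedtuple

Pick = namedtuple("Pick", "obj location tool success_rate")

def rank_picks(candidates):
    """Score each candidate by its (historical) success likelihood and
    rank highest-first."""
    return sorted(candidates, key=lambda p: p.success_rate, reverse=True)

def order_by_tool(candidates):
    """Group the ranked picks by tool, keeping the tool of the
    best-ranked pick first, to minimize tool changes."""
    ranked = rank_picks(candidates)
    tools = []
    for p in ranked:                 # tool order follows the ranking
        if p.tool not in tools:
            tools.append(p.tool)
    return [p for t in tools for p in ranked if p.tool == t]
```

With three candidates split across a suction tool and a gripper, this ordering needs one tool change instead of two, which is the intuition behind ordering picks "based on a number of tool changes required".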
Regarding claim 2, modified Bradski teaches all the elements of claim 1. Bradski further teaches the method of claim 1 wherein the characteristics of the individual objects in the set of objects includes an object type of the individual objects (See at least Para [0076] “…In some embodiments, objects may be sorted into an assigned destination location by matching against a database of location assignments indexed by object type or object ID. For instance, an object's locations may be derived from reading a barcode, considering the size of the object, and/or by recognizing a particular kind of object.”).

Regarding claim 3, modified Bradski teaches all the elements of claim 1. Bradski further teaches the method of claim 1 wherein the characteristics of the individual objects in the set of objects includes dimensions of the individual objects (See at least Para [0024] In one example implementation of the continuous import model, a robotic device (e.g. a device containing a robotic arm and an end effector for picking up objects) may be used to unload a truck. The robot may first capture a 3D model or a 2D facade (e.g., a 2D depth map of distances of objects from a horizontal or vertical plane) of the environment and input the model or facade into a planning simulator. The virtual environment may indicate particular geometric shapes and/or sizes of objects within the environment…”, discloses particular geometric shapes of objects which is construed as dimensions of the individual objects, Para [0076] “In additional examples, environment modeling of both the pick-and-place location may be used for intelligent grasp location and motion, as well as event reporting (e.g., when a place region is full or a pick region is empty). In some examples, object bounding volumes may be computed…”, Para [0093] “…For instance, according to the virtual environment, a robot arm may be commanded to pick and place a box with expected dimensions from a particular location…”).
Regarding claim 4, modified Bradski teaches all the elements of claim 1. Bradski further teaches the method of claim 1 wherein the characteristics of the individual objects in the set of objects includes locations of the individual objects (See at least Para [0076] “In additional examples, environment modeling of both the pick-and-place location may be used for intelligent grasp location and motion, as well as event reporting (e.g., when a place region is full or a pick region is empty). In some examples, object bounding volumes may be computed and/or distinguishing features of objects may be found (such as textures, colors, barcodes or OCR). In some embodiments, objects may be sorted into an assigned destination location by matching against a database of location assignments indexed by object type or object ID…”, Para [0093] “In further examples, one or more sensor scans may be triggered by an unexpected measurement. For instance, according to the virtual environment, a robot arm may be commanded to pick and place a box with expected dimensions from a particular location…”).

Regarding claim 5, modified Bradski teaches all the elements of claim 1. Bradski further teaches the method of claim 1 wherein the characteristics of the individual objects in the set of objects includes orientations of the individual objects (See at least Para [0044] “In other examples, one or more of the sensors used by a sensing system may be a RGBaD (RGB+active Depth) color or monochrome camera registered to a depth sensing device that uses active vision techniques such as projecting a pattern into a scene to enable depth triangulation between the camera or cameras and the known offset pattern projector. This type of sensor data may help enable robust segmentation.
According to various embodiments, cues such as barcodes, texture coherence, color, 3D surface properties, or printed text on the surface may also be used to identify an object and/or find its pose in order to know where and/or how to place the object (e.g., fitting the object into a fixture receptacle)…”, discloses finding the pose of an object, which is construed as the object's orientation).

Regarding claim 8, modified Bradski teaches all the elements of claim 1. Bradski further teaches the method of claim 1 wherein determining the first optimal order in which to operate on the set of objects is based on packaging types and shapes of the individual objects in the set of objects (See at least Para [0076] “In additional examples, environment modeling of both the pick-and-place location may be used for intelligent grasp location and motion, as well as event reporting (e.g., when a place region is full or a pick region is empty). In some examples, object bounding volumes may be computed and/or distinguishing features of objects may be found (such as textures, colors, barcodes or OCR). In some embodiments, objects may be sorted into an assigned destination location by matching against a database of location assignments indexed by object type or object ID.”, Para [0024] “In one example implementation of the continuous import model, a robotic device (e.g. a device containing a robotic arm and an end effector for picking up objects) may be used to unload a truck. The robot may first capture a 3D model or a 2D facade (e.g., a 2D depth map of distances of objects from a horizontal or vertical plane) of the environment and input the model or facade into a planning simulator. The virtual environment may indicate particular geometric shapes and/or sizes of objects within the environment…”).

Regarding claim 16, modified Bradski teaches all the elements of claim 1.
Bradski further teaches the method of claim 1 wherein determining the first optimal order in which to operate on the set of objects is based on a likelihood that each of the picks will reveal additional information regarding the set of objects (See at least Para [0022], Para [0023] “In some embodiments, one or more 2D or 3D sensors may continually capture image scans of the environment for reconstruction of the environment in real time. The sensors may be mounted on a moving robotic arm, on a moveable cart on which the robotic arm is mounted, and/or at one or more fixed locations within the environment. In other examples, the system may rescan periodically with one or more sensors after a certain period of time has passed. In additional examples, the system may be configured to rescan a portion or all of the environment after a specified event, such as a box pick up by the robotic device. In the example of a box pick up, a new scan may be taken of the area where the box was located (e.g., to detect objects or portions of objects that were previously obscured by the box). Additionally, a new scan may also be taken from the view where the box was located as well or instead by moving a camera mounted on the robotic device into the space. Other types of events may also be used to trigger a scan as well...”). Regarding claim 17, modified Bradski teaches all the elements of claim 1. Bradski further teaches the method of claim 1 wherein determining the first optimal order in which to operate on the set of objects is based on a likelihood that each of the picks will change the collected information regarding the set of objects (See at least Para [0022] “In some cases, after the process has begun and one or more objects have been moved, the environment may have changed such that the original model may no longer be valid. For instance, new information may be learned about objects underneath a box after the box has been moved.
In other examples, one or more objects may have changed position in an unpredicted manner after a certain object was moved by the robot. In such cases, actions dictated by the current plan based on the current model may no longer be feasible or desirable. Thus, in some examples, part or all of the environment may be scanned again and a revised 3D model may be reconstructed. From the newly reconstructed model, the plan for moving objects may be modified accordingly in order to take into account changes in the physical environment that were unaccounted for in the original model and plan.”, Para [0023] “In some embodiments, one or more 2D or 3D sensors may continually capture image scans of the environment for reconstruction of the environment in real time. The sensors may be mounted on a moving robotic arm, on a moveable cart on which the robotic arm is mounted, and/or at one or more fixed locations within the environment. In other examples, the system may rescan periodically with one or more sensors after a certain period of time has passed. In additional examples, the system may be configured to rescan a portion or all of the environment after a specified event, such as a box pick up by the robotic device. In the example of a box pick up, a new scan may be taken of the area where the box was located (e.g., to detect objects or portions of objects that were previously obscured by the box). Additionally, a new scan may also be taken from the view where the box was located as well or instead by moving a camera mounted on the robotic device into the space. Other types of events may also be used to trigger a scan as well...”). Regarding claim 18, Bradski has all the elements of claim 1.
Bradski further teaches the method of claim 1 wherein the first optimal order or the second optimal order (See at least Para [0048] “Within examples, a virtual environment including a model of the objects in 2D and/or 3D may be determined and used to develop a plan or strategy for picking up the boxes. In some examples, the robot may use one or more sensors to scan an environment containing objects, as shown in FIG. 2B. As the robotic arm 102 moves, a sensor 106 on the arm may capture sensor data about the stack of boxes 220 in order to determine shapes and/or positions of individual boxes. In additional examples, a larger picture of a 3D environment may be built up by integrating information from individual (e.g., 3D) scans. Sensors performing these scans may be placed in fixed positions, on a robotic arm, and/or in other locations. According to various embodiments, scans may be constructed and used in accordance with any or all of a number of different techniques.”, Para [0026] “The simulator may plan a first action for the robot based on the inputted model of the environment and then the robot arm may perform the action. After every action by the robot (e.g., unloading a box), the system may recapture sensor data to update portions or all of the 2D facade or 3D model (as needed by a 2D or 3D simulator respectively). In some examples, this recaptured model may be determined based in whole or in part on sensor data captured by one or more sensors located on the robot arm which capture sensor data as the robot arm moves through the physical environment. This recaptured model may then be input into the planning simulator so that the next action for the robot may be determined by the planning simulator using the actual facade or pallet pile result within the physical environment, not a simulated result.”)… However, Bradski does not explicitly spell out …is determined using a learning algorithm that processes historical information regarding the set of objects. 
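For technical orientation only, the claimed “learning algorithm that processes historical information regarding the set of objects” can be pictured as a minimal sketch that scores grasp strategies by their smoothed historical success rates. All names here (GraspHistory, best_strategy) are hypothetical illustrations and are not code from Bradski, Stubbs, or any other cited reference:

```python
# Hypothetical sketch: learning grasp-strategy scores from past outcomes.
# Nothing here is drawn from any reference cited in this Office action.

class GraspHistory:
    def __init__(self):
        # Maps (item_class, tool) -> [successes, attempts].
        self.records = {}

    def update(self, item_class, tool, succeeded):
        rec = self.records.setdefault((item_class, tool), [0, 0])
        rec[0] += int(succeeded)
        rec[1] += 1

    def score(self, item_class, tool):
        # Laplace-smoothed historical success rate; unseen pairs score 0.5.
        s, n = self.records.get((item_class, tool), (0, 0))
        return (s + 1) / (n + 2)

    def best_strategy(self, item_class, tools):
        # Rank the available tools by learned success likelihood.
        return max(tools, key=lambda t: self.score(item_class, t))


history = GraspHistory()
for outcome in (True, True, False):      # two successful suction grasps, one failure
    history.update("box", "suction", outcome)
history.update("box", "gripper", False)  # one failed gripper grasp

best = history.best_strategy("box", ["suction", "gripper"])
```

In Stubbs, the database of grasping strategies that “have been successful or unsuccessful for such items in the past” (Para [0070]) plays a role analogous to the records table in this sketch.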
Stubbs discloses …is determined using a learning algorithm that processes historical information regarding the set of objects (See at least Para [0070] “… For example, a target item, or characteristics thereof, may be identified, such as by optical or other sensors, in order to determine a grasping strategy for the item. The grasping strategy may be generated by the management module 215 based at least in part upon a database containing information about the item, characteristics of the item, and/or similar items, such as information indicating grasping strategies that have been successful or unsuccessful for such items in the past. Entries or information in the database may be originated and/or updated based on human input for grasping strategies, determined characteristics of a particular item, and/or machine learning related to grasping attempts of other items sharing characteristics with the particular item. It should be noted that although some end of arm tools (720-760) are included in FIG. 7, any suitable end of arm tool or end effectors may be utilized for grasping items and transferring the items according to inventory transfer embodiments described herein…”). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the instant invention to modify the method of Bradski with the teachings of Stubbs, since Stubbs teaches using a machine learning algorithm that processes historical information regarding a set of objects in order to perform repeated tasks with more accurate predictions and, hence, receive more accurate results. Regarding Claim 22, Bradski teaches a robotics system, comprising: a robotic arm configured to operate in a defined workspace (See at least Fig 2A, Fig 2B, Fig 4 section 402, Para [0084] “As shown by block 402 of FIG. 4, method 400 may initially involve determining a virtual environment representing a physical environment containing a plurality of physical objects.
The physical environment may be any of the types of environments containing movable objects described above, such as a bin of heterogeneous parts, a stack of boxes, or a conveyer belt containing moving objects…”); a processor operatively coupled to the robotic arm, in operation (See at least Para [0045] “Many or all of the functions of robotic device 100 could be controlled by control system 140 . Control system 140 may include at least one processor 142 (which could include at least one microprocessor) that executes instructions 144 stored in a non-transitory computer readable medium, such as the memory 146 . The control system 140 may also represent a plurality of computing devices that may serve to control individual components or subsystems of the robotic device 100 in a distributed fashion.”), the processor is configured to: receive a set of objects within the workspace of the robotic arm (See at least Para [0084] “As shown by block 402 of FIG. 4, method 400 may initially involve determining a virtual environment representing a physical environment containing a plurality of physical objects. The physical environment may be any of the types of environments containing movable objects described above, such as a bin of heterogeneous parts, a stack of boxes, or a conveyer belt containing moving objects. The virtual environment may contain a 2D and/or 3D representation of the physical environment using any of the types of models or virtual representations described above. In particular, the virtual environment may contain information relating to geometric shapes and/or sizes of objects.”); collect information regarding the set of objects, the information including characteristics of individual objects in the set of objects (See at least Para [0084] “As shown by block 402 of FIG. 4, method 400 may initially involve determining a virtual environment representing a physical environment containing a plurality of physical objects. 
The physical environment may be any of the types of environments containing movable objects described above, such as a bin of heterogeneous parts, a stack of boxes, or a conveyer belt containing moving objects. The virtual environment may contain a 2D and/or 3D representation of the physical environment using any of the types of models or virtual representations described above. In particular, the virtual environment may contain information relating to geometric shapes and/or sizes of objects.”); determine, based on the collected information and the characteristics of the individual objects, a first optimal order in which to operate on the set of objects, the first optimal order specifying at least a first pick of the first optimal order and a second pick of the first optimal order to be performed after the first pick of the first optimal order (See at least Para [0048] “Within examples, a virtual environment including a model of the objects in 2D and/or 3D may be determined and used to develop a plan or strategy for picking up the boxes. In some examples, the robot may use one or more sensors to scan an environment containing objects, as shown in FIG. 2B. As the robotic arm 102 moves, a sensor 106 on the arm may capture sensor data about the stack of boxes 220 in order to determine shapes and/or positions of individual boxes. In additional examples, a larger picture of a 3D environment may be built up by integrating information from individual (e.g., 3D) scans. Sensors performing these scans may be placed in fixed positions, on a robotic arm, and/or in other locations. According to various embodiments, scans may be constructed and used in accordance with any or all of a number of different techniques.”, Para [0026] “The simulator may plan a first action for the robot based on the inputted model of the environment and then the robot arm may perform the action. 
After every action by the robot (e.g., unloading a box),…”, Para [0021] “Example embodiments provide for systems and methods that allow a robotic device to move objects within an environment, such as to load or unload boxes or to construct or deconstruct pallets (e.g., from a container or truck bed). Initially, a 2D or 3D virtual environment or model may be constructed based on sensor data from a physical environment containing physical objects (e.g., a stack of boxes). From that model, a control system may determine a plan for moving some of the objects (e.g., loading or unloading boxes) using a robotic device. For instance, the plan may identify a first box or several boxes for the robotic device to move to a drop-off location using a robotic arm with a gripper.”, discloses first box or several boxes for the robotic device to move to a drop-off location which is construed as after first pick, second pick is performed and so on); wherein, to determine the first optimal order, the processor is configured to: identify a plurality of candidate picks of the individual objects in the set of objects, wherein each candidate pick specifies an action of picking up a specific object at a specific location (See at least Fig 4 Section 404 “Develop a plan, based on the virtual environment, to cause a robotic manipulator to move one or more of the physical objects in the physical environment”, Para [0021] “Example embodiments provide for systems and methods that allow a robotic device to move objects within an environment, such as to load or unload boxes or to construct or deconstruct pallets (e.g., from a container or truck bed). Initially, a 2D or 3D virtual environment or model may be constructed based on sensor data from a physical environment containing physical objects (e.g., a stack of boxes). From that model, a control system may determine a plan for moving some of the objects (e.g., loading or unloading boxes) using a robotic device. 
For instance, the plan may identify a first box or several boxes for the robotic device to move to a drop-off location using a robotic arm with a gripper.”, Para [0093] “In further examples, one or more sensor scans may be triggered by an unexpected measurement. For instance, according to the virtual environment, a robot arm may be commanded to pick and place a box with expected dimensions from a particular location…” )… wherein determining the first optimal order in which to operate on the set of objects includes determining a first optimal order (See at least Para [0048] “Within examples, a virtual environment including a model of the objects in 2D and/or 3D may be determined and used to develop a plan or strategy for picking up the boxes. In some examples, the robot may use one or more sensors to scan an environment containing objects, as shown in FIG. 2B. As the robotic arm 102 moves, a sensor 106 on the arm may capture sensor data about the stack of boxes 220 in order to determine shapes and/or positions of individual boxes. In additional examples, a larger picture of a 3D environment may be built up by integrating information from individual (e.g., 3D) scans. Sensors performing these scans may be placed in fixed positions, on a robotic arm, and/or in other locations. According to various embodiments, scans may be constructed and used in accordance with any or all of a number of different techniques.”, Para [0026] “The simulator may plan a first action for the robot based on the inputted model of the environment and then the robot arm may perform the action. After every action by the robot (e.g., unloading a box),…”, Para [0021] “Example embodiments provide for systems and methods that allow a robotic device to move objects within an environment, such as to load or unload boxes or to construct or deconstruct pallets (e.g., from a container or truck bed). 
Initially, a 2D or 3D virtual environment or model may be constructed based on sensor data from a physical environment containing physical objects (e.g., a stack of boxes). From that model, a control system may determine a plan for moving some of the objects (e.g., loading or unloading boxes) using a robotic device. For instance, the plan may identify a first box or several boxes for the robotic device to move to a drop-off location using a robotic arm with a gripper.”)… wherein determining the first optimal order in which to operate on the set of objects (See at least Para [0048] “Within examples, a virtual environment including a model of the objects in 2D and/or 3D may be determined and used to develop a plan or strategy for picking up the boxes. In some examples, the robot may use one or more sensors to scan an environment containing objects, as shown in FIG. 2B. As the robotic arm 102 moves, a sensor 106 on the arm may capture sensor data about the stack of boxes 220 in order to determine shapes and/or positions of individual boxes. In additional examples, a larger picture of a 3D environment may be built up by integrating information from individual (e.g., 3D) scans. Sensors performing these scans may be placed in fixed positions, on a robotic arm, and/or in other locations. According to various embodiments, scans may be constructed and used in accordance with any or all of a number of different techniques.”, Para [0026] “The simulator may plan a first action for the robot based on the inputted model of the environment and then the robot arm may perform the action. After every action by the robot (e.g., unloading a box),…”, Para [0021] “Example embodiments provide for systems and methods that allow a robotic device to move objects within an environment, such as to load or unload boxes or to construct or deconstruct pallets (e.g., from a container or truck bed). 
Initially, a 2D or 3D virtual environment or model may be constructed based on sensor data from a physical environment containing physical objects (e.g., a stack of boxes). From that model, a control system may determine a plan for moving some of the objects (e.g., loading or unloading boxes) using a robotic device. For instance, the plan may identify a first box or several boxes for the robotic device to move to a drop-off location using a robotic arm with a gripper.”)… cause the robotic arm to perform the first pick of the first optimal order (See at least Para [0021] “Example embodiments provide for systems and methods that allow a robotic device to move objects within an environment, such as to load or unload boxes or to construct or deconstruct pallets (e.g., from a container or truck bed). Initially, a 2D or 3D virtual environment or model may be constructed based on sensor data from a physical environment containing physical objects (e.g., a stack of boxes). From that model, a control system may determine a plan for moving some of the objects (e.g., loading or unloading boxes) using a robotic device. For instance, the plan may identify a first box or several boxes for the robotic device to move to a drop-off location using a robotic arm with a gripper.”, Fig 4 section 406); after performing the first pick of the first optimal order and before performing the second pick of the first optimal order, collect additional information regarding the set of objects, the additional information including updated characteristics of individual objects in the set of objects (See at least Para [0022] “In some cases, after the process has begun and one or more objects have been moved, the environment may have changed such that the original model may no longer be valid. For instance, new information may be learned about objects underneath a box after the box has been moved. 
In other examples, one or more objects may have changed position in an unpredicted manner after a certain object was moved by the robot. In such cases, actions dictated by the current plan based on the current model may no longer be feasible or desirable. Thus, in some examples, part or all of the environment may be scanned again and a revised 3D model may be reconstructed. From the newly reconstructed model, the plan for moving objects may be modified accordingly in order to take into account changes in the physical environment that were unaccounted for in the original model and plan.”, Para [0091] “As shown by block 408 of FIG. 4, method 400 may additionally involve receiving updated sensor data after the robotic manipulator performs the first action. The updated sensor data may be received from any of the types of sensors using any of the scanning methods previously described and may cover any portion of the physical environment or the entire environment. Additionally, the updated sensor data may include data captured from different viewpoints using one or more movable sensors, such as sensors mounted on a robotic arm.”); determine, based on the additional collected information and the updated characteristics of the individual objects, a second optimal order in which to operate on the set of objects, the second optimal order specifying at least a first pick of the second optimal order to be performed after the first pick of the first optimal order, wherein the second optimal order is different than a remaining portion of the first optimal order (See at least Para [0022] “In some cases, after the process has begun and one or more objects have been moved, the environment may have changed such that the original model may no longer be valid. For instance, new information may be learned about objects underneath a box after the box has been moved. 
In other examples, one or more objects may have changed position in an unpredicted manner after a certain object was moved by the robot. In such cases, actions dictated by the current plan based on the current model may no longer be feasible or desirable. Thus, in some examples, part or all of the environment may be scanned again and a revised 3D model may be reconstructed. From the newly reconstructed model, the plan for moving objects may be modified accordingly in order to take into account changes in the physical environment that were unaccounted for in the original model and plan.”, Para [0091] “As shown by block 408 of FIG. 4, method 400 may additionally involve receiving updated sensor data after the robotic manipulator performs the first action. The updated sensor data may be received from any of the types of sensors using any of the scanning methods previously described and may cover any portion of the physical environment or the entire environment. Additionally, the updated sensor data may include data captured from different viewpoints using one or more movable sensors, such as sensors mounted on a robotic arm.”, Para [0092], Fig 4 section 408, 410, 412, Para [0100] “…In other examples, additional modifications to the plan may result in one or more additional different robot actions. In further examples, additional sensor scans may performed after removing box 322, possibly resulting in additional modifications to the plan based on additional updated sensor data.”, Para [0088] “…In additional examples, an ordering of robot actions may be chosen in order to increase feasibility of current or future objects movements …”); and cause the robotic arm to perform the first pick of the second optimal order (See at least Fig 4 section 414 (Cause the robotic manipulator to perform a second action according to the modified plan)). 
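As technical context for the claim 22 mapping above, the pick, rescan, and replan cycle that the rejection reads onto Bradski (a first optimal order, a first pick, updated sensor data, then a second optimal order) can be sketched minimally. Every function and the height-based scoring rule below are assumptions made for illustration, not algorithms disclosed in any cited reference:

```python
# Hypothetical sketch of a pick/rescan/replan cycle; not code from Bradski.

def rank_picks(observed):
    # "First optimal order": score each candidate pick and sort best-first.
    # The score here (exposed height) is an assumed placeholder heuristic.
    return sorted(observed, key=lambda obj: obj["height"], reverse=True)

def execute_and_rescan(observed, pick):
    # Performing a pick can expose previously occluded objects, so the
    # environment model is rebuilt after every action.
    remaining = [o for o in observed if o["id"] != pick["id"]]
    return remaining + pick.get("reveals", [])

objects = [
    {"id": "A", "height": 3, "reveals": [{"id": "C", "height": 5}]},
    {"id": "B", "height": 2},
]

first_order = rank_picks(objects)
first_pick = first_order[0]                          # "A" ranks first
observed = execute_and_rescan(objects, first_pick)   # rescan reveals "C"
second_order = rank_picks(observed)                  # replanned order differs
executed = [first_pick["id"], second_order[0]["id"]]
```

The point of the sketch is that the second ranking can differ from the remainder of the first because the pick itself reveals previously occluded objects, mirroring the behavior described in Bradski's Paras [0022] and [0091].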
However, Bradski does not explicitly spell out … identify, based on the characteristics of the individual objects, a plurality of groups of objects, wherein each group is associated with a respective tool capable of manipulating each of the objects in the group; … with a specific tool; … in which to use each of a plurality of different tools coupled to a distal end of the robotic arm based on a number of tool changes required to execute the first optimal order in which to use each of the plurality of different tools; … includes scoring and ranking the candidate picks; and wherein the scoring and ranking are based on a likelihood that each of the candidate picks will succeed; … Wade-McCue teaches … identifying, based on the characteristics of the individual objects, a plurality of groups of objects, wherein each group is associated with a respective tool capable of manipulating each of the objects in the group (See at least Page 4 Col 2 “2) Grasping Class: By combining the grasp synthesis with tool selection, it can be said that each item belongs to a grasping class, which defines the entire approach that the grasping system takes for any given item, including which grasp synthesis algorithm and which tool is used when manipulating it. Each item belongs primarily to one of the five grasping class listed below: • Surface-normals, suction • RGB-D-Centroid, suction • RGB-D-Centroid, grip • RGB-centroid, suction • RGB-centroid, grip”); … … based on a number of tool changes required to execute the first optimal order in which to use each of the plurality of different tools (Fig. 1. Top: Model of Wrist including (A) Suction tool (B) Tool change motor and (C) Parallel Jaw Gripper, Bottom: The end-effector in use during the Amazon Robotics Challenge (left) picking an item with the suction tool and another one with the gripper (right)., Page 4 Col 2 Para 4 “There are typically two options for manipulators, one being an articulated arm [23] or a Cartesian based system [24]. 
When utilizing an articulated arm, often a hybrid end-effector is required where both attachment types are integrated into the one end-effector [25] or a tool change mechanism can be used similar to a CNC Machine.”)… Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Bradski with the teachings of Wade-McCue and include the feature of identifying, based on the characteristics of the individual objects, a plurality of groups of objects, wherein each group is associated with a respective tool capable of manipulating each of the objects in the group, with the tool changed accordingly, thereby improving performance and increasing redundancy (See at least Page 5 Col 2 Para “Where modularity aided in the design process, the integration of complimentary systems such as the sucker and gripper greatly improved performance and increased redundancy.”). Stubbs teaches … with a specific tool (See at least Para [0069] FIG. 7 illustrates an example mobile manipulator unit and various end of arm tools that may be utilized by the mobile manipulator unit to transfer inventory within the inventory management system, in accordance with at least one embodiment. FIG. 7 includes an example mobile manipulator unit 700 with associated robotic arms 710 and a plurality of end of arm tools 720-760 for grasping inventory for transfer as described herein. In embodiments, each robotic arm 710 may be configured to utilize a particular end of arm tool, ”, Fig 7); … … in which to use each of a plurality of different tools coupled to a distal end of the robotic arm based on a number of tool changes required to execute the first optimal order in which to use each of the plurality of different tools (See at least Para [0069] FIG.
7 illustrates an example mobile manipulator unit and various end of arm tools that may be utilized by the mobile manipulator unit to transfer inventory within the inventory management system, in accordance with at least one embodiment. FIG. 7 includes an example mobile manipulator unit 700 with associated robotic arms 710 and a plurality of end of arm tools 720-760 for grasping inventory for transfer as described herein. In embodiments, each robotic arm 710 may be configured to utilize a particular end of arm tool, a mix of end of arm tools, or a plurality of end of arm tools 720-760. The end of arm tools may allow the mobile manipulator unit to grasp inventory items that are stored in containers of the inventory holders 230 or to grasp inventory that is loosely stored without packaging in inventory holders 230. The robotic arms may facilitate the movement of inventory items and other features of the inventory management system 210 among and between components of the inventory management system 210. The end of arm tools illustrated in FIG. 7 include a mechanical pincher 720, an adaptive gripper tool 740, and a vacuum tool 760. In embodiments, various combinations of the end of arm tools 720-760 may be utilized to grasp and transfer inventory. In an embodiment, a particular end of arm tool may be combined with another end of arm tool to be utilized on a single robotic arm. For example, the mechanical pincher 720 and vacuum tool 760 may be configured to work in combination to grasp and transfer inventory.”, discloses different types of end of arm tools to grasp different items, and tools that can be combined, which is construed as tool changes according to the type of item, Fig 7); …includes scoring and ranking the candidate picks (See at least Para [0070] “For example, a target item, or characteristics thereof, may be identified, such as by optical or other sensors, in order to determine a grasping strategy for the item.
The grasping strategy may be generated by the management module 215 based at least in part upon a database containing information about the item, characteristics of the item, and/or similar items, such as information indicating grasping strategies that have been successful or unsuccessful for such items in the past. Entries or information in the database may be originated and/or updated based on human input for grasping strategies, determined characteristics of a particular item, and/or machine learning related to grasping attempts of other items sharing characteristics with the particular item. It should be noted that although some end of arm tools (720-760) are included in FIG. 7, any suitable end of arm tool or end effectors may be utilized for grasping items and transferring the items according to inventory transfer embodiments described herein…”, discloses information indicating grasping strategies that have been successful or unsuccessful for such items in the past, which is construed as scoring and ranking the candidate picks); and … Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Bradski with the teachings of Stubbs and include the feature of determining the first optimal order by identifying a plurality of candidate picks of the individual objects in the set of objects, wherein each candidate pick specifies an action of picking up a specific object at a specific location with a specific tool and scoring and ranking the candidate picks, thereby providing flexibility, precision, and accuracy in picking up and placing different types of items from various locations. Holz teaches wherein the scoring and ranking are based on a likelihood that each of the candidate picks will succeed (See at least Fig 2 (Success State), Page 1463 IV.
EXPERIMENTS AND RESULTS In order to assess robustness and performance of our approach, we conducted a series of experiments of both individual components and the integrated platform. As evaluation criteria, we focus on the success rates and the execution times of the individual components and the overall cycle times (for picking an object from the pallet), Fig 6 (Success Rate)); … Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the instant invention to modify the system of Bradski with the teachings of Holz, since Holz teaches that the scoring and ranking are based on a likelihood that each of the candidate picks will succeed which will lead to precise picking selection. Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Bradski et al. (US 2016/0089791 A1) (Hereinafter Bradski) in view of Stubbs et al. (US 2017/0166399 A1) (Hereinafter Stubbs), Holz et al. (D. Holz, A. Topalidou-Kyniazopoulou, J. Stückler and S. Behnke, "Real-time object detection, localization and verification for fast robotic depalletizing," 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 2015, pp. 1459-1466) (Hereinafter Holz), Wade-McCue et al. (Design of a Multi-Modal End-Effector and Grasping System: How Integrated Design helped win the Amazon Robotics Challenge) (Hereinafter Wade-McCue), and further in view of Stallman (US 10360531 B1) (Hereinafter Stallman). Regarding claim 6, Bradski has all the elements of claim 1. However, Bradski does not explicitly spell out the method of claim 1 wherein the characteristics of the individual objects in the set of objects includes rigidities and porosities of the individual objects. 
Stallman discloses the method of claim 1 wherein the characteristics of the individual objects in the set of objects includes rigidities and porosities of the individual objects (See at least Col 4 Lines 36-59 “The robotic arm 115 may include or be in communication with one or more sensors (of similar or varying types) arranged to detect the item while the item is being targeted by the staging environment. The sensors may communicate detected attributes, such as weight, geometric characteristics (e.g., size, position, or orientation), electrical conductivity, magnetic properties, surface characteristics (e.g., how slippery or porous the item is), deformability, and/or structural integrity of the item.”).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Stallman with Bradski, as the combination would provide a more efficient picking strategy based on identified characteristics of one or more objects.

25. Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Bradski et al. (US 2016/0089791 A1) (Hereinafter Bradski) in view of Stubbs et al. (US 2017/0166399 A1) (Hereinafter Stubbs), Holz et al. (D. Holz, A. Topalidou-Kyniazopoulou, J. Stückler and S. Behnke, "Real-time object detection, localization and verification for fast robotic depalletizing," 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 2015, pp. 1459-1466) (Hereinafter Holz), Wade-McCue et al. (Design of a Multi-Modal End-Effector and Grasping System: How Integrated Design helped win the Amazon Robotics Challenge) (Hereinafter Wade-McCue), and further in view of Chitta et al. (US 2017/0246744 A1) (Hereinafter Chitta).

26. Regarding claim 12, modified Bradski has all the elements of claim 1.
However, Bradski does not explicitly disclose the method of claim 1 wherein the scoring and ranking include assigning a greater likelihood of success to picks that include picking up an object near a center of a surface of the object.

Holz discloses the method of claim 1 wherein the scoring and ranking include assigning a greater likelihood of success (See at least Fig 2 (Success State), Page 1463 IV. EXPERIMENTS AND RESULTS In order to assess robustness and performance of our approach, we conducted a series of experiments of both individual components and the integrated platform. As evaluation criteria, we focus on the success rates and the execution times of the individual components and the overall cycle times (for picking an object from the pallet), Fig 6 (Success Rate))…

Chitta discloses … to picks that include picking up an object near a center of a surface of the object (See at least Fig 10, Para [0025] “According to some implementations, the system learns the appearance of the picked face of the box the first time it picks a box of that appearance and then, on subsequent picks, attempts to identify other boxes with a matching appearance in the pallet. This allows the system to skip the exploratory pick for a box whose appearance it has learned, and instead determine the size and location of the center of the box from the previously learned appearance of the box.”, Para [0045] “Referring to the flowchart in FIG. 3, the system now places the box back on the pallet, and repositions the Robot 28 with attached Gripper 30 to now pick the Box 12 at its Center 56, using an orientation that will best secure the box with the gripper (e.g., orienting the Gripper 30 along the longer edge of the box as shown in FIG. 11)…”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the instant invention to modify the method of Bradski and Holz with the teachings of Chitta, since Holz's likelihood-of-success teaching enables more accurate pick selection and Chitta's teaching of picking an object up near a center of a surface of the object ensures secure grasping of said object, thereby providing a more efficient and dynamic object handling system.

27. Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Bradski et al. (US 2016/0089791 A1) (Hereinafter Bradski) in view of Stubbs et al. (US 2017/0166399 A1) (Hereinafter Stubbs), Holz et al. (D. Holz, A. Topalidou-Kyniazopoulou, J. Stückler and S. Behnke, "Real-time object detection, localization and verification for fast robotic depalletizing," 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 2015, pp. 1459-1466) (Hereinafter Holz), Wade-McCue et al. (Design of a Multi-Modal End-Effector and Grasping System: How Integrated Design helped win the Amazon Robotics Challenge) (Hereinafter Wade-McCue), and further in view of Shin et al. (H. Shin, H. Hwang, H. Yoon and S. Lee, "Integration of deep learning-based object recognition and robot manipulator for grasping objects," 2019 16th International Conference on Ubiquitous Robots (UR), Jeju, Korea (South), 2019, pp. 174-178) (Hereinafter Shin).

28. Regarding claim 13, modified Bradski has all the elements of claim 1. However, modified Bradski does not explicitly disclose the method of claim 1 wherein the scoring and ranking include assigning a greater likelihood of success to picks that include picking up an object near a center of gravity of the object.

Holz discloses the method of claim 1 wherein the scoring and ranking include assigning a greater likelihood of success (See at least Fig 2 (Success State), Page 1463 IV.
EXPERIMENTS AND RESULTS In order to assess robustness and performance of our approach, we conducted a series of experiments of both individual components and the integrated platform. As evaluation criteria, we focus on the success rates and the execution times of the individual components and the overall cycle times (for picking an object from the pallet), Fig 6 (Success Rate))…

Shin discloses … to picks that include picking up an object near a center of gravity of the object (See at least Page 176 Col 1 “After applying the Mask R-CNN on every frame, we define positions of objects with the center of gravity in the segmented results. Assuming that all objects existed on a two-dimensional plane as a condition of experiment. Straight lines in every 30 degrees are drawn from the center point for grasping directions of objects.”, Fig 4).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the instant invention to modify the method of Bradski and Holz with the teachings of Shin, since Holz's likelihood-of-success teaching enables more accurate pick selection and Shin's teaching of picking up an object near a center of gravity of the object provides a more accurate grasp of objects.

29. Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Bradski et al. (US 2016/0089791 A1) (Hereinafter Bradski) in view of Stubbs et al. (US 2017/0166399 A1) (Hereinafter Stubbs), Holz et al. (D. Holz, A. Topalidou-Kyniazopoulou, J. Stückler and S. Behnke, "Real-time object detection, localization and verification for fast robotic depalletizing," 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 2015, pp. 1459-1466) (Hereinafter Holz), Wade-McCue et al.
(Design of a Multi-Modal End-Effector and Grasping System: How Integrated Design helped win the Amazon Robotics Challenge) (Hereinafter Wade-McCue), and further in view of Ellekilde (L.-P. Ellekilde et al., "Applying a learning framework for improving success rates in industrial bin picking," 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vilamoura-Algarve, Portugal, 2012, pp. 1637-1643) (Hereinafter Ellekilde).

30. Regarding claim 14, modified Bradski has all the elements of claim 1. However, modified Bradski does not explicitly disclose the method of claim 1 wherein the scoring and ranking include assigning a lesser likelihood of success to picks that include picking up an object at a location that overlaps with other objects.

Ellekilde discloses the method of claim 1 wherein the scoring and ranking include assigning a lesser likelihood of success to picks that include picking up an object at a location that overlaps with other objects (See at least Page 1637 Col 2 Last Para to Page 1638 Col 1 Para 1, “Last but not least, cycle time is influenced by the average grasp success probability. Grasp failures may occur due to several reasons including conceptually incorrect or imprecise pose estimates, hindering placements of neighboring objects and the chosen grasp strategy.”, which discloses that grasp failures may occur due to hindering placements of neighboring objects, construed as assigning, in the scoring and ranking, a lesser likelihood of success to picks that include picking up an object at a location that overlaps with other objects).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the instant invention to modify the method of Bradski and Holz with the teachings of Ellekilde, which include assigning, in the scoring and ranking, a lesser likelihood of success to picks that include picking up an object at a location that overlaps with other objects, thereby providing a more accurate object handling system.

31. Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Bradski et al. (US 2016/0089791 A1) (Hereinafter Bradski), in view of Stubbs et al. (US 2017/0166399 A1) (Hereinafter Stubbs), Holz et al. (D. Holz, A. Topalidou-Kyniazopoulou, J. Stückler and S. Behnke, "Real-time object detection, localization and verification for fast robotic depalletizing," 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 2015, pp. 1459-1466) (Hereinafter Holz), Wade-McCue et al. (Design of a Multi-Modal End-Effector and Grasping System: How Integrated Design helped win the Amazon Robotics Challenge) (Hereinafter Wade-McCue), and further in view of Yap et al. (US 2020/0017317 A1) (Hereinafter Yap).

32. Regarding claim 15, modified Bradski has all the elements of claim 1. However, modified Bradski does not explicitly disclose the method of claim 1 wherein the scoring and ranking include assigning a lesser likelihood of success to picks that include picking up an object at a location that includes identifying information.
Yap discloses the method of claim 1 wherein the scoring and ranking include assigning a lesser likelihood of success to picks that include picking up an object at a location that includes identifying information (See at least Para [0051] “In some embodiments, the method further comprises in accordance with a determination that the end effector coming into contact with the object at a region causes the barcode on the object to be occluded, a probability of zero is assigned to the region.”, which discloses that a probability of zero is assigned to a region when the end effector contacting the object at that region would occlude the barcode, construed as assigning a lesser likelihood of success to picks that include picking up an object at a location that includes identifying information).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the instant invention to modify the method of Bradski and Holz with the teachings of Yap, since Yap teaches assigning, in the scoring and ranking, a lesser likelihood of success to picks that include picking up an object at a location that includes identifying information, so that the barcode on the object is not occluded and remains easily readable in case scanning is performed.

33. Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Bradski et al. (US 2016/0089791 A1) (Hereinafter Bradski) in view of Stubbs et al. (US 2017/0166399 A1) (Hereinafter Stubbs), Holz et al. (D. Holz, A. Topalidou-Kyniazopoulou, J. Stückler and S. Behnke, "Real-time object detection, localization and verification for fast robotic depalletizing," 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 2015, pp. 1459-1466) (Hereinafter Holz), Wade-McCue et al.
(Design of a Multi-Modal End-Effector and Grasping System: How Integrated Design helped win the Amazon Robotics Challenge) (Hereinafter Wade-McCue), and further in view of Romano et al. (US 2019/0217471 A1) (Hereinafter Romano).

34. Regarding claim 21, modified Bradski teaches all the elements of claim 1. However, Bradski does not explicitly disclose the method of claim 1, wherein determining the first optimal order in which to operate on the set of objects is based on a number of tool changes needed to execute the plurality of candidate picks.

Romano teaches the method of claim 1, wherein determining the first optimal order in which to operate on the set of objects is based on a number of tool changes needed to execute the plurality of candidate picks (See at least Fig 1, Fig 2, Fig 6, Fig 14, Para [0081] “The system may further seek to identify all objects in a bin 260, may associate each with an optimal vacuum cup, and may then seek to grasp, one at a time, each of the objects associated with a common vacuum cup prior to changing the vacuum cup on the end effector. In each of these embodiments, the system itself identifies the need to change acquisition units, and then changes acquisition units by itself in the normal course of operation.”, Para [0077] “FIGS. 32 and 33 show an end effector 140 in accordance with a further embodiment of the invention that may be used interchangeably with the acquisition units 72, 74, 76 discussed above to provide accommodation of the end effector…”, Para [0080] “…In certain embodiments, the perception unit 268 may sufficiently identify a next object, and if the vacuum cup on the end effector needs to be changed, the system may exchange a current vacuum cup to a desired one that is known to be a better acquisition unit for grasping the identified object in bin 260.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the method of Bradski with the teachings of Romano, as the combination would provide a more efficient picking strategy based on identified characteristics of one or more objects.

Conclusion

35. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Shekhawat et al. (US 10399778 B1) teaches an identification and planning system for picking inventory items.

36. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

37. Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHAHEDA HOQUE whose telephone number is (571)270-5310. The examiner can normally be reached Monday-Friday 8:00 am-5:00 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ramon Mercado, can be reached at 571-270-5744. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SHAHEDA HOQUE/
Examiner, Art Unit 3658

/Ramon A. Mercado/
Supervisory Patent Examiner, Art Unit 3658
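Stepping back from the legal record: the pick-planning scheme recited across claim 1 and dependent claims 12-15 and 21 (score each candidate pick of a specific object, at a specific location, with a specific tool by its likelihood of success; rank the picks; and choose an order that limits tool changes) can be summarized with a short sketch. The names, weights, and additive scoring model below are purely hypothetical illustrations; none of them come from the application or the cited references.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CandidatePick:
    """One candidate pick: a specific object at a specific location with a specific tool."""
    object_id: str
    tool: str
    near_surface_center: bool      # claim 12: pick near a center of a surface
    near_center_of_gravity: bool   # claim 13: pick near the center of gravity
    overlaps_other_objects: bool   # claim 14: pick location overlaps other objects
    covers_identifying_info: bool  # claim 15: pick location covers a barcode/label

def score(pick: CandidatePick) -> float:
    """Hypothetical likelihood-of-success score; all weights are illustrative only."""
    s = 0.5
    if pick.near_surface_center:
        s += 0.2
    if pick.near_center_of_gravity:
        s += 0.2
    if pick.overlaps_other_objects:
        s -= 0.3
    if pick.covers_identifying_info:
        s -= 0.5  # cf. Yap's zero probability for picks that occlude the barcode
    return max(0.0, min(1.0, s))

def pick_order(picks: list[CandidatePick]) -> list[CandidatePick]:
    """Rank picks by score, then emit each tool's picks together so the
    end effector is swapped as few times as possible (cf. claim 21 / Romano)."""
    ranked = sorted(picks, key=score, reverse=True)
    tool_sequence: list[str] = []
    for p in ranked:  # tools ordered by their best-scoring pick
        if p.tool not in tool_sequence:
            tool_sequence.append(p.tool)
    return [p for tool in tool_sequence for p in ranked if p.tool == tool]
```

Under this toy model, several picks sharing a tool are executed consecutively, so one tool change can replace several; this is the grouping behavior the examiner maps to Romano's "common vacuum cup" passage.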

Prosecution Timeline

Jul 16, 2020
Application Filed
Feb 21, 2024
Non-Final Rejection — §103
Aug 28, 2024
Response Filed
Oct 27, 2024
Final Rejection — §103
Apr 25, 2025
Request for Continued Examination
Apr 30, 2025
Response after Non-Final Action
May 21, 2025
Non-Final Rejection — §103
Oct 29, 2025
Response Filed
Jan 07, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12569992
AUTOMATIC DETERMINATION OF ROBOT SETTLING STATES
2y 5m to grant Granted Mar 10, 2026
Patent 12539597
ROBOT SYSTEM, AND CONTROL METHOD FOR SAME
2y 5m to grant Granted Feb 03, 2026
Patent 12514143
AGRICULTURAL MACHINE, AGRICULTURAL WORK ASSISTANCE APPARATUS, AND AGRICULTURAL WORK ASSISTANCE SYSTEM
2y 5m to grant Granted Jan 06, 2026
Patent 12485538
METHOD AND SYSTEM FOR DETERMINING A WORKPIECE LOADING LOCATION IN A CNC MACHINE WITH A ROBOTIC ARM
2y 5m to grant Granted Dec 02, 2025
Patent 12479107
METHOD AND AN ASSEMBLY UNIT FOR PERFORMING ASSEMBLING OPERATIONS
2y 5m to grant Granted Nov 25, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
43%
Grant Probability
81%
With Interview (+37.9%)
3y 1m
Median Time to Grant
High
PTA Risk
Based on 58 resolved cases by this examiner. Grant probability derived from career allow rate.
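The headline projections above follow from simple arithmetic on the stated career data. A minimal sketch, assuming the interview lift is applied additively to the baseline allow rate (an assumption, though it reproduces the displayed 43% and 81% figures):

```python
GRANTED, RESOLVED = 25, 58  # career record shown above: 25 granted / 58 resolved
INTERVIEW_LIFT = 0.379      # stated +37.9% lift for cases with an interview

allow_rate = GRANTED / RESOLVED               # baseline grant probability
with_interview = allow_rate + INTERVIEW_LIFT  # assumed additive model

print(f"baseline:       {allow_rate:.0%}")       # 43%
print(f"with interview: {with_interview:.0%}")   # 81%
```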
