Prosecution Insights
Last updated: April 19, 2026
Application No. 17/978,610

CONTROLLING MULTIPLE ROBOTS TO COOPERATIVELY PICK AND PLACE ITEMS

Final Rejection (§103, §DP)
Filed: Nov 01, 2022
Examiner: ESTEVEZ, DAIRON
Art Unit: 3656
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Dexterity Inc.
OA Round: 4 (Final)
Grant Probability: 67% (Favorable)
OA Rounds: 5-6
To Grant: 2y 8m
With Interview: 51%

Examiner Intelligence

Career Allow Rate: 67% (43 granted / 64 resolved; +15.2% vs TC avg; above average)
Interview Lift: -15.9% (minimal; based on resolved cases with interview)
Avg Prosecution: 2y 8m (typical timeline); 28 applications currently pending
Total Applications: 92 (career history, across all art units)

Statute-Specific Performance

§101: 6.0% (-34.0% vs TC avg)
§103: 54.3% (+14.3% vs TC avg)
§102: 18.9% (-21.1% vs TC avg)
§112: 17.9% (-22.1% vs TC avg)

Tech Center averages are estimates. Based on career data from 64 resolved cases.

Office Action

§103 §DP
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

The amendment filed 10/23/2025 has been entered. Claims 1-2, 5-14, 16-17, and 19-21 remain pending in the application. Applicant’s amendments to the claims have overcome each and every rejection under 35 U.S.C. 112(b) previously set forth in the Non-Final Office Action mailed 7/9/2025.

Response to Arguments

Applicant argues that the amended limitations involving a determination that “the target object is too large or too heavy to be manipulated singly by either the first robotic arm or the second robotic arm and concluding based on said determination that both the first robotic arm and the second robotic arm will be required to work cooperatively to pick and place the target object” overcome the rejections of claims 1, 17, and 20. Applicant’s arguments with respect to independent claims 1, 17, and 20 and the associated dependent claims (excluding claim 21), however, have been considered but are moot because the arguments do not apply to the combination of references and/or rationale being used in the current rejection.

Regarding claim 21 specifically, Applicant argues that the work schedule algorithm of Nunes “is not the same as a robot itself receiving a request to cooperate with another robot, determining it is a higher priority task, and delaying working cooperatively with the requesting robot until after the higher priority task has been completed”. Broadly stating that the work schedule algorithm is “not the same” as the claim language is not persuasive; a more detailed interpretation of the Nunes reference is required to clarify how the claim limitation is being met.
To decompose the claim elements: claim 21 establishes that the second robotic arm receives a request to perform a cooperative task, determines that another task has higher priority than the cooperative task, and delays working cooperatively with the first robotic arm until after the higher priority task has been completed. The references applied to claim 1 already establish a system wherein a first robotic arm requests assistance from a second robotic arm to perform a collaborative task, and commonly in these references the second robotic arm acts as a follower or performs minimal processing to agree to perform the task. Claim 21 therefore establishes, over the prior art of claim 1, that the second robotic arm performs a local determination to manage its own tasks, specifically by ranking one task as “higher priority” than another. Nunes discloses a system for “cooperative robots” and establishes in the Abstract that the work is dedicated to an auction algorithm wherein “a robot takes into account its own current commitments and an optimization objective, which is to minimize the time of completion of the last task alone or in combination with minimizing the distance traveled”. In other words, the “makespan”-based algorithm determines which tasks are of higher priority to perform over other tasks based on temporal constraints, as defined in the “Problem Definition” section. More specifically, “each robot computes the cost of performing each task according to its private schedule.” Ultimately, the cited references perform more detailed cooperative robot maneuvers, and specifically teach a second robot receiving a request to work cooperatively with a first robot.
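As a purely editorial illustration (not part of the record, and not code from Nunes or any other cited reference), the private-schedule task costing described above can be sketched as follows; the robot names, schedules, and durations are hypothetical:

```python
# Illustrative sketch of makespan-based task costing in the spirit of the
# auction algorithm attributed to Nunes above. All names and numbers are
# hypothetical; this is not any party's implementation.

def completion_time(busy_until: float, duration: float) -> float:
    """A robot's bid for a new task: when the task would finish,
    given the robot's private schedule (here, a busy-until time)."""
    return busy_until + duration

def allocate(robots: dict[str, float], duration: float) -> str:
    """Award the task to the robot whose bid grows the makespan least,
    then update that robot's private schedule."""
    winner = min(robots, key=lambda r: completion_time(robots[r], duration))
    robots[winner] = completion_time(robots[winner], duration)
    return winner

# Example: the less-committed arm prices the task lower and wins it.
robots = {"arm_1": 5.0, "arm_2": 2.0}
assert allocate(robots, 4.0) == "arm_2"  # finishes at 6.0 vs. 9.0
```

Under this reading, "priority" emerges from each robot's own cost computation rather than from a central dispatcher, which is the point the rejection draws from Nunes.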
Nunes bridges the gap by allowing the second robot to make a determination about which task to complete first, and it would have been obvious to one of ordinary skill in the art to have modified the cooperative pick and place with goal-based learning of Natarajan, Hallock, and Rodriguez Garcia with the dynamic task allocation strategy of Nunes in order to optimally allocate tasks to cooperative robots in view of time constraints and availability considerations.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969). A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement.
See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b). The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13. The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-2, 7, 10, 17, and 20 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-3, 16, and 20 of co-pending Application No. 17/978,608 in view of Rodriguez Garcia et al., hereinafter Rodriguez Garcia (Document ID: US 20210146532 A1). This is a provisional nonstatutory double patenting rejection.
Regarding claims 1, 17, and 20, co-pending Application No. 17/978,608 teaches a robotic system, a method, and a computer program product embodied in a non-transitory computer readable medium and comprising computer instructions for a first robotic arm having a first end effector (claim 1: “a first robotic arm”, and claim 12: “first end effector”); a second robotic arm having a second end effector (claim 1: “a second robotic arm” and claim 12: “second end effector”); and a control computer configured to use the first robotic arm and the second robotic arm to pick and place a plurality of objects, including by using the first robotic arm and the second robotic arm to work cooperatively to pick and place one or more of the objects, including by (claim 1: “control computer configured to use the first robotic arm and the second robotic arm and image data received from the camera to load or unload objects into or from the truck or other container… using the first robotic arm and the second robotic arm cooperatively”): determining an attribute of a target object based at least in part on image data received from one or more cameras (claim 1: “determining based on said image data received from the camera that a selected object is too large or heavy to move with a single robotic arm”, wherein the attribute of the object involves its size or weight); and determining based at least in part on the attribute that the target object is too large or too heavy to be manipulated singly by either the first robotic arm or the second robotic arm and concluding based on said determination that both the first robotic arm and the second robotic arm will be required to work cooperatively to pick and place the target object (claim 1: “determining based on said image data received from the camera that a selected object is too large or heavy to move with a single robotic arm; and in response to the determination that selected object is too large or heavy to move with a single robotic arm, 
using the first robotic arm and the second robotic arm cooperatively to each grasp the selected object and move the selected object together”); But co-pending Application No. 17/978,608 does not explicitly teach wherein the control computer is configured to use a force sensor or other tactile feedback to use the first robotic arm or the second robotic arm to grasp by feel an object at a location that is not visible to the control computer, including by using tactile perception to blindly explore the back side of the object to find a stable and collision-free pick location. Instead, Rodriguez Garcia, whose invention pertains to tactile dexterity and control, teaches in P [0023] that “vision-based systems may be susceptible to occlusion events, in which the field of view of a camera is obscured, and the ability of the camera to properly sense an object of interest is interrupted.” To remedy this issue, the system of Rodriguez Garcia is configured in P [0026] to perform “manipulation… based on planning for robot/object interactions that render interpretable tactile information for control.” In P [0029] disclosure is provided for “transforming the object from one stable placement to another”. Finally, see as well P [0068] wherein an experiment is conducted with the robotic system to move an object and “perform a pivot maneuver with well-defined inverse kinematics and that avoided collisions with the environment.” It would have been obvious to one of ordinary skill in the art before the filing date of the claimed invention to have modified the collaborative robot grippers with image detection of co-pending Application No. 17/978,608 with the tactile manipulation-based strategies of Rodriguez Garcia in order to grasp an object relying on tactile information when a vision-based system encounters an occlusion event.

Regarding claim 2, modified co-pending Application No. 17/978,608 teaches the system of claim 1, and co-pending Application No.
17/978,608 further teaches a camera positioned to generate the image data associated with a workspace in which the first robotic arm and second robotic arm are located (claim 1: “a camera”, wherein the workspace includes the truck loading environment).

Regarding claim 7, modified co-pending Application No. 17/978,608 teaches the system of claim 1, and co-pending Application No. 17/978,608 further teaches that to perform a given task to cooperatively pick and place a given item, the first robotic arm is operated in a leader mode and the second robotic arm is operated in a follower mode (claim 2: “operate the first robotic arm in a leader mode of operation and a second process to operate the second robotic arm in a follower mode of operation to cooperatively to grasp and move the one or more objects along a trajectory.”).

Regarding claim 10, modified co-pending Application No. 17/978,608 teaches the system of claim 1, and co-pending Application No. 17/978,608 further teaches that the first robotic arm and the second robotic arm are configured to independently pick and place objects when not working cooperatively to pick and place said one or more of the objects (claim 3: “the first robotic arm and the second robotic arm are configured to be used independently to load or unload objects when not being used to cooperatively to grasp and move the one or more objects along a trajectory.”).

Claim Rejections - 35 USC § 103

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action. Claims 1-2, 5-8, 10, 14, and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over Natarajan et al., hereinafter Natarajan (Document ID: US 20210229281 A1) in view of Hallock et al., hereinafter Hallock (Document ID: US 11426864 B2), and further in view of Rodriguez Garcia et al., hereinafter Rodriguez Garcia (Document ID: US 20210146532 A1).
Regarding claims 1, 17, and 20, Natarajan teaches a robotic system, a method, and a computer program product embodied in a non-transitory computer readable medium and comprising computer instructions for a first robotic arm having a first end effector (first robot 602, see also FIG. 4 wherein each robot is shown to have an end effector for various tasks); a second robotic arm having a second end effector (second robot 604, see also FIG. 4 wherein each robot is shown to have an end effector for various tasks); and a control computer (compute node 900) configured to use the first robotic arm and the second robotic arm to pick and place a plurality of objects, including by using the first robotic arm and the second robotic arm to work cooperatively to pick and place one or more of the objects (see at least FIG. 4 as well as P [0032] and P [0042]: “complex collaborative maneuvers”), including by: determining an attribute of a target object based at least in part on image data received from one or more cameras (see at least P [0048]: “complete system state 114 may be collected from one or multiple robots and environmental sensors (e.g., multiple cameras) in real-time” and the Complete System state includes an attribute of a target object “state of objects being manipulated (e.g., type, location and pose of objects being handled”); and Natarajan further teaches in P [0053] coordination learning involving a determination that both the first robotic arm and the second robotic arm will be required to work cooperatively to pick and place the target object. And Natarajan additionally teaches in P [0049] that “Based on the system state and desired goal of the collaborative task, a Coordination Algorithm (e.g., 118 of FIG. 
1C) may determine which action or interaction primitives are to be executed by each robot or a particular robot at each time step to accomplish the desired collaborative task or sub-task.” But Natarajan does not explicitly teach a determination based in part on the attribute that the target object is too large or too heavy to be manipulated singly by either the first robotic arm or the second robotic arm, thus requiring the robotic arms to work cooperatively to pick and place the object. Instead, Hallock, whose invention pertains to utilizing multiple robotic manipulators in a collaborative environment, teaches in at least Col 4, Line 50 that “sensors in the primary robot manipulator, such as strain gauges, pressure transducers, cameras, and the like can determine if the target object is too heavy to be grasped or positioned by the primary robot manipulator.” It would have been obvious to one of ordinary skill in the art before the filing date of the claimed invention to have modified the Coordination Algorithm with system state determination of interaction primitives of Natarajan with the determination that additional help is needed based on a weight of a target object of Hallock in order to provide a robotic grasp assistance system between two robotic arms, allowing for secure and accurate grasps as in Col 13, Line 6 of Hallock. Natarajan further teaches collision avoidance in at least P [0033], as well as measuring force feedback in at least P [0045], and Hallock teaches in at least Col 11, Line 32 the use of many sensor types such as contact sensors or force sensors to provide environmental information to the robotic manipulator.
But Natarajan and Hallock do not explicitly teach that the control computer is configured to use a force sensor or other tactile feedback to use the first robotic arm or the second robotic arm to grasp by feel an object at a location that is not visible to the control computer, including by using tactile perception to blindly explore the back side of the object to find a stable and collision-free pick location. Instead, Rodriguez Garcia, whose invention pertains to tactile dexterity and control, teaches in P [0023] that “vision-based systems may be susceptible to occlusion events, in which the field of view of a camera is obscured, and the ability of the camera to properly sense an object of interest is interrupted.” To remedy this issue, the system of Rodriguez Garcia is configured in P [0026] to perform “manipulation… based on planning for robot/object interactions that render interpretable tactile information for control.” In P [0029] disclosure is provided for “transforming the object from one stable placement to another”. Finally, see as well P [0068] wherein an experiment is conducted with the robotic system to move an object and “perform a pivot maneuver with well-defined inverse kinematics and that avoided collisions with the environment.” It would have been obvious to one of ordinary skill in the art before the filing date of the claimed invention to have modified the collaborative multi-functional robot grippers with force and contact sensing of Natarajan and Hallock with the tactile manipulation-based strategies of Rodriguez Garcia in order to grasp an object relying on tactile information when a vision-based system encounters an occlusion event.

Regarding claim 2, modified Natarajan teaches the system of claim 1, and Natarajan further teaches a camera positioned to generate the image data associated with a workspace in which the first robotic arm and second robotic arm are located.
(see at least P [0048] wherein a “Complete System state” is determined, and includes the “state of environment” as well as “environmental sensors (e.g., multiple cameras)”).

Regarding claim 5, modified Natarajan teaches the system of claim 1, and Natarajan further teaches that the control computer includes two or more processors distributed at two or more nodes (see at least P [0062]: “In further examples, any of the compute nodes or devices discussed with reference to the present edge computing systems and environment may be fulfilled based on the components depicted in FIGS. 9A and 9B.”).

Regarding claim 6, modified Natarajan teaches the system of claim 1, and Natarajan further teaches that the control computer implements a hierarchical planner that includes an individual robot controller for each of the first robotic arm and the second robotic arm and a higher-level controller configured to coordinate operation of the first robotic arm in cooperation with the second robotic arm to cooperatively pick and place said one or more of the objects (see at least P [0096], which identifies “a device including processing circuitry and memory” that acts as a higher-level controller configured to command the robots to perform collaborative tasks. There is also an indication of a training model that allows the robots to “act independently” from “non-robotics devices”, indicating that each robot also has its own individual controller).

Regarding claims 7 and 19, modified Natarajan teaches the system of claim 1 as well as the method of claim 17, and Natarajan further teaches in at least P [0042] that to perform a given task to cooperatively pick and place a given item, a “first robot 602 first learns to pick up a cuboidal object” and a second robot 604 follows the lead of the first robot, but Natarajan and Rodriguez Garcia do not explicitly teach that the first robotic arm is operated in a leader mode and the second robotic arm is operated in a follower mode.
Instead, Hallock teaches in at least Col 4, Line 48 “the secondary robot manipulator can be controlled via signals from the primary robot manipulator.” The robots can have a leader and follower mode including “a master and slave or parent and child relationship.” It would have been obvious to one of ordinary skill in the art before the filing date of the claimed invention to have modified the collaborative robot maneuvers of Natarajan and Rodriguez Garcia with the leader and follower control method of Hallock in order to address scenarios where a single primary leader robot cannot perform a successful grasp on its own, as in Col 4, Line 53 of Hallock.

Regarding claim 8, modified Natarajan teaches the system of claim 7, and Natarajan further teaches the first robotic arm is configured, when in the leader mode, to independently plan a trajectory to move a given object to be cooperatively picked and placed, grasp the given object, and move the given object along the planned trajectory (see at least the “Corner Pick policy” in P [0042], which is a planned trajectory and gripping location on the object, as well as P [0043]: “the first robot 602 may learn to complete its sub-goal without involvement of the second robot 604”. The operation “in the leader mode” is taught in combination with Hallock in claim 7 above.).

Regarding claim 10, modified Natarajan teaches the system of claim 1, and Natarajan further teaches the first robotic arm and the second robotic arm are configured to independently pick and place objects when not working cooperatively to pick and place said one or more of the objects (see at least P [0104]: “While the technique 1000 is described with respect to a collaborative task, other implementations may include using a set of action primitives to program a set of robots that are not performing a collaborative task, but which still may have similar actions that can be programmed and coordinated using the set of action primitives” See also FIG.
4 wherein “pick and place” is one such action primitive).

Regarding claim 14, modified Natarajan teaches the system of claim 1, and Natarajan further teaches in P [0104] that “other implementations may include using a set of action primitives to program a set of robots that are not performing a collaborative task, but which still may have similar actions that can be programmed and coordinated using the set of action primitives” when not using the first robotic arm and the second robotic arm to work cooperatively to pick and place a given object. Hallock additionally teaches in Col 5, Line 1 that “the system can switch the roles of each robot manipulator”. But Natarajan, Hallock, and Rodriguez Garcia do not explicitly teach that the control computer is configured to use the first robotic arm and the second robotic arm to alternate in picking and placing objects included in the plurality of objects. Instead, it can be seen in at least FIG. 4 that the first and second robots may share a joint workspace, and it would have been obvious to one of ordinary skill in the art before the filing date of the claimed invention to have modified the cooperative workspace of Natarajan, Hallock, and Rodriguez Garcia with alternating picking and placing tasks when using the robots independently in order to avoid collisions within a shared environment. See also P [0033], which establishes “collision avoidance” as an individual interaction primitive.

Regarding claim 16, modified Natarajan teaches the system of claim 1, and Natarajan further teaches the control computer is configured to ensure the first robotic arm and the second robotic arm do not collide with each other or with obstacles in the workspace when performing pick and place tasks.
(see at least P [0053]: “coordination learning to avoid collision between robotic arms during a collaborative assembly task”)

Regarding claim 18, modified Natarajan teaches the method of claim 17, and Natarajan further teaches using image data from a camera to determine to use the first robotic arm and the second robotic arm to work cooperatively to pick and place a given object (see at least P [0048] wherein a “Complete System state” is determined, and includes the “state of environment” as well as “environmental sensors (e.g., multiple cameras)”. See also P [0049]: “Based on the system state and desired goal of the collaborative task, a Coordination Algorithm (e.g., 118 of FIG. 1C) may determine which action or interaction primitives are to be executed by each robot or a particular robot at each time step to accomplish the desired collaborative task or sub-task”).

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Natarajan in view of Hallock and Rodriguez Garcia, and further in view of Okamoto (Document ID: US 20180161979 A1).

Regarding claim 9, modified Natarajan teaches the system of claim 8, and Natarajan further teaches in at least P [0043] that the second robot may be moved to a receiving location where it grips the object, and that its coordinates are used to assist the first robot. In this way the second robot is using the “receiving location” to inform the first robotic arm that the second robotic arm is ready to move the given object. Hallock also teaches in at least Col 15, Line 6 methods for the second robotic manipulator to provide assistance in collaborative maneuvers. But Natarajan, Hallock, and Rodriguez Garcia do not explicitly teach that the second robot is the one to compute a transform based at least in part on a position and orientation of the first end effector.
Instead, Okamoto teaches in at least P [0024] that “the position/orientation of following robot 14 in the coordinated control is taught with respect to first tool coordinate system 28 fixed (defined) to the final axis (or end axis) of leading robot 12”. It would have been obvious to one of ordinary skill in the art before the filing date of the claimed invention to have modified the leading behavior of the first robot of Natarajan, Hallock, and Rodriguez Garcia with the homogeneous conversion matrix and follower robot transform of Okamoto in order to address any grip misalignment during a collaborative grip maneuver.

Claims 11-13 are rejected under 35 U.S.C. 103 as being unpatentable over Natarajan in view of Hallock and Rodriguez Garcia, and further in view of Winkler (Document ID: DE 102016107268 A1).

Regarding claim 11, modified Natarajan teaches the system of claim 1, but Natarajan, Hallock, and Rodriguez Garcia do not teach that the control computer is configured to move one or more of the plurality of objects each from a starting position that at least partly obstructs a line of sight between a camera or other sensor and a given object to be picked and placed cooperatively, based at least in part on a determination that a less obstructed view of the given object is needed to perform cooperatively a pick and place task with respect to the given object. Instead, Winkler, whose invention pertains to a robotic system that uses multiple arms to perform complex picking tasks, teaches in at least P [0043] the use of a third arm to “grab the unnecessary article 22-2 and move this article 22-2 to the side” after determining that “the desired article 22-1 is obscured by the unnecessary article”. This triggers the “corresponding situation recognition” now that the article 22-1 has a visible path open. In this case, the objects obstructing the desired article are in their respective starting positions.
Note additionally that P [0037] establishes the use of exemplary sensors, such as a camera, which is capable of “determining whether additional measures need to be taken to successfully complete the complex picking task”. It would have been obvious to one of ordinary skill in the art before the filing date of the claimed invention to have modified the cooperative robot system of Natarajan, Hallock, and Rodriguez Garcia with the obstruction removal of Winkler in order to accomplish a complex picking task wherein an obstructed object must be uncovered as an additional measure (see Winkler P [0006]).

Regarding claim 12, modified Natarajan teaches the system of claim 1, and Natarajan further teaches in FIG. 4 the ability to push or pull an object, as well as sorting an object, but Natarajan, Hallock, and Rodriguez Garcia do not teach that the control computer is configured to use one or both of the first robotic arm and the second robotic arm to pull or push a given object into a field of view of a camera or other sensor to facilitate a task to use the first robotic arm and the second robotic arm to cooperatively pick and place the given object. Instead, Winkler teaches a plurality of operations in P [0011], including rearranging or relocating objects, which is essential for tasks such as those in P [0015] wherein there may be “a chaotic distribution of items in containers”, followed by “selecting the right item, removing unwanted items, opening the container, removing any outer packaging, and the like”. Winkler then teaches in at least P [0053] a method for handling a situation where a desired article 22-1 is obscured by other articles and “initially not visible at all to the detection unit 28 and/or the sensors 68”.
It would have been obvious to one of ordinary skill in the art before the filing date of the claimed invention to have modified the cooperative robots of Natarajan, Hallock, and Rodriguez Garcia with the multi-functional arms of Winkler in order to accomplish a complex picking task that requires uncovering an obstructed object (see Winkler P [0053]). Note that the system of Winkler uses multiple arms on a single robotic unit, but the system could reasonably be applied to two separate robot arms as in the system of Natarajan to accomplish the task of uncovering an obstructed object. Natarajan, Hallock, Rodriguez Garcia, and Winkler do not explicitly teach that the system may pull or push a given object into a field of view of a camera or other sensor. But Winkler does teach in P [0044] that it is possible “the objects 18 move unexpectedly relative to each other” as multiple objects are moved in the container, in which case “it is necessary to carry out further and/or different manipulations depending on the situation in order to successfully achieve the desired actual objective of the complex picking task 76”. It would have been obvious to one of ordinary skill in the art before the filing date of the claimed invention to have modified the bin manipulation and rearrangement of Natarajan, Hallock, Rodriguez Garcia, and Winkler with a specific motion to pull or push a given object into a camera’s field of view in order to address unpredicted movements during the execution of a complex picking task while “image data generated by the detection unit 28 [is] evaluated and converted into corresponding manipulations by the arms 20” (Winkler P [0044]).
Regarding claim 13, modified Natarajan teaches the system of claim 1, and Natarajan teaches in FIG. 4 the ability to push or pull an object, as well as to sort an object, but Natarajan, Hallock, and Rodriguez Garcia do not teach that the control computer is configured to use one or both of the first robotic arm and the second robotic arm to move one or more of the plurality of objects out of the way to enable one or both of the first robotic arm and the second robotic arm to be used to grasp a given object. Instead, Winkler teaches in at least P [0043] the use of a third arm to “grab the unnecessary article 22-2 and move this article 22-2 to the side” after determining that “the desired article 22-1 is obscured by the unnecessary article”. Then “the arm 20-4 is moved to the position of the object 22-1” to grasp the desired object. It would have been obvious to one of ordinary skill in the art before the filing date of the claimed invention to have modified the cooperative robots of Natarajan, Hallock, and Rodriguez Garcia with the multi-functional arms of Winkler in order to accomplish a complex picking task that requires simultaneously gripping a desired object and pushing aside an obstructing object (see Winkler P [0006]). Note that the system of Winkler uses multiple arms on a single robotic unit, but the system could reasonably be applied to two separate robotic arms, as in the system of Natarajan, to accomplish the task of removing an object that obscures the target object.

Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Natarajan in view of Hallock and Rodriguez Garcia, and further in view of Nunes et al., hereinafter Nunes (NPL reference: Multi-Robot Auctions for Allocation of Tasks with Temporal Constraints).
Regarding claim 21, modified Natarajan teaches the system of claim 1, and Natarajan further teaches in P [0050] the use of a sub-goal and goal vector architecture for individually controlling each robot, and Hallock teaches independently operable collaborative robotic manipulators wherein the second robotic arm is configured to receive a request to work cooperatively with the first robotic arm to pick and place the target object. But Natarajan, Hallock, and Rodriguez Garcia do not explicitly teach that the second robotic arm is configured to determine that a current or planned task is of higher priority than the task to work cooperatively with the first robotic arm to pick and place the target object, and to delay working cooperatively with the first robotic arm to pick and place the target object until after the higher priority task has been completed. Instead, Nunes, whose work pertains to an auction algorithm for allocating tasks with temporal constraints to cooperative robots, teaches, beginning on page 2112 under the section heading “Managing Schedules”, a procedure for allowing multiple collaborative robots to be considered to perform a new task as it arises. Crucial to the disclosure of Nunes, in the “Problem Definition” section, is a case-by-case evaluation of the priority of each individual robot’s currently allocated tasks and the associated tradeoffs of accepting a new collaborative task. It would have been obvious to one of ordinary skill in the art before the filing date of the claimed invention to have modified the cooperative pick and place with goal-based learning of Natarajan, Hallock, and Rodriguez Garcia with the dynamic task allocation strategy of Nunes in order to optimally allocate tasks to cooperative robots in view of time constraints and availability considerations.

Conclusion

THIS ACTION IS MADE FINAL.
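The receive/prioritize/delay behavior recited in claim 21 can be illustrated with a minimal, hypothetical priority-queue sketch. The `RobotScheduler` class, method names, and task labels below are illustrative assumptions for discussion only, not code from Nunes's auction algorithm or from the application.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Task:
    priority: int                      # lower number = higher priority
    name: str = field(compare=False)   # excluded from heap ordering

class RobotScheduler:
    """Toy scheduler: a robot accepts a cooperation request but queues
    it behind any currently held higher-priority task."""
    def __init__(self):
        self._queue = []  # min-heap ordered by priority

    def add_task(self, name, priority):
        heapq.heappush(self._queue, Task(priority, name))

    def receive_cooperation_request(self, name, priority):
        # The cooperative task is enqueued, not run immediately;
        # it is delayed until higher-priority work completes.
        self.add_task(name, priority)

    def run_next(self):
        return heapq.heappop(self._queue).name

sched = RobotScheduler()
sched.add_task("place fragile item", priority=1)
sched.receive_cooperation_request("cooperative pick with robot A", priority=5)
print(sched.run_next())  # -> place fragile item (higher-priority task first)
```

A min-heap is one simple way to realize "delay the cooperative task until the higher-priority task completes"; Nunes instead evaluates bids under temporal constraints, which is a richer mechanism than this sketch.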
Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Additional art made of record and not relied upon is considered pertinent to applicant's disclosure: US 20210107151 A1, whose invention pertains to an environment with multiple robotic machines that coordinate and collaborate.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Dairon Estevez, whose telephone number is (703) 756-4552. The examiner can normally be reached M-R 6:30 AM - 4:00 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Khoi Tran, can be reached at (571) 272-6919. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/D.E./
Examiner, Art Unit 3656

/KHOI H TRAN/
Supervisory Patent Examiner, Art Unit 3656

Prosecution Timeline

Nov 01, 2022
Application Filed
Sep 16, 2024
Non-Final Rejection — §103, §DP
Dec 12, 2024
Response Filed
Jan 23, 2025
Final Rejection — §103, §DP
May 29, 2025
Request for Continued Examination
Jun 03, 2025
Response after Non-Final Action
Jul 07, 2025
Non-Final Rejection — §103, §DP
Oct 23, 2025
Response Filed
Dec 19, 2025
Final Rejection — §103, §DP
Mar 30, 2026
Request for Continued Examination
Apr 13, 2026
Response after Non-Final Action

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594681
EXTERNAL ROBOT STAND AND EXTERNAL ROBOT SYSTEM
2y 5m to grant · Granted Apr 07, 2026
Patent 12590806
SYSTEM-LEVEL OPTIMIZATION AND MODE SUGGESTION PLATFORM FOR TRANSPORTATION TRIPS
2y 5m to grant · Granted Mar 31, 2026
Patent 12569997
METHOD OF GENERATING ROBOT PATH AND COMPUTING DEVICE FOR PERFORMING THE METHOD
2y 5m to grant · Granted Mar 10, 2026
Patent 12559139
AUTONOMOUS DRIVING CONTROL APPARATUS
2y 5m to grant · Granted Feb 24, 2026
Patent 12555467
INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND MOBILE DEVICE
2y 5m to grant · Granted Feb 17, 2026
Based on the 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
67%
Grant Probability
51%
With Interview (-15.9%)
2y 8m
Median Time to Grant
High
PTA Risk
Based on 64 resolved cases by this examiner. Grant probability derived from career allow rate.
