Prosecution Insights
Last updated: April 19, 2026
Application No. 18/970,333

MOBILE ROBOTIC PROCESSING STATION, PROCESSING SYSTEM, AND METHOD THEREFOR

Status: Non-Final OA (§103)
Filed: Dec 05, 2024
Examiner: O'MALLEY, JOHN MARTIN
Art Unit: 3658
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Highres Biosolutions Inc.
OA Round: 1 (Non-Final)
Grant Probability: 33% (At Risk)
Projected OA Rounds: 1-2
Projected Time to Grant: 3y 0m
With Interview: 0%

Examiner Intelligence

Career Allow Rate: 33% (1 granted / 3 resolved; -18.7% vs TC avg)
Interview Lift: -33.3% (minimal; based on resolved cases with interview)
Avg Prosecution: 3y 0m (typical timeline)
Total Applications: 43 across all art units (40 currently pending)

Statute-Specific Performance

§101: 9.2% (-30.8% vs TC avg)
§103: 70.7% (+30.7% vs TC avg)
§102: 14.4% (-25.6% vs TC avg)
§112: 5.8% (-34.2% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 3 resolved cases

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

The following claims have been rejected or allowed for the following reasons: Claim(s) 1-35 is/are rejected under 35 U.S.C. § 103.

Priority

Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent Application No. 63/607,405, filed on 12/7/23.

Information Disclosure Statement

The information disclosure statement (IDS) was filed on 5/7/25. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1.
Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claim(s) 1-2, 4-18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Gilchrist (US 20220001540 A1) in view of Dietz (US 11623339 B1).

Regarding claim 1: Gilchrist teaches A collaborative robot for a collaborative process facility, the robot comprising: a movable base configured so as to movably position the collaborative robot at different variable work locations in the collaborative process facility, at least one of which work locations has different variable work location characteristics; (Gilchrist [0031] reads “The autonomous drive section 550 is connected to the frame 501F and is configured to traverse (e.g., move) the carriage 501 effecting vehicle travel on and across a facility floor 180 (see, e.g., FIGS. 1A and 1C), on which the at least one processing station 11110, 11120 (FIG. 1C) and/or automated system(s) 170 (FIGS. 1A and 1B) are disposed for processing laboratory samples and/or sample holders.
An autonomous navigation section 551 of the auto-navigating robotic processing vehicle 500 is communicably connected to the autonomous drive section 550 so as to effect autonomous navigation vehicle travel with the autonomous drive section 550 on the facility floor 180. The autonomous navigation section 551 may include any suitable sensors (e.g., line following, inertial navigation, GPS, stereoscopic vision sensors, etc.) and/or programming so that the auto-navigating robotic processing vehicle 500 moves along the facility floor 180 and interfaces with a human 199 (FIG. 1C) and/or a processing module 11151-11155 of a processing station 11110, 11120 (FIG. 1C), or a tool formed by automated system 170 (FIGS. 1A and 1B).”); an articulated robot actuator, based on the movable base and having a robot end effector, having a motion, driven by a drive section, with at least one degree of freedom relative to the movable base (Gilchrist [0020] reads “For example, the collaborative robot may include an articulated arm (such as one or more of articulated arms 120, 172, 422, 510A, 510A′ described herein) to which the radar sensors are mounted. 
The articulated arm, in in some aspects, is mounted to and borne by a relocatable cart, such as those described herein, that form interchangeable stations with “plug and play” interfaces at different selectable workstations; and in other aspects the articulated arm may form a portion of a variably configurable system with selectably variable emergent stations as described herein.”); to effect with the robot end effector a predetermined function corresponding to at least one workstation, from more than one different interchangeable workstation, at the at least one work location, (Gilchrist [0039] reads “In this aspect, the robot arm 510A′ may be a different type of arm than the robot arm 510A of the auto-navigating robotic processing vehicle 500 such that the arms may provide a different number of degrees of freedom and/or a different type of articulated arm movement to effect the processes or preprocess conditions at the at least one processing station 11110, 11120. In other aspects, the robot arm of the auto-navigating robotic processing vehicle 600 may be the same arm as the arm 510A.”); the at least one workstation having an undeterministic variable pose with respect to the at least one work location; (Gilchrist [0057] reads “In one aspect, such as where the articulated arm 1532 is cart borne, the at least one axis of motion moves at least a portion of the articulated arm portions in the collaborative space (e.g., that corresponds to a selectably variable cart location of the cart-borne articulated arm). In the aspects, of the present disclosure, the motion of the robot arm 1532 is from a first location, in which the robot arm has a first shape (see, e.g., the different shapes 1690, 1695, 790, 795, 796 of the robot arms in FIGS. 
6A-7D), to another different location of at least the portion of the robot arm 1532 in the collaborative space SPC, in which the robot arm has another different shape (again see, e.g., the different shapes 1690, 1695, 790, 795, 796 of the robot arms in FIGS. 6A-7D). The shape of at least the portion of the robot arm 1532 at the first location is different than the another different shape of the robot arm 1532 at the other different location.”);

Gilchrist does not teach a vision system connected to the articulated robot actuator and disposed to image a vision target connected to and corresponding uniquely to each of the at least one workstation so as to inform the workstation pose and identify a predetermined function characteristic of the at least one workstation; and a controller operably connected to the articulated robot actuator and communicably connected to the vision system to register image data from the vision system of the vision target, the controller being configured so as to determine from the image data the workstation pose relative to the movable base and automatically teach the articulated robot actuator the workstation pose so as to effect a predetermined deterministic interface, associated with the predetermined function characteristic, between the workstation and robot end effector.

Dietz, in analogous art, teaches a vision system connected to the articulated robot actuator (Dietz [0037] reads “A first robotic arm 220 may be disposed on the first upper surface 232. The first robotic arm 220 may include any number of different interchangeable tool attachments 222, such as a picking assembly configured to grasp objects, a camera, other sensors, other suction and/or mechanical tools, or another tool attachment.
The first robotic arm 220 may include, or may be coupled to, a torque resistance sensor 212 to improve safety and/or performance of the first robotic arm 220.”); and disposed to image a vision target connected to and corresponding uniquely to each of the at least one workstation so as to inform the workstation pose and identify a predetermined function characteristic of the at least one workstation; (Dietz [0069] reads “In some embodiments, the laser pointer 692 may be used to direct the moveable camera assembly 600 to a fiducial or other machine readable code 696 to set a task or operational mode of the portable robotic manipulation assembly. Other embodiments may use any suitable marker of any type instead of, or in addition to, a machine readable code, including magnetic indicators, light-based indicators, or other physical indicators. The machine readable code 696 may be used by the portable robotic manipulation assembly to determine which tasks to be performed, for calibration, to determine which operational mode to initiate, to determine a location of the portable robotic manipulation assembly, and the like. Unique fiducials, indicators, or machine readable codes can be used by the portable robotic manipulation assembly to identify work spaces, and to enable the system to automatically shift to the mode appropriate for a current work space, such as during a move from a pick station to a pack station.”); and a controller operably connected to the articulated robot actuator and communicably connected to the vision system to register image data from the vision system of the vision target, the controller being configured so as to determine from the image data the workstation pose relative to the movable base (Dietz [0049] reads “The portable robotic assembly 300 may include one or more computer systems or controllers configured to control operation of the first robotic arm and the camera assembly. 
For example, a user may select a selectable option that causes the controller to initiate corresponding actions, such as one or more of: item picking, item sorting, item induction, and truck loading. The controller may be configured to: determine that the item sorting selectable option was selected, cause the boom to position the camera assembly to image a group of items, determine a first item of the group of items, and cause the first robotic arm to grasp the first item and move the first item from a first location to a second location.”); and automatically teach the articulated robot actuator the workstation pose so as to effect a predetermined deterministic interface, associated with the predetermined function characteristic, between the workstation and robot end effector. (Dietz [0069] reads “In some embodiments, the laser pointer 692 may be used to direct the moveable camera assembly 600 to a fiducial or other machine readable code 696 to set a task or operational mode of the portable robotic manipulation assembly. Other embodiments may use any suitable marker of any type instead of, or in addition to, a machine readable code, including magnetic indicators, light-based indicators, or other physical indicators. The machine readable code 696 may be used by the portable robotic manipulation assembly to determine which tasks to be performed, for calibration, to determine which operational mode to initiate, to determine a location of the portable robotic manipulation assembly, and the like. 
Unique fiducials, indicators, or machine readable codes can be used by the portable robotic manipulation assembly to identify work spaces, and to enable the system to automatically shift to the mode appropriate for a current work space, such as during a move from a pick station to a pack station.”);

It would have been obvious to one with ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the teachings of Gilchrist with that of Dietz to include a system that would allow the robot manipulator to understand its working location based on information given by a known target. This would allow the system to better handle a variety of different tasks, which would make the system more effective and usable. (Dietz [0015] reads “Accordingly, at various stages and/or locations within a fulfillment center, different tasks may need to be performed, each requiring different skills. Humans may be able to perform each of the different tasks, whereas robots may need specific equipment or may not be able to move within a fulfillment center so as to reach the different locations at which the different tasks are to be performed. However, robots may be able to assist humans in performing certain tasks, or may automate certain tasks, thereby increasing human effort efficiency and/or allowing human effort to be focused on higher cognition tasks. In order to assist humans and/or autonomously perform tasks, robots need to be in the correct location. Some robots may have limited movement ability and/or may be permanently fixed at a location due to power requirements (e.g., specific power needs or connection types, etc.), due to size or footprint (e.g., the robot is too large or heavy to move, etc.), due to stability (e.g., the robot is bolted to the floor for stability, etc.), and so forth.
As a result, using the same robot to perform different tasks at different locations may be desired.”);

Regarding claim 2: Gilchrist/Dietz teaches The collaborative robot of claim 1, wherein the controller is configured to identify the predetermined function characteristic of the at least one workstation from the image data and automatically initialize, from different predetermined robot automatic configurations, a predetermined robot automatic configuration associated with and responsive to the identified function characteristic. (Dietz [0069] reads “In some embodiments, the laser pointer 692 may be used to direct the moveable camera assembly 600 to a fiducial or other machine readable code 696 to set a task or operational mode of the portable robotic manipulation assembly. Other embodiments may use any suitable marker of any type instead of, or in addition to, a machine readable code, including magnetic indicators, light-based indicators, or other physical indicators. The machine readable code 696 may be used by the portable robotic manipulation assembly to determine which tasks to be performed, for calibration, to determine which operational mode to initiate, to determine a location of the portable robotic manipulation assembly, and the like. Unique fiducials, indicators, or machine readable codes can be used by the portable robotic manipulation assembly to identify work spaces, and to enable the system to automatically shift to the mode appropriate for a current work space, such as during a move from a pick station to a pack station.”);

Regarding claim 4: Gilchrist/Dietz teaches The collaborative robot of claim 3, wherein the predetermined parameters describe at least one of type, size and pose/orientation of an article holding station of the at least one workstation to and from which the articulated robot actuator transports, pick and places the article with the robot end effector.
(Dietz [0069] reads “In some embodiments, the laser pointer 692 may be used to direct the moveable camera assembly 600 to a fiducial or other machine readable code 696 to set a task or operational mode of the portable robotic manipulation assembly. Other embodiments may use any suitable marker of any type instead of, or in addition to, a machine readable code, including magnetic indicators, light-based indicators, or other physical indicators. The machine readable code 696 may be used by the portable robotic manipulation assembly to determine which tasks to be performed, for calibration, to determine which operational mode to initiate, to determine a location of the portable robotic manipulation assembly, and the like. Unique fiducials, indicators, or machine readable codes can be used by the portable robotic manipulation assembly to identify work spaces, and to enable the system to automatically shift to the mode appropriate for a current work space, such as during a move from a pick station to a pack station.”);

Regarding claim 5: Gilchrist/Dietz teaches The collaborative robot of claim 2, wherein the initialized predetermined robot automatic configuration defines a robot configuration commensurate with the identified predetermined function characteristic of the at least one workstation, the robot configuration including at least one of motion characteristics and commands. (Dietz [0069] reads “In some embodiments, the laser pointer 692 may be used to direct the moveable camera assembly 600 to a fiducial or other machine readable code 696 to set a task or operational mode of the portable robotic manipulation assembly. Other embodiments may use any suitable marker of any type instead of, or in addition to, a machine readable code, including magnetic indicators, light-based indicators, or other physical indicators.
The machine readable code 696 may be used by the portable robotic manipulation assembly to determine which tasks to be performed, for calibration, to determine which operational mode to initiate, to determine a location of the portable robotic manipulation assembly, and the like. Unique fiducials, indicators, or machine readable codes can be used by the portable robotic manipulation assembly to identify work spaces, and to enable the system to automatically shift to the mode appropriate for a current work space, such as during a move from a pick station to a pack station.” And [0070] reads “The machine readable code 696 may cause the controller to initiate an action or operational mode at the portable robotic manipulation assembly. For example, if the operational mode is a picking mode, the controller may be configured to cause the first robotic arm to move a first object, and to cause the moveable camera assembly 600 to capture an image of a second object to be moved while the first object is being moved.” It would be appreciated by one with ordinary skill in the art that a pick or a pack station would require the robotic system to perform different motions or commands.);

Regarding claim 6: Gilchrist/Dietz teaches The collaborative robot of claim 2, wherein the predetermined robot automatic configuration is pre-stored in a memory of the controller, or downloaded to the controller upon determination of the workstation pose and the identity of the predetermined function characteristic of the at least one workstation. (Dietz [0069] reads “In some embodiments, the laser pointer 692 may be used to direct the moveable camera assembly 600 to a fiducial or other machine readable code 696 to set a task or operational mode of the portable robotic manipulation assembly. Other embodiments may use any suitable marker of any type instead of, or in addition to, a machine readable code, including magnetic indicators, light-based indicators, or other physical indicators.
The machine readable code 696 may be used by the portable robotic manipulation assembly to determine which tasks to be performed, for calibration, to determine which operational mode to initiate, to determine a location of the portable robotic manipulation assembly, and the like. Unique fiducials, indicators, or machine readable codes can be used by the portable robotic manipulation assembly to identify work spaces, and to enable the system to automatically shift to the mode appropriate for a current work space, such as during a move from a pick station to a pack station.” And [0093] reads “The data storage 920 may store computer-executable code, instructions, or the like that may be loadable into the memory 904 and executable by the processor(s) 902 to cause the processor(s) 902 to perform or initiate various operations. The data storage 920 may additionally store data that may be copied to the memory 904 for use by the processor(s) 902 during the execution of the computer-executable instructions.”);

Regarding claim 7: Gilchrist/Dietz teaches The collaborative robot of claim 1, wherein the movable base has an undeterministic pose at each of the different variable work locations. (Gilchrist [0057] reads “In one aspect, such as where the articulated arm 1532 is cart borne, the at least one axis of motion moves at least a portion of the articulated arm portions in the collaborative space (e.g., that corresponds to a selectably variable cart location of the cart-borne articulated arm). In the aspects, of the present disclosure, the motion of the robot arm 1532 is from a first location, in which the robot arm has a first shape (see, e.g., the different shapes 1690, 1695, 790, 795, 796 of the robot arms in FIGS. 6A-7D), to another different location of at least the portion of the robot arm 1532 in the collaborative space SPC, in which the robot arm has another different shape (again see, e.g., the different shapes 1690, 1695, 790, 795, 796 of the robot arms in FIGS.
6A-7D). The shape of at least the portion of the robot arm 1532 at the first location is different than the another different shape of the robot arm 1532 at the other different location.”);

Regarding claim 8: Gilchrist/Dietz teaches The collaborative robot of claim 1, wherein the controller is configured to search a known area at the at least one work location with the vision system so as to acquire and image the vision target. (Dietz [0068] reads “The camera assembly 660 may include a third camera 690 disposed along a middle portion of the bracket 670. The third camera 690 may be disposed between the first camera 680 and the second camera 682. The third camera 690 may be the same type of camera as the first camera 680 and/or the second camera 682, or a different camera type. For example, the third camera 690 may be an area scanning camera, a line scanning camera, a network camera, a three-dimensional camera, or another camera type.”);

Regarding claim 9: Gilchrist/Dietz teaches The collaborative robot of claim 1, wherein the at least one workstation comprises one or more of a microplate dispenser, an environmental control module, a reader, a spinner, a centrifuge, a decapper, a capper, a plate hotel rack, a random access sample storage carousel, a high density labware stacker carousel, a sequential sample storage carousel, a weight scale, a de-lidder, a lidder, electronic pipettes, an electronic pipettor, and a media preparation module. (Gilchrist [0029] reads “Still referring to FIG.
1C, the processing stations 11110, 11120 may be linearly arranged with one or more process tools 11150-11155 which may include, but are not limited to, electronic pipettes, microplate dispensers, media preparation modules (e.g., sterilization and dispensing of sample medium), environmental control modules (e.g., refrigeration, freezers, incubators, clean environments, hoods, etc.), storage modules, and centrifuges.”);

Regarding claim 10: Gilchrist/Dietz teaches The collaborative robot of claim 1, wherein the predetermined function characteristic of the at least one workstation is a function that corresponds with one or more of a microplate dispenser, an environmental control module, a reader, a spinner, a centrifuge, a decapper, a capper, a plate hotel rack, a random access sample storage carousel, a high density labware stacker carousel, a sequential sample storage carousel, a weight scale, a de-lidder, a lidder, electronic pipettes, an electronic pipettor, and a media preparation module. (Gilchrist [0029] reads “Still referring to FIG. 1C, the processing stations 11110, 11120 may be linearly arranged with one or more process tools 11150-11155 which may include, but are not limited to, electronic pipettes, microplate dispensers, media preparation modules (e.g., sterilization and dispensing of sample medium), environmental control modules (e.g., refrigeration, freezers, incubators, clean environments, hoods, etc.), storage modules, and centrifuges.” And [0027] reads “In one aspect, the at least one auto-navigating robotic processing vehicle 500, 600 is configured to provide all comporting (e.g., suitable) equipment (e.g., “process payloads” which may include process modules, peripherals, and/or consumables for station engagement, or “workpiece payloads” which may include samples and sample trays for station engagement) on the auto-navigating robotic processing vehicle 500, 600 to perform the tasks at a given processing station 11110, 11120.
As an example, an auto-navigating robotic processing vehicle 500, 600 may be configured and loaded for an individual task such that all the comporting equipment is carried by a single auto-navigating robotic processing vehicle 500, 600 to complete the individual task (which may be, e.g., a process station function) in full with a single auto-navigating robotic processing vehicle 500, 600 and the items carried thereon.”);

Regarding claim 11: Gilchrist/Dietz teaches The collaborative robot of claim 1, wherein the articulated robot actuator is configured to handle one or more of a plate or tray, a microscope slide tray or rack, a sample container gripper, a slide, and manually operated tools. (Gilchrist [0026] reads “For example, preprocessing conditions that may be performed by the at least one auto-navigating robotic processing vehicle 500, 600 include, but are not limited to, storage of sample trays, sample tray lids, transport and direct or indirect handoff of laboratory equipment (e.g., vacuum heads, brushes, Bunsen burners, microscopes, brooms, processing tools and/or fixtures, sample trays, etc.)
to a human 199 (at a processing station 11110, 11120) and/or automated processing equipment at a processing station 11110, 11120, cleaning of an animal cage, laboratory table, etc., Examples of processes that may be performed by the at least one auto-navigating robotic processing vehicle 500, 600 include, but are not limited to, removing a sealing film from a sample and/or sample tray, reading an identification of a sample and/or sample tray, etc., pipetting fluids, capping and decapping tubes.”);

Regarding claim 12: Gilchrist teaches A collaborative robot for a collaborative process facility, the robot comprising: a movable base configured so as to movably position the collaborative robot at different variable work locations in the collaborative process facility, at least one of which work locations has different variable work location characteristics; (Gilchrist [0031] reads “The autonomous drive section 550 is connected to the frame 501F and is configured to traverse (e.g., move) the carriage 501 effecting vehicle travel on and across a facility floor 180 (see, e.g., FIGS. 1A and 1C), on which the at least one processing station 11110, 11120 (FIG. 1C) and/or automated system(s) 170 (FIGS. 1A and 1B) are disposed for processing laboratory samples and/or sample holders. An autonomous navigation section 551 of the auto-navigating robotic processing vehicle 500 is communicably connected to the autonomous drive section 550 so as to effect autonomous navigation vehicle travel with the autonomous drive section 550 on the facility floor 180. The autonomous navigation section 551 may include any suitable sensors (e.g., line following, inertial navigation, GPS, stereoscopic vision sensors, etc.) and/or programming so that the auto-navigating robotic processing vehicle 500 moves along the facility floor 180 and interfaces with a human 199 (FIG. 1C) and/or a processing module 11151-11155 of a processing station 11110, 11120 (FIG.
1C), or a tool formed by automated system 170 (FIGS. 1A and 1B).”); an articulated robot actuator, based on the movable base and having a robot end effector, having a motion, driven by a drive section, with at least one degree of freedom relative to the movable base (Gilchrist [0020] reads “For example, the collaborative robot may include an articulated arm (such as one or more of articulated arms 120, 172, 422, 510A, 510A′ described herein) to which the radar sensors are mounted. The articulated arm, in in some aspects, is mounted to and borne by a relocatable cart, such as those described herein, that form interchangeable stations with “plug and play” interfaces at different selectable workstations; and in other aspects the articulated arm may form a portion of a variably configurable system with selectably variable emergent stations as described herein.”); to effect with the robot end effector a predetermined function corresponding to at least one workstation, from more than one different interchangeable workstation, (Gilchrist [0039] reads “In this aspect, the robot arm 510A′ may be a different type of arm than the robot arm 510A of the auto-navigating robotic processing vehicle 500 such that the arms may provide a different number of degrees of freedom and/or a different type of articulated arm movement to effect the processes or preprocess conditions at the at least one processing station 11110, 11120. 
In other aspects, the robot arm of the auto-navigating robotic processing vehicle 600 may be the same arm as the arm 510A.”); at the at least one work location, the at least one workstation having an undeterministic variable pose with respect to the at least one work location; (Gilchrist [0057] reads “In one aspect, such as where the articulated arm 1532 is cart borne, the at least one axis of motion moves at least a portion of the articulated arm portions in the collaborative space (e.g., that corresponds to a selectably variable cart location of the cart-borne articulated arm). In the aspects, of the present disclosure, the motion of the robot arm 1532 is from a first location, in which the robot arm has a first shape (see, e.g., the different shapes 1690, 1695, 790, 795, 796 of the robot arms in FIGS. 6A-7D), to another different location of at least the portion of the robot arm 1532 in the collaborative space SPC, in which the robot arm has another different shape (again see, e.g., the different shapes 1690, 1695, 790, 795, 796 of the robot arms in FIGS. 6A-7D). 
The shape of at least the portion of the robot arm 1532 at the first location is different than the another different shape of the robot arm 1532 at the other different location.”);

Gilchrist does not teach a vision system connected to the articulated robot actuator and disposed to image a vision target connected to and corresponding uniquely to each of the at least one workstation so as to inform the workstation pose and identify a predetermined function characteristic of the at least one workstation; and a controller operably connected to the articulated robot actuator and communicably connected to the vision system to register image data from the vision system of the vision target, the controller being configured so as to determine from the image data the workstation pose relative to the movable base and automatically teach the articulated robot actuator an interface location based on the workstation pose and the predetermined function characteristic identified by the image data so as to effect a predetermined deterministic interface at the interface location between the at least one workstation and robot end effector.

Dietz, in analogous art, teaches a vision system connected to the articulated robot actuator (Dietz [0037] reads “A first robotic arm 220 may be disposed on the first upper surface 232. The first robotic arm 220 may include any number of different interchangeable tool attachments 222, such as a picking assembly configured to grasp objects, a camera, other sensors, other suction and/or mechanical tools, or another tool attachment.
The first robotic arm 220 may include, or may be coupled to, a torque resistance sensor 212 to improve safety and/or performance of the first robotic arm 220.”); and disposed to image a vision target connected to and corresponding uniquely to each of the at least one workstation so as to inform the workstation pose and identify a predetermined function characteristic of the at least one workstation; (Dietz [0069] reads “In some embodiments, the laser pointer 692 may be used to direct the moveable camera assembly 600 to a fiducial or other machine readable code 696 to set a task or operational mode of the portable robotic manipulation assembly. Other embodiments may use any suitable marker of any type instead of, or in addition to, a machine readable code, including magnetic indicators, light-based indicators, or other physical indicators. The machine readable code 696 may be used by the portable robotic manipulation assembly to determine which tasks to be performed, for calibration, to determine which operational mode to initiate, to determine a location of the portable robotic manipulation assembly, and the like. Unique fiducials, indicators, or machine readable codes can be used by the portable robotic manipulation assembly to identify work spaces, and to enable the system to automatically shift to the mode appropriate for a current work space, such as during a move from a pick station to a pack station.”); and a controller operably connected to the articulated robot actuator and communicably connected to the vision system register image data from the vision system of the vision target, the controller being configured so as to determine from the image data the workstation pose relative to the movable base (Dietz [0049] reads “The portable robotic assembly 300 may include one or more computer systems or controllers configured to control operation of the first robotic arm and the camera assembly. 
For example, a user may select a selectable option that causes the controller to initiate corresponding actions, such as one or more of: item picking, item sorting, item induction, and truck loading. The controller may be configured to: determine that the item sorting selectable option was selected, cause the boom to position the camera assembly to image a group of items, determine a first item of the group of items, and cause the first robotic arm to grasp the first item and move the first item from a first location to a second location.”); and automatically teach the articulated robot actuator an interface location based on the workstation pose and the predetermined function characteristic identified by the image data so as to effect a predetermined deterministic interface at the interface location between the at least one workstation and robot end effector. (Dietz [0069] reads “In some embodiments, the laser pointer 692 may be used to direct the moveable camera assembly 600 to a fiducial or other machine readable code 696 to set a task or operational mode of the portable robotic manipulation assembly. Other embodiments may use any suitable marker of any type instead of, or in addition to, a machine readable code, including magnetic indicators, light-based indicators, or other physical indicators. The machine readable code 696 may be used by the portable robotic manipulation assembly to determine which tasks to be performed, for calibration, to determine which operational mode to initiate, to determine a location of the portable robotic manipulation assembly, and the like. 
Unique fiducials, indicators, or machine readable codes can be used by the portable robotic manipulation assembly to identify work spaces, and to enable the system to automatically shift to the mode appropriate for a current work space, such as during a move from a pick station to a pack station.”); It would have been obvious to one with ordinary skill in the art, before the effective filing date of the claimed invention to have modified the teachings of Gilchrist with that of Dietz to include a system that would allow the robot manipulator to understand its working location based on information given by a known target. This would allow the system to better handle a variety of different tasks, which would make the system more effective and useable. (Dietz [0015] reads “(15) Accordingly, at various stages and/or locations within a fulfillment center, different tasks may need to be performed, each requiring different skills. Humans may be able to perform each of the different tasks, whereas robots may need specific equipment or may not be able to move within a fulfillment center so as to reach the different locations at which the different tasks are to be performed. However, robots may be able to assist humans in performing certain tasks, or may automate certain tasks, thereby increasing human effort efficiency and/or allowing human effort to be focused on higher cognition tasks. In order to assist humans and/or autonomously perform tasks, robots need to be in the correct location. Some robots may have limited movement ability and/or may be permanently fixed at a location due to power requirements (e.g., specific power needs or connection types, etc.), due to size or footprint (e.g., the robot is too large or heavy to move, etc.), due to stability (e.g., the robot is bolted to the floor for stability, etc.), and so forth. 
As a result, using the same robot to perform different tasks at different locations may be desired.”); Regarding claim 13 Gilchrist/Dietz teaches The collaborative robot of claim 12, wherein the controller is configured to identify the predetermined function characteristic of the at least one workstation from the image data and automatically initialize, from different predetermined robot automatic configurations, a predetermined robot automatic configuration associated with and responsive to the identified function characteristic. (Dietz [0069] reads “In some embodiments, the laser pointer 692 may be used to direct the moveable camera assembly 600 to a fiducial or other machine readable code 696 to set a task or operational mode of the portable robotic manipulation assembly. Other embodiments may use any suitable marker of any type instead of, or in addition to, a machine readable code, including magnetic indicators, light-based indicators, or other physical indicators. The machine readable code 696 may be used by the portable robotic manipulation assembly to determine which tasks to be performed, for calibration, to determine which operational mode to initiate, to determine a location of the portable robotic manipulation assembly, and the like. 
Unique fiducials, indicators, or machine readable codes can be used by the portable robotic manipulation assembly to identify work spaces, and to enable the system to automatically shift to the mode appropriate for a current work space, such as during a move from a pick station to a pack station.”); Regarding claim 14 Gilchrist/Dietz teaches The collaborative robot of claim 12, wherein the at least one workstation comprises one or more of a microplate dispenser, an environmental control module, a reader, a spinner, a centrifuge, a decapper, a capper, a plate hotel rack, a random access sample storage carousel, a high density labware stacker carousel, a sequential sample storage carousel, a weight scale, a de-lidder, a lidder, electronic pipettes, an electronic pipettor, and a media preparation module. (Gilchrist [0029] reads “Still referring to FIG. 1C, the processing stations 11110, 11120 may be linearly arranged with one or more process tools 11150-11155 which may include, but are not limited to, electronic pipettes, microplate dispensers, media preparation modules (e.g., sterilization and dispensing of sample medium), environmental control modules (e.g., refrigeration, freezers, incubators, clean environments, hoods, etc.), storage modules, and centrifuges.”); Regarding claim 15 Gilchrist/Dietz teaches The collaborative robot of claim 12, wherein the predetermined function characteristic of the at least one workstation is a function that corresponds with one or more of a microplate dispenser, an environmental control module, a reader, a spinner, a centrifuge, a decapper, a capper, a plate hotel rack, a random access sample storage carousel, a high density labware stacker carousel, a sequential sample storage carousel, a weight scale, a de-lidder, a lidder, electronic pipettes, an electronic pipettor, and a media preparation module. (Gilchrist [0029] reads “Still referring to FIG. 
1C, the processing stations 11110, 11120 may be linearly arranged with one or more process tools 11150-11155 which may include, but are not limited to, electronic pipettes, microplate dispensers, media preparation modules (e.g., sterilization and dispensing of sample medium), environmental control modules (e.g., refrigeration, freezers, incubators, clean environments, hoods, etc.), storage modules, and centrifuges.” And [0027] reads “In one aspect, the at least one auto-navigating robotic processing vehicle 500, 600 is configured to provide all comporting (e.g., suitable) equipment (e.g., “process payloads” which may include process modules, peripherals, and/or consumables for station engagement, or “workpiece payloads” which may include samples and sample trays for station engagement) on the auto-navigating robotic processing vehicle 500, 600 to perform the tasks at a given processing station 11110, 11120. As an example, an auto-navigating robotic processing vehicle 500, 600 may be configured and loaded for an individual task such that all the comporting equipment is carried by a single auto-navigating robotic processing vehicle 500, 600 to complete the individual task (which may be, e.g., a process station function) in full with a single auto-navigating robotic processing vehicle 500, 600 and the items carried thereon.”); Regarding claim 16 Gilchrist/Dietz teaches The collaborative robot of claim 12, wherein the articulated robot actuator is configured to handle one or more of a plate or tray, a microscope slide tray or rack, a sample container gripper, a slide, and manually operated tools. 
(Gilchrist [0026] reads “For example, preprocessing conditions that may be performed by the at least one auto-navigating robotic processing vehicle 500, 600 include, but are not limited to, storage of sample trays, sample tray lids, transport and direct or indirect handoff of laboratory equipment (e.g., vacuum heads, brushes, Bunsen burners, microscopes, brooms, processing tools and/or fixtures, sample trays, etc.) to a human 199 (at a processing station 11110, 11120) and/or automated processing equipment at a processing station 11110, 11120, cleaning of an animal cage, laboratory table, etc., Examples of processes that may be performed by the at least one auto-navigating robotic processing vehicle 500, 600 include, but are not limited to, removing a sealing film from a sample and/or sample tray, reading an identification of a sample and/or sample tray, etc., pipetting fluids, capping and decapping tubes.”); Regarding claim 17 Gilchrist teaches A method for a collaborative robot in a collaborative process facility, the method comprising: providing the collaborative robot, the collaborative robot comprising: a movable base configured so as to movably position the collaborative robot at different variable work locations in the collaborative process facility, at least one of which work locations has different variable work location characteristics, (Gilchrist [0031] reads “The autonomous drive section 550 is connected to the frame 501F and is configured to traverse (e.g., move) the carriage 501 effecting vehicle travel on and across a facility floor 180 (see, e.g., FIGS. 1A and 1C), on which the at least one processing station 11110, 11120 (FIG. 1C) and/or automated system(s) 170 (FIGS. 1A and 1B) are disposed for processing laboratory samples and/or sample holders. 
An autonomous navigation section 551 of the auto-navigating robotic processing vehicle 500 is communicably connected to the autonomous drive section 550 so as to effect autonomous navigation vehicle travel with the autonomous drive section 550 on the facility floor 180. The autonomous navigation section 551 may include any suitable sensors (e.g., line following, inertial navigation, GPS, stereoscopic vision sensors, etc.) and/or programming so that the auto-navigating robotic processing vehicle 500 moves along the facility floor 180 and interfaces with a human 199 (FIG. 1C) and/or a processing module 11151-11155 of a processing station 11110, 11120 (FIG. 1C), or a tool formed by automated system 170 (FIGS. 1A and 1B).”); an articulated robot actuator, based on the movable base and having a robot end effector, having a motion, driven by a drive section, with at least one degree of freedom relative to the movable base (Gilchrist [0020] reads “For example, the collaborative robot may include an articulated arm (such as one or more of articulated arms 120, 172, 422, 510A, 510A′ described herein) to which the radar sensors are mounted. 
The articulated arm, in in some aspects, is mounted to and borne by a relocatable cart, such as those described herein, that form interchangeable stations with “plug and play” interfaces at different selectable workstations; and in other aspects the articulated arm may form a portion of a variably configurable system with selectably variable emergent stations as described herein.”); to effect with the robot end effector a predetermined function corresponding to at least one workstation, from more than one different interchangeable workstation, at the at least one work location, (Gilchrist [0039] reads “In this aspect, the robot arm 510A′ may be a different type of arm than the robot arm 510A of the auto-navigating robotic processing vehicle 500 such that the arms may provide a different number of degrees of freedom and/or a different type of articulated arm movement to effect the processes or preprocess conditions at the at least one processing station 11110, 11120. In other aspects, the robot arm of the auto-navigating robotic processing vehicle 600 may be the same arm as the arm 510A.”); the at least one workstation having an undeterministic variable pose with respect to the at least one work location, (Gilchrist [0057] reads “In one aspect, such as where the articulated arm 1532 is cart borne, the at least one axis of motion moves at least a portion of the articulated arm portions in the collaborative space (e.g., that corresponds to a selectably variable cart location of the cart-borne articulated arm). In the aspects, of the present disclosure, the motion of the robot arm 1532 is from a first location, in which the robot arm has a first shape (see, e.g., the different shapes 1690, 1695, 790, 795, 796 of the robot arms in FIGS. 
6A-7D), to another different location of at least the portion of the robot arm 1532 in the collaborative space SPC, in which the robot arm has another different shape (again see, e.g., the different shapes 1690, 1695, 790, 795, 796 of the robot arms in FIGS. 6A-7D). The shape of at least the portion of the robot arm 1532 at the first location is different than the another different shape of the robot arm 1532 at the other different location.”); Gilchrist does not teach a vision system connected to the articulated robot actuator and disposed to image a vision target connected to and corresponding uniquely to each of the at least one workstation, and a controller operably connected to the articulated robot actuator and communicably connected to the vision system; imaging, with the vision system, the vision target connected to and corresponding uniquely to each of the at least one workstation so as to inform the workstation pose and identify a predetermined function characteristic of the at least one workstation; and with the controller: registering image data from the vision system of the vision target, determining from the image data the workstation pose relative to the movable base, and automatically teaching the articulated robot actuator the workstation pose so as to effect a predetermined deterministic interface, associated with the predetermined function characteristic, between the workstation and robot end effector. Dietz in analogous art, teaches a vision system connected to the articulated robot actuator (Dietz [0037] reads “A first robotic arm 220 may be disposed on the first upper surface 232. The first robotic arm 220 may include any number of different interchangeable tool attachments 222, such as a picking assembly configured to grasp objects, a camera, other sensors, other suction and/or mechanical tools, or another tool attachment. 
The first robotic arm 220 may include, or may be coupled to, a torque resistance sensor 212 to improve safety and/or performance of the first robotic arm 220.”); and disposed to image a vision target connected to and corresponding uniquely to each of the at least one workstation, and a controller operably connected to the articulated robot actuator and communicably connected to the vision system; imaging, with the vision system, the vision target connected to and corresponding uniquely to each of the at least one workstation so as to inform the workstation pose and identify a predetermined function characteristic of the at least one workstation; (Dietz [0069] reads “In some embodiments, the laser pointer 692 may be used to direct the moveable camera assembly 600 to a fiducial or other machine readable code 696 to set a task or operational mode of the portable robotic manipulation assembly. Other embodiments may use any suitable marker of any type instead of, or in addition to, a machine readable code, including magnetic indicators, light-based indicators, or other physical indicators. The machine readable code 696 may be used by the portable robotic manipulation assembly to determine which tasks to be performed, for calibration, to determine which operational mode to initiate, to determine a location of the portable robotic manipulation assembly, and the like. 
Unique fiducials, indicators, or machine readable codes can be used by the portable robotic manipulation assembly to identify work spaces, and to enable the system to automatically shift to the mode appropriate for a current work space, such as during a move from a pick station to a pack station.”); and with the controller: registering image data from the vision system of the vision target, determining from the image data the workstation pose relative to the movable base, (Dietz [0049] reads “The portable robotic assembly 300 may include one or more computer systems or controllers configured to control operation of the first robotic arm and the camera assembly. For example, a user may select a selectable option that causes the controller to initiate corresponding actions, such as one or more of: item picking, item sorting, item induction, and truck loading. The controller may be configured to: determine that the item sorting selectable option was selected, cause the boom to position the camera assembly to image a group of items, determine a first item of the group of items, and cause the first robotic arm to grasp the first item and move the first item from a first location to a second location.”); and automatically teaching the articulated robot actuator the workstation pose so as to effect a predetermined deterministic interface, associated with the predetermined function characteristic, between the workstation and robot end effector. (Dietz [0069] reads “In some embodiments, the laser pointer 692 may be used to direct the moveable camera assembly 600 to a fiducial or other machine readable code 696 to set a task or operational mode of the portable robotic manipulation assembly. Other embodiments may use any suitable marker of any type instead of, or in addition to, a machine readable code, including magnetic indicators, light-based indicators, or other physical indicators. 
The machine readable code 696 may be used by the portable robotic manipulation assembly to determine which tasks to be performed, for calibration, to determine which operational mode to initiate, to determine a location of the portable robotic manipulation assembly, and the like. Unique fiducials, indicators, or machine readable codes can be used by the portable robotic manipulation assembly to identify work spaces, and to enable the system to automatically shift to the mode appropriate for a current work space, such as during a move from a pick station to a pack station.”); It would have been obvious to one with ordinary skill in the art, before the effective filing date of the claimed invention to have modified the teachings of Gilchrist with that of Dietz to include a system that would allow the robot manipulator to understand its working location based on information given by a known target. This would allow the system to better handle a variety of different tasks, which would make the system more effective and useable. (Dietz [0015] reads “(15) Accordingly, at various stages and/or locations within a fulfillment center, different tasks may need to be performed, each requiring different skills. Humans may be able to perform each of the different tasks, whereas robots may need specific equipment or may not be able to move within a fulfillment center so as to reach the different locations at which the different tasks are to be performed. However, robots may be able to assist humans in performing certain tasks, or may automate certain tasks, thereby increasing human effort efficiency and/or allowing human effort to be focused on higher cognition tasks. In order to assist humans and/or autonomously perform tasks, robots need to be in the correct location. 
Some robots may have limited movement ability and/or may be permanently fixed at a location due to power requirements (e.g., specific power needs or connection types, etc.), due to size or footprint (e.g., the robot is too large or heavy to move, etc.), due to stability (e.g., the robot is bolted to the floor for stability, etc.), and so forth. As a result, using the same robot to perform different tasks at different locations may be desired.”); Regarding claim 18 Gilchrist/Dietz teaches The method of claim 17, further comprising, with the controller, identifying the predetermined function characteristic of the at least one workstation from the image data and automatically initializing, from different predetermined robot automatic configurations, a predetermined robot automatic configuration associated with and responsive to the identified function characteristic. (Dietz [0069] reads “In some embodiments, the laser pointer 692 may be used to direct the moveable camera assembly 600 to a fiducial or other machine readable code 696 to set a task or operational mode of the portable robotic manipulation assembly. Other embodiments may use any suitable marker of any type instead of, or in addition to, a machine readable code, including magnetic indicators, light-based indicators, or other physical indicators. The machine readable code 696 may be used by the portable robotic manipulation assembly to determine which tasks to be performed, for calibration, to determine which operational mode to initiate, to determine a location of the portable robotic manipulation assembly, and the like. Unique fiducials, indicators, or machine readable codes can be used by the portable robotic manipulation assembly to identify work spaces, and to enable the system to automatically shift to the mode appropriate for a current work space, such as during a move from a pick station to a pack station.”); Claim(s) 19-25 and 27-30 is/are rejected under 35 U.S.C. 
103 as being unpatentable over Gilchrist (US 20220001540 A1) in view of Dietz (US 11623339 B1), and further in view of Gray (US 20230364790 A1). Regarding claim 19 Gilchrist teaches A collaborative robot for a collaborative process facility with different variable work locations in the facility, the robot comprising: a base located in the facility; (Gilchrist [0031] reads “The autonomous drive section 550 is connected to the frame 501F and is configured to traverse (e.g., move) the carriage 501 effecting vehicle travel on and across a facility floor 180 (see, e.g., FIGS. 1A and 1C), on which the at least one processing station 11110, 11120 (FIG. 1C) and/or automated system(s) 170 (FIGS. 1A and 1B) are disposed for processing laboratory samples and/or sample holders. An autonomous navigation section 551 of the auto-navigating robotic processing vehicle 500 is communicably connected to the autonomous drive section 550 so as to effect autonomous navigation vehicle travel with the autonomous drive section 550 on the facility floor 180. The autonomous navigation section 551 may include any suitable sensors (e.g., line following, inertial navigation, GPS, stereoscopic vision sensors, etc.) and/or programming so that the auto-navigating robotic processing vehicle 500 moves along the facility floor 180 and interfaces with a human 199 (FIG. 1C) and/or a processing module 11151-11155 of a processing station 11110, 11120 (FIG. 1C), or a tool formed by automated system 170 (FIGS. 1A and 1B).”); an articulated robot actuator based on the base and having a robot end effector having a motion, driven by a drive section, with at least one degree of freedom relative to the base (Gilchrist [0020] reads “For example, the collaborative robot may include an articulated arm (such as one or more of articulated arms 120, 172, 422, 510A, 510A′ described herein) to which the radar sensors are mounted.
The articulated arm, in in some aspects, is mounted to and borne by a relocatable cart, such as those described herein, that form interchangeable stations with “plug and play” interfaces at different selectable workstations; and in other aspects the articulated arm may form a portion of a variably configurable system with selectably variable emergent stations as described herein.”); to effect with the robot end effector a predetermined function corresponding to at least one workstation at one of the work locations, the at least one workstation having a workstation pose with respect to the at least one work location; (Gilchrist [0039] reads “In this aspect, the robot arm 510A′ may be a different type of arm than the robot arm 510A of the auto-navigating robotic processing vehicle 500 such that the arms may provide a different number of degrees of freedom and/or a different type of articulated arm movement to effect the processes or preprocess conditions at the at least one processing station 11110, 11120. 
In other aspects, the robot arm of the auto-navigating robotic processing vehicle 600 may be the same arm as the arm 510A.”); Gilchrist does not teach And a controller operably connected to the articulated robot actuator so as to move the robot end effector with the drive section in the at least one degree of freedom to a taught end effector position, with a taught end effector pose, corresponding to and substantially conformal with the workstation pose so as to effect a predetermined deterministic interface between the at least one workstation and the robot end effector; wherein the articulated robot actuator has a compliance mode in which the drive section is back driven in the at least one degree of freedom, and with the articulated robot actuator in compliance mode, robot end effector motion biased via contact of the articulated robot actuator, at the taught end effector pose, and the at least one workstation effects compliance of the drive section and changes an end effector pose in the at least one degree of freedom from the taught end effector pose to an updated end effector pose with reduced error with the workstation pose; and wherein the controller is configured to update the taught end effector pose to the updated end effector pose and the updated end effector pose is the taught end effector pose.
Dietz in analogous art, teaches And a controller operably connected to the articulated robot actuator so as to move the robot end effector with the drive section in the at least one degree of freedom to a taught end effector position, with a taught end effector pose, corresponding to and substantially conformal with the workstation pose so as to effect a predetermined deterministic interface between the at least one workstation and the robot end effector; (Dietz [0069] reads “In some embodiments, the laser pointer 692 may be used to direct the moveable camera assembly 600 to a fiducial or other machine readable code 696 to set a task or operational mode of the portable robotic manipulation assembly. Other embodiments may use any suitable marker of any type instead of, or in addition to, a machine readable code, including magnetic indicators, light-based indicators, or other physical indicators. The machine readable code 696 may be used by the portable robotic manipulation assembly to determine which tasks to be performed, for calibration, to determine which operational mode to initiate, to determine a location of the portable robotic manipulation assembly, and the like. Unique fiducials, indicators, or machine readable codes can be used by the portable robotic manipulation assembly to identify work spaces, and to enable the system to automatically shift to the mode appropriate for a current work space, such as during a move from a pick station to a pack station.”); It would have been obvious to one with ordinary skill in the art, before the effective filing date of the claimed invention to have modified the teachings of Gilchrist with that of Dietz to include a system that would allow the robot manipulator to understand its working location based on information given by a known target. This would allow the system to better handle a variety of different tasks, which would make the system more effective and useable.
(Dietz [0015] reads “(15) Accordingly, at various stages and/or locations within a fulfillment center, different tasks may need to be performed, each requiring different skills. Humans may be able to perform each of the different tasks, whereas robots may need specific equipment or may not be able to move within a fulfillment center so as to reach the different locations at which the different tasks are to be performed. However, robots may be able to assist humans in performing certain tasks, or may automate certain tasks, thereby increasing human effort efficiency and/or allowing human effort to be focused on higher cognition tasks. In order to assist humans and/or autonomously perform tasks, robots need to be in the correct location. Some robots may have limited movement ability and/or may be permanently fixed at a location due to power requirements (e.g., specific power needs or connection types, etc.), due to size or footprint (e.g., the robot is too large or heavy to move, etc.), due to stability (e.g., the robot is bolted to the floor for stability, etc.), and so forth. As a result, using the same robot to perform different tasks at different locations may be desired.”); Gilchrist/Dietz does not teach wherein the articulated robot actuator has a compliance mode in which the drive section is back driven in the at least one degree of freedom, and with the articulated robot actuator in compliance mode, robot end effector motion biased via contact of the articulated robot actuator, at the taught end effector pose, and the at least one workstation effects compliance of the drive section and changes an end effector pose in the at least one degree of freedom from the taught end effector pose to an updated end effector pose with reduced error with the workstation pose; and wherein the controller is configured to update the taught end effector pose to the updated end effector pose and the updated end effector pose is the taught end effector pose. 
Gray, in analogous art, teaches wherein the articulated robot actuator has a compliance mode in which the drive section is back driven in the at least one degree of freedom, and with the articulated robot actuator in compliance mode, robot end effector motion biased via contact of the articulated robot actuator, at the taught end effector pose, and the at least one workstation effects compliance of the drive section and changes an end effector pose in the at least one degree of freedom from the taught end effector pose to an updated end effector pose with reduced error with the workstation pose; and wherein the controller is configured to update the taught end effector pose to the updated end effector pose and the updated end effector pose is the taught end effector pose. (Gray [0049] reads “Once the operator has completed the task of manually positioning workpiece 316 into the starting location near the center of holding fixture 350, the operator activates a center location routine on controller 310 which proceeds to detect the precise location of holding fixture 350. Still referring to FIGS. 9 and 10 , in one embodiment the program causes robot 320 to move in a y-direction of the gripper coordinate system depicted in FIG. 9 relative to base 322 of robot 320 until it detects contact between first side surface 384 of workpiece 316 and side surface 360 of one jaw 354. The contact is detected by force sensor 392 (FIG. 5 ), in this case mounted within end of arm 342 of robot 320. … Next, the program causes robot 320 to move in an opposite y-direction until force sensor 392 detects contact between second side surface 386 of workpiece 316 and side surface 360 of the other jaw 354 of holding fixture 350. Controller 310 computes the center (denoted C in FIG. 10 ) from the detected locations of side surfaces 360 of jaws 354. 
…From the detected locations of side surfaces 360 of jaws 354, bottom surface 358 of opening 356 and end surface 362 of jaws 354, controller 310 computes the three-dimensional center point of holding fixture 350 (i.e., the loading location), and the process is repeated to average the result for improved accuracy. The center point location is stored by controller 310 as the loading location for the workpiece 316/holding fixture 350 combination.”); It would have been obvious to one with ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the teachings of Gilchrist/Dietz with those of Gray to include a method that would allow the system to update its calibration and known location based on contact between the robotic end effector and its given workstation. This would allow the system to have more accurate manipulations after calibration is complete. (Gray [0003] reads “Presently there are several ways robot arm location and alignment are handled in industry. The most common method is to carefully teach the robot by jogging it with a teach pendant to the holding fixture location. Alternatively, an operator either uses indicators or other measuring devices to find the holding fixture location or center and program the robot, or they may have the robot grasp the target workpiece and then jog the robot to teach the location for loading the workpiece into the holding fixture. In either scenario, the process is manual and requires that the operator set the location very carefully. The process is tedious in that for many parts, particularly for CNC Mills, this process will need to be repeated for every workpiece and holding fixture combination that will be automated. 
Also, since the robot system and CNC systems are generally independent devices, if the holding fixture is moved (located at a different location on the CNC table), the CNC axes positions for exchanging parts are changed, the robot system is moved, or the robot grippers' grip position on the part is changed, then the operator will need to repeat the process.”); Regarding claim 20 Gilchrist/Dietz/Gray teaches The collaborative robot of claim 19, wherein the controller is configured to move the robot end effector so that iterative compliance via iterative contact of the articulated robot actuator and at least one workstation resolves error in the taught end effector pose with the workstation pose. (Gray [0049] reads “Once the operator has completed the task of manually positioning workpiece 316 into the starting location near the center of holding fixture 350, the operator activates a center location routine on controller 310 which proceeds to detect the precise location of holding fixture 350. Still referring to FIGS. 9 and 10 , in one embodiment the program causes robot 320 to move in a y-direction of the gripper coordinate system depicted in FIG. 9 relative to base 322 of robot 320 until it detects contact between first side surface 384 of workpiece 316 and side surface 360 of one jaw 354. The contact is detected by force sensor 392 (FIG. 5 ), in this case mounted within end of arm 342 of robot 320. … Next, the program causes robot 320 to move in an opposite y-direction until force sensor 392 detects contact between second side surface 386 of workpiece 316 and side surface 360 of the other jaw 354 of holding fixture 350. Controller 310 computes the center (denoted C in FIG. 10 ) from the detected locations of side surfaces 360 of jaws 354. 
… From the detected locations of side surfaces 360 of jaws 354, bottom surface 358 of opening 356 and end surface 362 of jaws 354, controller 310 computes the three-dimensional center point of holding fixture 350 (i.e., the loading location), and the process is repeated to average the result for improved accuracy. The center point location is stored by controller 310 as the loading location for the workpiece 316/holding fixture 350 combination.”); Regarding claim 21 Gilchrist/Dietz/Gray teaches The collaborative robot of claim 19, wherein the base is a movable base configured so as to movably position the articulated robot actuator at the different variable work locations in the facility, at least one of which work locations has different variable work location characteristics. (Dietz [0069] reads “In some embodiments, the laser pointer 692 may be used to direct the moveable camera assembly 600 to a fiducial or other machine readable code 696 to set a task or operational mode of the portable robotic manipulation assembly. Other embodiments may use any suitable marker of any type instead of, or in addition to, a machine readable code, including magnetic indicators, light-based indicators, or other physical indicators. The machine readable code 696 may be used by the portable robotic manipulation assembly to determine which tasks to be performed, for calibration, to determine which operational mode to initiate, to determine a location of the portable robotic manipulation assembly, and the like. 
Unique fiducials, indicators, or machine readable codes can be used by the portable robotic manipulation assembly to identify work spaces, and to enable the system to automatically shift to the mode appropriate for a current work space, such as during a move from a pick station to a pack station.”); Regarding claim 22 Gilchrist/Dietz/Gray teaches The collaborative robot of claim 21, wherein the workstation pose is an undeterministic variable pose of the at least one work station at the at least one of the variable work locations. (Gilchrist [0057] reads “In one aspect, such as where the articulated arm 1532 is cart borne, the at least one axis of motion moves at least a portion of the articulated arm portions in the collaborative space (e.g., that corresponds to a selectably variable cart location of the cart-borne articulated arm). In the aspects, of the present disclosure, the motion of the robot arm 1532 is from a first location, in which the robot arm has a first shape (see, e.g., the different shapes 1690, 1695, 790, 795, 796 of the robot arms in FIGS. 6A-7D), to another different location of at least the portion of the robot arm 1532 in the collaborative space SPC, in which the robot arm has another different shape (again see, e.g., the different shapes 1690, 1695, 790, 795, 796 of the robot arms in FIGS. 6A-7D). The shape of at least the portion of the robot arm 1532 at the first location is different than the another different shape of the robot arm 1532 at the other different location.”); Regarding claim 23 Gilchrist/Dietz/Gray teaches The collaborative robot of claim 21, wherein the controller is configured to move the robot end effector so that iterative compliance via iterative contact between the articulated robot actuator and the at least one workstation resolves error in the taught end effector pose with each pose of the undeterministic variable workstation pose at each of the at least one variable work locations. 
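For illustration only (not part of the record), the iterative contact-based error resolution recited in claim 23 is of the kind performed by Gray's touch-off routine: jog along an axis until a force sensor reports contact on each side, average the two contact positions, and repeat to reduce noise. A minimal sketch, assuming a hypothetical `touch_off` helper that jogs until contact and returns the contact coordinate:

```python
# Hypothetical sketch of Gray-style touch-off center finding. The helper
# touch_off(direction) jogs the end effector in +1 or -1 along one axis
# until force-sensed contact, then returns the contact coordinate.
def find_center(touch_off, passes=2):
    """Average the midpoints of paired contacts over several passes."""
    centers = []
    for _ in range(passes):
        side_a = touch_off(+1)   # contact with one jaw side surface
        side_b = touch_off(-1)   # contact with the opposite jaw
        centers.append((side_a + side_b) / 2.0)
    # Repeating and averaging improves accuracy of the stored location.
    return sum(centers) / len(centers)
```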
(Gray [0049] reads “Once the operator has completed the task of manually positioning workpiece 316 into the starting location near the center of holding fixture 350, the operator activates a center location routine on controller 310 which proceeds to detect the precise location of holding fixture 350. Still referring to FIGS. 9 and 10 , in one embodiment the program causes robot 320 to move in a y-direction of the gripper coordinate system depicted in FIG. 9 relative to base 322 of robot 320 until it detects contact between first side surface 384 of workpiece 316 and side surface 360 of one jaw 354. The contact is detected by force sensor 392 (FIG. 5), in this case mounted within end of arm 342 of robot 320. … Next, the program causes robot 320 to move in an opposite y-direction until force sensor 392 detects contact between second side surface 386 of workpiece 316 and side surface 360 of the other jaw 354 of holding fixture 350. Controller 310 computes the center (denoted C in FIG. 10 ) from the detected locations of side surfaces 360 of jaws 354. … From the detected locations of side surfaces 360 of jaws 354, bottom surface 358 of opening 356 and end surface 362 of jaws 354, controller 310 computes the three-dimensional center point of holding fixture 350 (i.e., the loading location), and the process is repeated to average the result for improved accuracy. The center point location is stored by controller 310 as the loading location for the workpiece 316/holding fixture 350 combination.”); Regarding claim 24 Gilchrist/Dietz/Gray teaches The collaborative robot of claim 19, further comprising: a vision system connected to the articulated robot actuator (Dietz [0037] reads “A first robotic arm 220 may be disposed on the first upper surface 232. 
The first robotic arm 220 may include any number of different interchangeable tool attachments 222, such as a picking assembly configured to grasp objects, a camera, other sensors, other suction and/or mechanical tools, or another tool attachment. The first robotic arm 220 may include, or may be coupled to, a torque resistance sensor 212 to improve safety and/or performance of the first robotic arm 220.”); and disposed to image a vision target connected to and corresponding uniquely to each of the at least one workstation so as to inform the workstation pose and identify a predetermined function characteristic of the at least one workstation; (Dietz [0069] reads “In some embodiments, the laser pointer 692 may be used to direct the moveable camera assembly 600 to a fiducial or other machine readable code 696 to set a task or operational mode of the portable robotic manipulation assembly. Other embodiments may use any suitable marker of any type instead of, or in addition to, a machine readable code, including magnetic indicators, light-based indicators, or other physical indicators. The machine readable code 696 may be used by the portable robotic manipulation assembly to determine which tasks to be performed, for calibration, to determine which operational mode to initiate, to determine a location of the portable robotic manipulation assembly, and the like. 
Unique fiducials, indicators, or machine readable codes can be used by the portable robotic manipulation assembly to identify work spaces, and to enable the system to automatically shift to the mode appropriate for a current work space, such as during a move from a pick station to a pack station.”); and wherein the controller is operably connected to the articulated robot actuator and communicably connected to the vision system to register image data from the vision system of the vision target, the controller being configured so as to determine from the image data the workstation pose relative to the base (Dietz [0049] reads “The portable robotic assembly 300 may include one or more computer systems or controllers configured to control operation of the first robotic arm and the camera assembly. For example, a user may select a selectable option that causes the controller to initiate corresponding actions, such as one or more of: item picking, item sorting, item induction, and truck loading. The controller may be configured to: determine that the item sorting selectable option was selected, cause the boom to position the camera assembly to image a group of items, determine a first item of the group of items, and cause the first robotic arm to grasp the first item and move the first item from a first location to a second location.”); and automatically teach the articulated robot actuator the workstation pose so as to effect a predetermined deterministic interface, associated with the predetermined function characteristic, between the at least one workstation and robot end effector. (Dietz [0069] reads “In some embodiments, the laser pointer 692 may be used to direct the moveable camera assembly 600 to a fiducial or other machine readable code 696 to set a task or operational mode of the portable robotic manipulation assembly. 
Other embodiments may use any suitable marker of any type instead of, or in addition to, a machine readable code, including magnetic indicators, light-based indicators, or other physical indicators. The machine readable code 696 may be used by the portable robotic manipulation assembly to determine which tasks to be performed, for calibration, to determine which operational mode to initiate, to determine a location of the portable robotic manipulation assembly, and the like. Unique fiducials, indicators, or machine readable codes can be used by the portable robotic manipulation assembly to identify work spaces, and to enable the system to automatically shift to the mode appropriate for a current work space, such as during a move from a pick station to a pack station.”); Regarding claim 25 Gilchrist/Dietz/Gray teaches The collaborative robot of claim 24, wherein the controller is configured to identify the predetermined function characteristic of the at least one workstation from the image data and automatically initialize, from different predetermined robot automatic configurations, a predetermined robot automatic configuration associated with and responsive to the identified function characteristic. (Dietz [0069] reads “In some embodiments, the laser pointer 692 may be used to direct the moveable camera assembly 600 to a fiducial or other machine readable code 696 to set a task or operational mode of the portable robotic manipulation assembly. Other embodiments may use any suitable marker of any type instead of, or in addition to, a machine readable code, including magnetic indicators, light-based indicators, or other physical indicators. The machine readable code 696 may be used by the portable robotic manipulation assembly to determine which tasks to be performed, for calibration, to determine which operational mode to initiate, to determine a location of the portable robotic manipulation assembly, and the like. 
Unique fiducials, indicators, or machine readable codes can be used by the portable robotic manipulation assembly to identify work spaces, and to enable the system to automatically shift to the mode appropriate for a current work space, such as during a move from a pick station to a pack station.”); Regarding claim 27 Gilchrist/Dietz/Gray teaches The collaborative robot of claim 26, wherein the predetermined parameters describe at least one of type, size and pose/orientation of an article holding station of the at least one workstation to and from which the articulated robot actuator transports, pick and places the article with the robot end effector. (Dietz [0069] reads “In some embodiments, the laser pointer 692 may be used to direct the moveable camera assembly 600 to a fiducial or other machine readable code 696 to set a task or operational mode of the portable robotic manipulation assembly. Other embodiments may use any suitable marker of any type instead of, or in addition to, a machine readable code, including magnetic indicators, light-based indicators, or other physical indicators. The machine readable code 696 may be used by the portable robotic manipulation assembly to determine which tasks to be performed, for calibration, to determine which operational mode to initiate, to determine a location of the portable robotic manipulation assembly, and the like. 
Unique fiducials, indicators, or machine readable codes can be used by the portable robotic manipulation assembly to identify work spaces, and to enable the system to automatically shift to the mode appropriate for a current work space, such as during a move from a pick station to a pack station.”); Regarding claim 28 Gilchrist/Dietz/Gray teaches The collaborative robot of claim 25, wherein the initialized predetermined robot automatic configuration defines a robot configuration commensurate with the identified predetermined function characteristic of the at least one workstation, the robot configuration including at least one of motion characteristics and commands. (Dietz [0069] reads “In some embodiments, the laser pointer 692 may be used to direct the moveable camera assembly 600 to a fiducial or other machine readable code 696 to set a task or operational mode of the portable robotic manipulation assembly. Other embodiments may use any suitable marker of any type instead of, or in addition to, a machine readable code, including magnetic indicators, light-based indicators, or other physical indicators. The machine readable code 696 may be used by the portable robotic manipulation assembly to determine which tasks to be performed, for calibration, to determine which operational mode to initiate, to determine a location of the portable robotic manipulation assembly, and the like. Unique fiducials, indicators, or machine readable codes can be used by the portable robotic manipulation assembly to identify work spaces, and to enable the system to automatically shift to the mode appropriate for a current work space, such as during a move from a pick station to a pack station.” And [0070] reads “The machine readable code 696 may cause the controller to initiate an action or operational mode at the portable robotic manipulation assembly. 
For example, if the operational mode is a picking mode, the controller may be configured to cause the first robotic arm to move a first object, and to cause the moveable camera assembly 600 to capture an image of a second object to be moved while the first object is being moved.” It would be appreciated by one with ordinary skill in the art that a pick or a pack station would require the robotic system to perform different motions or commands.); Regarding claim 29 Gilchrist/Dietz/Gray teaches The collaborative robot of claim 25, wherein the predetermined robot automatic configuration is pre-stored in a memory of the controller, or downloaded to the controller upon determination of the workstation pose and the identity of the predetermined function characteristic of the at least one workstation. (Dietz [0069] reads “In some embodiments, the laser pointer 692 may be used to direct the moveable camera assembly 600 to a fiducial or other machine readable code 696 to set a task or operational mode of the portable robotic manipulation assembly. Other embodiments may use any suitable marker of any type instead of, or in addition to, a machine readable code, including magnetic indicators, light-based indicators, or other physical indicators. The machine readable code 696 may be used by the portable robotic manipulation assembly to determine which tasks to be performed, for calibration, to determine which operational mode to initiate, to determine a location of the portable robotic manipulation assembly, and the like. 
Unique fiducials, indicators, or machine readable codes can be used by the portable robotic manipulation assembly to identify work spaces, and to enable the system to automatically shift to the mode appropriate for a current work space, such as during a move from a pick station to a pack station.” And [0093] reads “The data storage 920 may store computer-executable code, instructions, or the like that may be loadable into the memory 904 and executable by the processor(s) 902 to cause the processor(s) 902 to perform or initiate various operations. The data storage 920 may additionally store data that may be copied to the memory 904 for use by the processor(s) 902 during the execution of the computer-executable instructions.”); Regarding claim 30 Gilchrist/Dietz/Gray teaches The collaborative robot of claim 24, wherein the base is movable and has an undeterministic pose at each of the different variable work locations. (Gilchrist [0057] reads “In one aspect, such as where the articulated arm 1532 is cart borne, the at least one axis of motion moves at least a portion of the articulated arm portions in the collaborative space (e.g., that corresponds to a selectably variable cart location of the cart-borne articulated arm). In the aspects, of the present disclosure, the motion of the robot arm 1532 is from a first location, in which the robot arm has a first shape (see, e.g., the different shapes 1690, 1695, 790, 795, 796 of the robot arms in FIGS. 6A-7D), to another different location of at least the portion of the robot arm 1532 in the collaborative space SPC, in which the robot arm has another different shape (again see, e.g., the different shapes 1690, 1695, 790, 795, 796 of the robot arms in FIGS. 6A-7D). 
The shape of at least the portion of the robot arm 1532 at the first location is different than the another different shape of the robot arm 1532 at the other different location.”); Regarding claim 31 Gilchrist/Dietz/Gray teaches The collaborative robot of claim 24, wherein the controller is configured to search a known area at the at least one work location with the vision system so as to acquire and image the vision target. (Dietz [0068] reads “The camera assembly 660 may include a third camera 690 disposed along a middle portion of the bracket 670. The third camera 690 may be disposed between the first camera 680 and the second camera 682. The third camera 690 may be the same type of camera as the first camera 680 and/or the second camera 682, or a different camera type. For example, the third camera 690 may be an area scanning camera, a line scanning camera, a network camera, a three-dimensional camera, or another camera type.”); Regarding claim 32 Gilchrist/Dietz/Gray teaches The collaborative robot of claim 24, wherein the at least one workstation comprises one or more of a microplate dispenser, an environmental control module, a reader, a spinner, a centrifuge, a decapper, a capper, a plate hotel rack, a random access sample storage carousel, a high density labware stacker carousel, a sequential sample storage carousel, a weight scale, a de-lidder, a lidder, electronic pipettes, an electronic pipettor, and a media preparation module. (Gilchrist [0029] reads “Still referring to FIG. 
1C, the processing stations 11110, 11120 may be linearly arranged with one or more process tools 11150-11155 which may include, but are not limited to, electronic pipettes, microplate dispensers, media preparation modules (e.g., sterilization and dispensing of sample medium), environmental control modules (e.g., refrigeration, freezers, incubators, clean environments, hoods, etc.), storage modules, and centrifuges.”); Regarding claim 33 Gilchrist/Dietz/Gray teaches The collaborative robot of claim 24, wherein the predetermined function characteristic of the at least one workstation is a function that corresponds with one or more of a microplate dispenser, an environmental control module, a reader, a spinner, a centrifuge, a decapper, a capper, a plate hotel rack, a random access sample storage carousel, a high density labware stacker carousel, a sequential sample storage carousel, a weight scale, a de-lidder, a lidder, electronic pipettes, an electronic pipettor, and a media preparation module. (Gilchrist [0029] reads “Still referring to FIG. 1C, the processing stations 11110, 11120 may be linearly arranged with one or more process tools 11150-11155 which may include, but are not limited to, electronic pipettes, microplate dispensers, media preparation modules (e.g., sterilization and dispensing of sample medium), environmental control modules (e.g., refrigeration, freezers, incubators, clean environments, hoods, etc.), storage modules, and centrifuges.” And [0027] reads “In one aspect, the at least one auto-navigating robotic processing vehicle 500, 600 is configured to provide all comporting (e.g., suitable) equipment (e.g., “process payloads” which may include process modules, peripherals, and/or consumables for station engagement, or “workpiece payloads” which may include samples and sample trays for station engagement) on the auto-navigating robotic processing vehicle 500, 600 to perform the tasks at a given processing station 11110, 11120. 
As an example, an auto-navigating robotic processing vehicle 500, 600 may be configured and loaded for an individual task such that all the comporting equipment is carried by a single auto-navigating robotic processing vehicle 500, 600 to complete the individual task (which may be, e.g., a process station function) in full with a single auto-navigating robotic processing vehicle 500, 600 and the items carried thereon.”); Regarding claim 34 Gilchrist/Dietz/Gray teaches The collaborative robot of claim 24, wherein the articulated robot actuator is configured to handle one or more of a plate or tray, a microscope slide tray or rack, a sample container gripper, a slide, and manually operated tools. (Gilchrist [0026] reads “For example, preprocessing conditions that may be performed by the at least one auto-navigating robotic processing vehicle 500, 600 include, but are not limited to, storage of sample trays, sample tray lids, transport and direct or indirect handoff of laboratory equipment (e.g., vacuum heads, brushes, Bunsen burners, microscopes, brooms, processing tools and/or fixtures, sample trays, etc.) 
to a human 199 (at a processing station 11110, 11120) and/or automated processing equipment at a processing station 11110, 11120, cleaning of an animal cage, laboratory table, etc., Examples of processes that may be performed by the at least one auto-navigating robotic processing vehicle 500, 600 include, but are not limited to, removing a sealing film from a sample and/or sample tray, reading an identification of a sample and/or sample tray, etc., pipetting fluids, capping and decapping tubes.”); Regarding claim 35 Gilchrist teaches A method for a collaborative robot for a collaborative process facility with different variable work locations in the facility, the method comprising: providing the collaborative robot where the collaborative robot includes: a base located in the facility, (Gilchrist [0031] reads “The autonomous drive section 550 is connected to the frame 501F and is configured to traverse (e.g., move) the carriage 501 effecting vehicle travel on and across a facility floor 180 (see, e.g., FIGS. 1A and 1C), on which the at least one processing station 11110, 11120 (FIG. 1C) and/or automated system(s) 170 (FIGS. 1A and 1B) are disposed for processing laboratory samples and/or sample holders. An autonomous navigation section 551 of the auto-navigating robotic processing vehicle 500 is communicably connected to the autonomous drive section 550 so as to effect autonomous navigation vehicle travel with the autonomous drive section 550 on the facility floor 180. The autonomous navigation section 551 may include any suitable sensors (e.g., line following, inertial navigation, GPS, stereoscopic vision sensors, etc.) and/or programming so that the auto-navigating robotic processing vehicle 500 moves along the facility floor 180 and interfaces with a human 199 (FIG. 1C) and/or a processing module 11151-11155 of a processing station 11110, 11120 (FIG. 1C), or a tool formed by automated system 170 (FIGS. 
1A and 1B).”); and an articulated robot actuator based on the base and having a robot end effector having a motion, driven by a drive section, with at least one degree of freedom relative to the base (Gilchrist [0020] reads “For example, the collaborative robot may include an articulated arm (such as one or more of articulated arms 120, 172, 422, 510A, 510A′ described herein) to which the radar sensors are mounted. The articulated arm, in in some aspects, is mounted to and borne by a relocatable cart, such as those described herein, that form interchangeable stations with “plug and play” interfaces at different selectable workstations; and in other aspects the articulated arm may form a portion of a variably configurable system with selectably variable emergent stations as described herein.”); to effect with the robot end effector a predetermined function corresponding to at least one workstation at one of the work locations, the at least one workstation having a workstation pose with respect to the at least one work location; (Gilchrist [0039] reads “In this aspect, the robot arm 510A′ may be a different type of arm than the robot arm 510A of the auto-navigating robotic processing vehicle 500 such that the arms may provide a different number of degrees of freedom and/or a different type of articulated arm movement to effect the processes or preprocess conditions at the at least one processing station 11110, 11120. 
In other aspects, the robot arm of the auto-navigating robotic processing vehicle 600 may be the same arm as the arm 510A.”); Gilchrist does not teach and with a controller operably connected to the articulated robot actuator: effecting movement of the robot end effector with the drive section in the at least one degree of freedom to a taught end effector position, with a taught end effector pose, corresponding to and substantially conformal with the workstation pose so as to effect a predetermined deterministic interface between the at least one workstation and the robot end effector, wherein the articulated robot actuator has a compliance mode in which the drive section is back driven in the at least one degree of freedom, and with the articulated robot actuator in compliance mode, robot end effector motion biased via contact of the articulated robot actuator at the taught end effector pose, and the at least one workstation effects compliance of the drive section and changes an end effector pose in the at least one degree of freedom from the taught end effector pose to an updated end effector pose with reduced error with the workstation pose; and updating, with the controller, the taught end effector pose to the updated end effector pose and the updated end effector pose is the taught end effector pose. 
Dietz, in analogous art, teaches and with a controller operably connected to the articulated robot actuator: effecting movement of the robot end effector with the drive section in the at least one degree of freedom to a taught end effector position, with a taught end effector pose, corresponding to and substantially conformal with the workstation pose so as to effect a predetermined deterministic interface between the at least one workstation and the robot end effector, (Dietz [0049] reads “The portable robotic assembly 300 may include one or more computer systems or controllers configured to control operation of the first robotic arm and the camera assembly. For example, a user may select a selectable option that causes the controller to initiate corresponding actions, such as one or more of: item picking, item sorting, item induction, and truck loading. The controller may be configured to: determine that the item sorting selectable option was selected, cause the boom to position the camera assembly to image a group of items, determine a first item of the group of items, and cause the first robotic arm to grasp the first item and move the first item from a first location to a second location.”); It would have been obvious to one with ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the teachings of Gilchrist with those of Dietz to include a system that would allow the robot manipulator to understand its working location based on information given by a known target. This would allow the system to better handle a variety of different tasks, which would make the system more effective and usable. (Dietz [0015] reads “(15) Accordingly, at various stages and/or locations within a fulfillment center, different tasks may need to be performed, each requiring different skills. 
Humans may be able to perform each of the different tasks, whereas robots may need specific equipment or may not be able to move within a fulfillment center so as to reach the different locations at which the different tasks are to be performed. However, robots may be able to assist humans in performing certain tasks, or may automate certain tasks, thereby increasing human effort efficiency and/or allowing human effort to be focused on higher cognition tasks. In order to assist humans and/or autonomously perform tasks, robots need to be in the correct location. Some robots may have limited movement ability and/or may be permanently fixed at a location due to power requirements (e.g., specific power needs or connection types, etc.), due to size or footprint (e.g., the robot is too large or heavy to move, etc.), due to stability (e.g., the robot is bolted to the floor for stability, etc.), and so forth. As a result, using the same robot to perform different tasks at different locations may be desired.”); Gilchrist/Dietz does not teach wherein the articulated robot actuator has a compliance mode in which the drive section is back driven in the at least one degree of freedom, and with the articulated robot actuator in compliance mode, robot end effector motion biased via contact of the articulated robot actuator at the taught end effector pose, and the at least one workstation effects compliance of the drive section and changes an end effector pose in the at least one degree of freedom from the taught end effector pose to an updated end effector pose with reduced error with the workstation pose; and updating, with the controller, the taught end effector pose to the updated end effector pose and the updated end effector pose is the taught end effector pose. 
Gray, in analogous art, teaches wherein the articulated robot actuator has a compliance mode in which the drive section is back driven in the at least one degree of freedom, and with the articulated robot actuator in compliance mode, robot end effector motion biased via contact of the articulated robot actuator at the taught end effector pose, and the at least one workstation effects compliance of the drive section and changes an end effector pose in the at least one degree of freedom from the taught end effector pose to an updated end effector pose with reduced error with the workstation pose; and updating, with the controller, the taught end effector pose to the updated end effector pose and the updated end effector pose is the taught end effector pose. (Gray [0049] reads “Once the operator has completed the task of manually positioning workpiece 316 into the starting location near the center of holding fixture 350, the operator activates a center location routine on controller 310 which proceeds to detect the precise location of holding fixture 350. Still referring to FIGS. 9 and 10 , in one embodiment the program causes robot 320 to move in a y-direction of the gripper coordinate system depicted in FIG. 9 relative to base 322 of robot 320 until it detects contact between first side surface 384 of workpiece 316 and side surface 360 of one jaw 354. The contact is detected by force sensor 392 (FIG. 5 ), in this case mounted within end of arm 342 of robot 320. … Next, the program causes robot 320 to move in an opposite y-direction until force sensor 392 detects contact between second side surface 386 of workpiece 316 and side surface 360 of the other jaw 354 of holding fixture 350. Controller 310 computes the center (denoted C in FIG. 10 ) from the detected locations of side surfaces 360 of jaws 354. 
…From the detected locations of side surfaces 360 of jaws 354, bottom surface 358 of opening 356 and end surface 362 of jaws 354, controller 310 computes the three-dimensional center point of holding fixture 350 (i.e., the loading location), and the process is repeated to average the result for improved accuracy. The center point location is stored by controller 310 as the loading location for the workpiece 316/holding fixture 350 combination.”); It would have been obvious to one with ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the teachings of Gilchrist/Dietz with that of Gray to include a method that would allow the system to update its calibration and known location based on how the robot end effector interacts with its given workstation. This would allow the system to have more accurate manipulations after calibration is complete. (Gray [0003] reads “Presently there are several ways robot arm location and alignment are handled in industry. The most common method is to carefully teach the robot by jogging it with a teach pendant to the holding fixture location. Alternatively, an operator either uses indicators or other measuring devices to find the holding fixture location or center and program the robot, or they may have the robot grasp the target workpiece and then jog the robot to teach the location for loading the workpiece into the holding fixture. In either scenario, the process is manual and requires that the operator set the location very carefully. The process is tedious in that for many parts, particularly for CNC Mills, this process will need to be repeated for every workpiece and holding fixture combination that will be automated. 
Also, since the robot system and CNC systems are generally independent devices, if the holding fixture is moved (located at a different location on the CNC table), the CNC axes positions for exchanging parts are changed, the robot system is moved, or the robot grippers' grip position on the part is changed, then the operator will need to repeat the process.”); Claim(s) 3 is/are rejected under 35 U.S.C. 103 as being unpatentable over Gilchrist/Dietz as applied above, and further in view of Wartenberg (US 20230173682 A1). Regarding claim 3, Gilchrist/Dietz teaches the collaborative robot of claim 2. Gilchrist/Dietz does not teach wherein the initialized predetermined robot automatic configuration defines predetermined parameters describing the predetermined deterministic interface between the workstation and the robot end effector. Wartenberg, in analogous art, teaches wherein the initialized predetermined robot automatic configuration defines predetermined parameters describing the predetermined deterministic interface between the workstation and the robot end effector. (Wartenberg [0050] reads “The system memory 210 contains instructions, conceptually illustrated as a group of modules, that control the operation of CPU 205 and its interaction with the other hardware components. … and a workspace modeling module 249 may model the workspace parameter (e.g., the dimensions, workflow, locations of the equipment and/or resources). The result of the classification, object recognition and simulation as well as the POEs of the machinery and/or human, the determined optimal path and workspace parameters may be stored in a space map 250, which contains a volumetric representation of the workspace 100 with each voxel (or other unit of representation) labeled, within the space map, as described herein. 
Alternatively, the space map 250 may simply be a 3D array of voxels, with voxel labels being stored in a separate database (in memory 210 or in mass storage 212).”); It would have been obvious to one with ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the teachings of Gilchrist/Dietz with that of Wartenberg to include a method that would allow the robotic system to be taught more information about its current working location. This would allow the robotic manipulator to better perform its intended function at a given work location. (Wartenberg [0003] reads “Traditional machinery for manufacturing and other industrial applications has been supplanted by, or supplemented with, new forms of automation that save costs, increase productivity and quality, eliminate dangerous, laborious, or repetitive work, and/or augment human capability. For example, industrial robots possess strength, speed, reliability, and lifetimes that may far exceed human potential. The recent trend toward increased human-robot collaboration in manufacturing workcells imposes particularly stringent requirements on robot performance and capabilities. Conventional industrial robots are dangerous to humans and are usually kept separate from humans through guarding — e.g., robots may be surrounded by a cage with doors that, when opened, cause an electrical circuit to place the machinery in a safe state. Other approaches involve light curtains or two-dimensional (2D) area sensors that slow down or shut off the machinery when humans approach it or cross a prescribed distance threshold. These systems disadvantageously constrain collaborative use of the workspace.”); Claim(s) 26 is/are rejected under 35 U.S.C. 103 as being unpatentable over Gilchrist/Dietz/Gray as applied above, and further in view of Wartenberg (US 20230173682 A1). Regarding claim 26, Gilchrist/Dietz/Gray teaches the collaborative robot of claim 25. 
Gilchrist/Dietz/Gray does not teach wherein the initialized predetermined robot automatic configuration defines predetermined parameters describing the predetermined deterministic interface between the workstation and the robot end effector. Wartenberg, in analogous art, teaches wherein the initialized predetermined robot automatic configuration defines predetermined parameters describing the predetermined deterministic interface between the workstation and the robot end effector. (Wartenberg [0050] reads “The system memory 210 contains instructions, conceptually illustrated as a group of modules, that control the operation of CPU 205 and its interaction with the other hardware components. … and a workspace modeling module 249 may model the workspace parameter (e.g., the dimensions, workflow, locations of the equipment and/or resources). The result of the classification, object recognition and simulation as well as the POEs of the machinery and/or human, the determined optimal path and workspace parameters may be stored in a space map 250, which contains a volumetric representation of the workspace 100 with each voxel (or other unit of representation) labeled, within the space map, as described herein. Alternatively, the space map 250 may simply be a 3D array of voxels, with voxel labels being stored in a separate database (in memory 210 or in mass storage 212).”); It would have been obvious to one with ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the teachings of Gilchrist/Dietz/Gray with that of Wartenberg to include a method that would allow the robotic system to be taught more information about its current working location. This would allow the robotic manipulator to better perform its intended function at a given work location. 
(Wartenberg [0003] reads “Traditional machinery for manufacturing and other industrial applications has been supplanted by, or supplemented with, new forms of automation that save costs, increase productivity and quality, eliminate dangerous, laborious, or repetitive work, and/or augment human capability. For example, industrial robots possess strength, speed, reliability, and lifetimes that may far exceed human potential. The recent trend toward increased human-robot collaboration in manufacturing workcells imposes particularly stringent requirements on robot performance and capabilities. Conventional industrial robots are dangerous to humans and are usually kept separate from humans through guarding — e.g., robots may be surrounded by a cage with doors that, when opened, cause an electrical circuit to place the machinery in a safe state. Other approaches involve light curtains or two-dimensional (2D) area sensors that slow down or shut off the machinery when humans approach it or cross a prescribed distance threshold. These systems disadvantageously constrain collaborative use of the workspace.”); Other References Not Cited Throughout examination, other references were found that could read on the claimed subject matter. Though these references were not used in this examination, they could be used in future examination and could read on the contents of the current disclosure. These references are Garcia (US 20210146532 A1), Colbrunn (US 20210229279 A1), and Tan (US 20230249351 A1). Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOHN MARTIN O'MALLEY whose telephone number is (571)272-6228. The examiner can normally be reached Mon - Fri 9 am - 5 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. 
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ramon Mercado can be reached at (571) 270 - 5744. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /JOHN MARTIN O'MALLEY/Examiner, Art Unit 3658 /Ramon A. Mercado/Supervisory Patent Examiner, Art Unit 3658

Prosecution Timeline

Dec 05, 2024
Application Filed
Mar 09, 2026
Non-Final Rejection — §103 (current)


Prosecution Projections

1-2
Expected OA Rounds
33%
Grant Probability
0%
With Interview (-33.3%)
3y 0m
Median Time to Grant
Low
PTA Risk
Based on 3 resolved cases by this examiner. Grant probability derived from career allow rate.
