Prosecution Insights
Last updated: April 19, 2026
Application No. 18/106,619

CONTEXT-SENSITIVE SAFETY MONITORING OF COLLABORATIVE WORK ENVIRONMENTS

Status: Non-Final OA (§103, §DP)
Filed: Feb 07, 2023
Examiner: KATZ, DYLAN MICHAEL
Art Unit: 3657
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Symbotic, LLC
OA Round: 4 (Non-Final)

Grant Probability: 87% (Favorable)
Projected OA Rounds: 4-5
Projected Time to Grant: 2y 7m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 87% (above average) — 242 granted / 279 resolved; +34.7% vs TC avg
Interview Lift: +20.8% (strong), across resolved cases with an interview vs. without
Typical Timeline: 2y 7m average prosecution; 45 applications currently pending
Career History: 324 total applications across all art units
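The headline figures above are simple ratios, and can be reproduced from the raw counts. A minimal sketch follows; the with/without-interview rates in it are illustrative placeholders, since the report states only the +20.8% lift, not the underlying counts:

```python
# Reproduce the dashboard's headline figures from raw counts.
# The counts (242 granted / 279 resolved, +34.7% vs TC avg) come from the
# report above; the interview split below is ILLUSTRATIVE ONLY.

def allow_rate(granted: int, resolved: int) -> float:
    """Allowance rate as a percentage of resolved applications."""
    return 100.0 * granted / resolved

career = allow_rate(242, 279)
print(f"Career allow rate: {career:.1f}%")    # -> 86.7%, displayed as 87%

# "+34.7% vs TC avg" reads as a percentage-point delta, implying a TC average of:
tc_avg = career - 34.7
print(f"Implied TC average: {tc_avg:.1f}%")   # -> 52.0%

# Interview lift, read as a percentage-point difference in allow rates;
# the 95.8/75.0 split is hypothetical and merely consistent with +20.8%.
with_interview, without_interview = 95.8, 75.0
print(f"Interview lift: +{with_interview - without_interview:.1f}%")  # -> +20.8%
```

Note that 242/279 rounds to 86.7%, which the dashboard displays as 87%.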

Statute-Specific Performance

§101: 7.7% (-32.3% vs TC avg)
§103: 50.0% (+10.0% vs TC avg)
§102: 20.3% (-19.7% vs TC avg)
§112: 16.5% (-23.5% vs TC avg)

Tech Center averages are estimates. Figures are based on career data from 279 resolved cases.

Office Action

Grounds: §103 (obviousness) and nonstatutory double patenting (§DP)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

This Office action is in response to amendments filed 01/28/2026. Claims 17-48 are pending.

Applicant’s arguments and amendments to Claims 30 and 46 with respect to claim objections have been fully considered and are persuasive. The objections to Claims 30 and 46 have been withdrawn.

Applicant’s arguments and amendments to the claims with respect to the prior art rejections of Claims 17-31 and 33-47 under 35 USC 103 have been fully considered and are persuasive. Therefore, the rejections of Claims 17-31 and 33-47 under 35 USC 103 are withdrawn. However, a new rejection is provided in view of Kuffner, relying on Kuffner rather than Einav for the 3D first region.

With respect to applicant’s arguments that the robot path planning scheme taught by Einav is not compatible with the robot path planning scheme taught by Kuffner, the examiner respectfully disagrees. Kuffner is relied upon for the 3D representation of the robot used to check for collisions, so the path planning scheme used by Kuffner after determining this 3D representation is not relevant to the grounds of the rejection. One of ordinary skill in the art would understand how to compare the 3D volume of the robot taught by Kuffner with the 3D volume of the human collaborator taught by Einav to check for collisions. The motivation to incorporate this feature from Kuffner is given at least in par. 0066, as it reduces the chances of collision, improving safety.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees.
A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c).
A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 17, 21, 23-26, 33, 37, and 39-42 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-2, 9-11, 16-17, and 18-20 of U.S. Patent No. 10,882,185 (hereinafter ‘185) in view of Aberg et al. (US 20170087722, hereinafter Aberg).

Regarding Claims 17 and 33 and their dependents: due to the near word-for-word matching of the claim limitation language, a detailed mapping is omitted and the table below maps the claims of the present application to those of ‘185. Claims 1 and 16 of ‘185 contain each and every limitation of claims 17 and 33 of the present application, respectively, except for “simulate, via a simulation module, performance of at least a portion of the activity by the machinery in accordance with the stored model” and the corresponding simulating step in claim 33. However, Aberg teaches: simulate, via a simulation module, performance of at least a portion of the activity by the machinery in accordance with the stored model (see at least “the simulation can be run and all the collision events between the swept volume of the robot and the swept volume of the human can be tracked;” in par. 0040) and the corresponding simulating step in claim 33.

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system and method claimed by ‘185 to incorporate the teachings of Aberg, wherein the swept volumes of robot parts and a person’s body are simulated for collision checking. The motivation to incorporate the teachings of Aberg would be to improve safety during human-robot collaboration in a production area (see par. 0044).

    18106619 Claim No.    10882185 Claim No.
    17                    1
    21                    2
    23                    1
    24                    9
    25                    10
    26                    11
    33                    16
    37                    17
    39                    16
    40                    18
    41                    19
    42                    20

Claims 17, 19, 23, 33, 35, and 39 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 2 and 12 of U.S. Patent No. 11,040,450 (hereinafter ‘450) in view of Aberg et al. (US 20170087722, hereinafter Aberg).

Regarding Claims 17 and 33: due to the near word-for-word matching of the claim limitation language, a detailed mapping is omitted and the table below maps the claims of the present application to those of ‘450. Claims 2 and 12 of ‘450 contain each and every limitation of claims 17 and 33 of the present application, respectively, except for “simulate, via a simulation module, performance of at least a portion of the activity by the machinery in accordance with the stored model” (and the corresponding simulating step in Claim 33). However, Aberg teaches: simulate, via a simulation module, performance of at least a portion of the activity by the machinery in accordance with the stored model (see at least “the simulation can be run and all the collision events between the swept volume of the robot and the swept volume of the human can be tracked;” in par.
0040); (and the corresponding simulating step in Claim 33).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system/method claimed by ‘450 to incorporate the teachings of Aberg, wherein the swept volumes of robot parts and a person’s body are simulated for collision checking. The motivation to incorporate the teachings of Aberg would be to improve safety during human-robot collaboration in a production area (see par. 0044).

    18106619 Claim No.    11040450 Claim No.
    17                    2
    19                    2
    23                    2
    33                    12
    35                    12
    39                    12

Claims 17, 21, 24-26, 33, 37, and 40-42 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-2, 8-10, and 15-19 of U.S. Patent No. 11,376,741 (hereinafter ‘741) in view of Aberg et al. (US 20170087722, hereinafter Aberg).

Regarding Claims 17 and 33: due to the near word-for-word matching of the claim limitation language, a detailed mapping is omitted and the table below maps the claims of the present application to those of ‘741. Claims 1 and 15 of ‘741 contain each and every limitation of claims 17 and 33 of the present application, respectively, except for “simulate, via a simulation module, performance of at least a portion of the activity by the machinery in accordance with the stored model” (and the corresponding simulating step in Claim 33). However, Aberg teaches: simulate, via a simulation module, performance of at least a portion of the activity by the machinery in accordance with the stored model (see at least “the simulation can be run and all the collision events between the swept volume of the robot and the swept volume of the human can be tracked;” in par. 0040) (and the corresponding simulating step in Claim 33).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system/method claimed by ‘741 to incorporate the teachings of Aberg, wherein the swept volumes of robot parts and a person’s body are simulated for collision checking. The motivation to incorporate the teachings of Aberg would be to improve safety during human-robot collaboration in a production area (see par. 0044).

    18106619 Claim No.    11376741 Claim No.
    17                    1
    21                    2
    24                    8
    25                    9
    26                    10
    33                    15
    37                    16
    40                    17
    41                    18
    42                    19

Claims 17, 32, 33, and 48 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1 and 9 of U.S. Patent No. 11,602,852 (hereinafter ‘852) in view of Aberg et al. (US 20170087722, hereinafter Aberg).

Regarding Claims 17, 32, 33, and 48: due to the near word-for-word matching of the claim limitation language, a detailed mapping is omitted and the table below maps the claims of the present application to those of ‘852. Claims 1 and 9 of ‘852 contain each and every limitation of claims 17 and 33 of the present application, respectively, except for “simulate, via a simulation module, performance of at least a portion of the activity by the machinery in accordance with the stored model” (and the corresponding simulating step in Claim 33). However, Aberg teaches: simulate, via a simulation module, performance of at least a portion of the activity by the machinery in accordance with the stored model (see at least “the simulation can be run and all the collision events between the swept volume of the robot and the swept volume of the human can be tracked;” in par.
0040); (and the corresponding simulating step in Claim 33).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system/method claimed by ‘852 to incorporate the teachings of Aberg, wherein the swept volumes of robot parts and a person’s body are simulated for collision checking. The motivation to incorporate the teachings of Aberg would be to improve safety during human-robot collaboration in a production area (see par. 0044).

    18106619 Claim No.    11602852 Claim No.
    17                    1
    32                    1
    33                    9
    48                    9

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 17-26, 28-29, 33-42, 44-45 is/are rejected under 35 U.S.C. 103 as being unpatentable over Einav et al. (US 20190105779 A1, hereinafter Einav) in view of Kuffner et al. (US 20160016315, hereinafter Kuffner).

Regarding Claim 17, Einav teaches:

A safety system (see at least "a robotic system supporting simultaneous human-performed and robotic operations within a collaborative workspace" in par.
0035) for enforcing safe operation of machinery performing an activity in a three-dimensional (3D) workspace (see at least "In some embodiments, movement types include, for example, movements to reach and/or move between zones of other actions; avoidance movements to stay clear of obstructions, and in particular for safety avoidance of human body members; tracking movements to follow a moving target; guided movements, where movement is under close human supervision, for example actual physical guiding (grabbing the robot and tugging) or guidance by gestures or other indications; and/or approach movements, and in particular movements to safely approach a region where a collaborative action is to take place. In some embodiments, various types of stopping are encompassed under “movement” actions, including emergency (safety) stops, stops to await a next operation, autonomous stops to await a human operator's approach for a collaborative action;" in par. 0101 and Fig. 1A), the system comprising:

a computer memory (see at least “Optionally, the data processor includes a volatile memory for storing instructions and/or data and/or a non-volatile storage, for example, a magnetic hard-disk and/or removable media, for storing instructions and/or data.” in par. 0054) for storing (ii) a safety protocol specifying speed restrictions of the machinery in proximity to a human and a minimum separation distance between the machinery and a human (see at least “At the next level, a proximity envelope 906 is defined, in some embodiments, by sensors which detect unexpected proximity of a robotic member to an object (e.g., a body member of a human operator 150)… A robot's safety response to proximity is optionally to treat it as a hard operating limit 908, but can also be less abrupt for example, a controller (such as control unit 160) can command the robotic arm to slow its movements, without halting entirely.” in par. 0166 and “Optionally, instead of full collision avoidance being the goal of the movement planner 920, the goal is to avoid collisions at or above some velocity threshold which is deemed to be potentially dangerous, e.g., 5 cm/sec, 10 cm/sec, 20 cm/sec, 50 cm/sec, 100 cm/sec, or another faster, slower, or intermediate collision velocity. Optionally, the velocity threshold is set asymmetrically for movements by the robot and movements by the human operator; for example, a body member of the human operator is allowed to approach the robot at a relatively higher velocity when the robot is itself moving at a relatively slow velocity (e.g., human : robot relative velocities in a 2:1, 3:1, 5:1, 7:1, 10:1 ratio or higher).” in par. 0215);

and a processor (see at least "a human-robot collaboration task cell is provided with one or more imaging devices configured, together with a suitable processor, to act as a motion tracking device for body members (e.g., arms and/or head) of a human operator" in par. 0103) computationally generate a 3D spatial representation of the workspace (see at least "The envelopes are defined, in some embodiments, based on processing of images from imaging devices 110 to determine the positions (e.g., in three dimensions; optionally in two dimensions) of the operator's respective body members. Zones of several types defined based on body member position sensing are described, for example, in relation to FIG. 2B, and FIGS. 4-9 herein. Optionally, envelopes of any of the described types are managed simultaneously, for example, safety envelopes are avoided by robotic movements while one or more appropriate targeting envelopes are sought. Moreover, there may be a plurality of safety envelopes protecting a particular human operator 150 body member at any given time, e.g., a task prediction" in par. 0172);

a simulation module and a processor configured to: simulate, via the simulation module, performance of at least a portion of the activity by the machinery (see at least “For example, at point 1101, kinematic predictions by conflict predictor 932 show that continuation of robotic arm 120 along path 1102 is expected to intrude (and/or it cannot be sufficiently ruled out that path 1102 will not intrude) into the predicted kinematic envelope 1108 at some future time. Optionally, movement planner 920 diverts the motion of robotic arm 920 onto a new path 1106.” in par. 0212);

a mapping module and processor configured to: map, via the mapping module, a first region of the workspace corresponding to space occupied by the machinery within the workspace (see at least “For example, at point 1101, kinematic predictions by conflict predictor 932 show that continuation of robotic arm 120 along path 1102 is expected to intrude (and/or it cannot be sufficiently ruled out that path 1102 will not intrude) into the predicted kinematic envelope 1108 at some future time. Optionally, movement planner 920 diverts the motion of robotic arm 920 onto a new path 1106.” in par. 0212);

identify a second 3D region of the workspace corresponding to space occupied or potentially occupied by a human within the workspace augmented by a 3D envelope around the human corresponding to anticipated movements of the human within the workspace within a predetermined future time (see at least “For example, at point 1101, kinematic predictions by conflict predictor 932 show that continuation of robotic arm 120 along path 1102 is expected to intrude (and/or it cannot be sufficiently ruled out that path 1102 will not intrude) into the predicted kinematic envelope 1108 at some future time. Optionally, movement planner 920 diverts the motion of robotic arm 920 onto a new path 1106.” in par. 0212);

and during physical performance of the activity, restrict operation of the machinery in accordance with a safety protocol based on proximity between the first and second regions to exclude contact between the machinery and the human (see at least “It is noted that the task prediction envelope 902 is used, in some embodiments, for one or both of preventing moving a robotic part through areas where human body members are likely to be (i.e., the prediction envelope is used as a safety envelope), and targeting a robotic part to a position where collaborative interaction is expected to be indicated/requested by the human operator 150 (i.e., the prediction envelope is used as a targeting envelope).” in par. 0163 and “For example, at point 1101, kinematic predictions by conflict predictor 932 show that continuation of robotic arm 120 along path 1102 is expected to intrude (and/or it cannot be sufficiently ruled out that path 1102 will not intrude) into the predicted kinematic envelope 1108 at some future time. Optionally, movement planner 920 diverts the motion of robotic arm 920 onto a new path 1106.” in par. 0212 and “For example, a possible collision optionally is only reacted to by the movement planner 920 when the situation reaches a point beyond which the robotic arm cannot be guaranteed to respond in time to an avoidance command (this also may be understood as a type of proximity envelope, as described in relation to FIG. 8). Optionally, movement planner 920 seeks to maintain a certain minimum avoidance buffer by making small adjustments (e.g., adjustments with no more than a small time penalty) to movement early so that sudden adjustments are less likely to be needed to avoid a collision later on. Optionally, any sufficiently low-penalty path adjustment is immediately implemented to reduce collision likelihood, but high-penalty path adjustments are avoided until the no-collision guarantee is at immediate risk.” in par. 0215).

Note in par.
0163, Einav makes clear that when the prediction envelope is a “safety envelope”, contact with the human collaborator is avoided.

Einav does not appear to explicitly teach all of the following, but Kuffner does teach:

a model of the machinery and its permitted movements, and simulate, via a simulation module, performance of at least a portion of the activity by the machinery, in accordance with the stored model (see at least “The one or more parameters of the one or more physical components of the robotic device that are involved in performing the physical action may include a maximum torque of the one or more physical components, a maximum power output of the one or more physical components, angles of joints of the one or more physical components, distances between two or more physical components, an effective mass of the one or more physical components (i.e., how much and a momentum of the one or more physical components, a range of motion of the one or more physical components, among other possibilities.” in par. 0051 and “Within examples, the computing device may employ one or more types of model predictive control (e.g., receding horizon control), or other types of process control to determine the one or more estimated trajectories and to perform other operations of the method 300. For instance, given a current state of the robotic device and its physical components, such as current accelerations, power outputs, momentums, etc. of one or more of the physical components, the computing device may determine a plurality of states that the robotic device and its components may be in within a certain period of time.” in par. 0053 and “Next, the computing device may compare the estimated trajectory of the moving object with the one or more estimated trajectories of the physical components of the robotic device determined at block 304. For example, the computing device may determine whether one or more estimated trajectories of the moving object will intersect or otherwise interfere with the one or more estimated trajectories of the physical components of the robotic device, and perhaps limiting this determination to intersection that may occur within a predetermined time period, in some embodiments.” in par. 0085);

a mapping module and processor configured to: map, via the mapping module, a first 3D region of the workspace corresponding to space occupied by the machinery within the workspace augmented by a real-time dynamic 3D envelope around the machinery spanning movements simulated by the simulation module (see at least “Within examples, a new virtual representation may be determined each time a new instruction or sub-instruction is received or executed by the computing device. Thus, the virtual representation may be an animated representation that changes in real-time as a new instruction is received or a modification to the estimated trajectory is determined.” in par. 0066 and "Within still further examples, the indication may take the form of a 3D model of the virtual representation. The 3D model may depict one or more 3D paths representative of the determined one or more estimated trajectories. Further, the 3D model may include an open quadric surface, a closed quadric surface, convex hull, isosurface, or other complex or non-complex surface/volume." in par. 0073 and "The computing device may then determine whether the one or more estimated volumes of space of the moving object overlap with one or more volumes of space that the robotic device may occupy within the predetermined period of time." in par.
0085).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system taught by Einav to incorporate the teachings of Kuffner, wherein a model of the physical components of the robot is used to sweep out the future volume the robot will occupy over time and compare it to the volume a nearby person occupies over time to avoid collisions, in order to arrive at using dynamic 3D envelopes for both the robot and the human collaborator taught by Einav. The motivation to incorporate the teachings of Kuffner would be to reduce risk of collision (see par. 0066), which improves safety.

Regarding Claim 18, Einav as modified by Kuffner teaches the safety system of claim 17 (see Claim 17 analysis). Einav further teaches: wherein the simulation module is configured to dynamically simulate the first and second regions of the workspace based at least in part on current states associated with the machinery and the human (see at least "In some embodiments, the upcoming operation is at least sometimes at least somewhat indeterminate, but the system optionally still plans and execute motions as though the next operation will be, for example, the most frequently performed (or otherwise predictively preferred) next operation within the current task context." in par. 0162 and “Kinematic envelope 904, in some embodiments, provides a safety envelope which uses recent position tracking of body members of the human operator 150 to predict where those body members could and/or likely will be during a robotic motion. In some embodiments, the prediction is based on a motion model of the human operator 150, optionally including calculation of potential changes in acceleration and velocity at the different joints of the human operator's 150 body members. In some embodiments, the prediction is observation-based, e.g., finding past-observed situations which have similarity to a human operator's 150 current motions, and predicting where the motion is likely to continue to, based on what happened in those past-observed situations. There is optionally interaction, in some embodiments, between a purely kinematic envelope 904 and a task prediction envelope 902: for example, a task prediction envelope 902 is refined in real time (during movements of robot and/or operator) based on kinematics; and/or the current task scenario (current operation, for example) is used to select which kinematic envelope 904 is most relevant to current movements.” in par. 0165; note the combination of Einav and Kuffner to arrive at the first 3D region applies here as well but is covered in the independent claim 17 analysis and not repeated with each instance of the first 3D region), wherein the current states comprise at least one of current positions, current orientations, expected positions associated with a next action in the activity, expected orientations associated with the next action in the activity, velocities, accelerations, geometries and/or kinematics (see at least “Kinematic envelope 904, in some embodiments, provides a safety envelope which uses recent position tracking of body members of the human operator 150 to predict where those body members could and/or likely will be during a robotic motion. In some embodiments, the prediction is based on a motion model of the human operator 150, optionally including calculation of potential changes in acceleration and velocity at the different joints of the human operator's 150 body members. In some embodiments, the prediction is observation-based, e.g., finding past-observed situations which have similarity to a human operator's 150 current motions, and predicting where the motion is likely to continue to, based on what happened in those past-observed situations. There is optionally interaction, in some embodiments, between a purely kinematic envelope 904 and a task prediction envelope 902: for example, a task prediction envelope 902 is refined in real time (during movements of robot and/or operator) based on kinematics; and/or the current task scenario (current operation, for example) is used to select which kinematic envelope 904 is most relevant to current movements.” in par. 0165 and “For example, at point 1101, kinematic predictions by conflict predictor 932 show that continuation of robotic arm 120 along path 1102 is expected to intrude (and/or it cannot be sufficiently ruled out that path 1102 will not intrude) into the predicted kinematic envelope 1108 at some future time. Optionally, movement planner 920 diverts the motion of robotic arm 920 onto a new path 1106.” in par. 0212).

Regarding Claim 19, Einav as modified by Kuffner teaches the safety system of claim 17 (see Claim 17 analysis). Einav further teaches: wherein the first region is confined to a spatial region reachable by the machinery only during performance of the activity (see at least “In some embodiments, the upcoming operation is known to the system, for example, because it is the next operation in a predefined sequence of operations. In some embodiments, the next operation is indicated to the system by the human operator 150, for example by gestures and/or spoken commands. In some embodiments, the human operator indication selects from among a restricted number of possible options defined by a process flow of the task. In some embodiments, the upcoming operation is at least sometimes at least somewhat indeterminate, but the system optionally still plans and execute motions as though the next operation will be, for example, the most frequently performed (or otherwise predictively preferred) next operation within the current task context.” in par. 0162 and “For example, at point 1101, kinematic predictions by conflict predictor 932 show that continuation of robotic arm 120 along path 1102 is expected to intrude (and/or it cannot be sufficiently ruled out that path 1102 will not intrude) into the predicted kinematic envelope 1108 at some future time. Optionally, movement planner 920 diverts the motion of robotic arm 920 onto a new path 1106.” in par. 0212; note the combination of Einav and Kuffner to arrive at the first 3D region applies here as well but is covered in the independent claim 17 analysis and not repeated with each instance of the first 3D region).

Regarding Claim 20, Einav as modified by Kuffner teaches the safety system of claim 17 (see Claim 17 analysis). Einav does not appear to explicitly teach all of the following, but Kuffner does teach: wherein the first 3D region includes a global spatial region reachable by the machinery during performance of any activity (see at least “To mitigate the risk of injury to humans, robotic devices of either type may operate in environments that may be isolated from humans. In particular, such an isolated environment may include a volume of reachable space by the robotic device, where the volume is enclosed by what may be referred to herein as a “static safety cage.”” in par. 0019). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system taught by Einav as modified by Kuffner to incorporate the teachings of Kuffner wherein the envelope representing the robot encompasses a static volume of any space reachable by the robot. The motivation to incorporate the teachings of Kuffner would be to totally mitigate the risk of injury to humans around the robot (see par. 0019).

Regarding Claim 21, Einav as modified by Kuffner teaches the safety system of claim 17 (see Claim 17 analysis).
Einav further teaches: wherein the workspace is computationally represented as a plurality of voxels (see at least “Optionally, zones are defined as basic geometrical shapes or parts thereof, for example, cylinders, ellipsoids, spheres, cones, pyramids, and/or cubes.” in par. 0106).

Regarding Claim 22, Einav as modified by Kuffner teaches The safety system of claim 17 (see Claim 17 analysis). Einav further teaches further comprising a computer vision system (see at least "In some embodiments, imaging devices 110 (cameras) are operable to optically monitor working areas of the task cell 100. In some embodiments, imaging devices 110 image markers indicating positions and/or movements of body members (for example, hands, arms and/or head) of human operator 150. In some embodiments, monitored operator body member positions and/or movements are used in the definition of safety envelopes, for example, to guide motion planning for robots 120, 122. In some embodiments, control unit 160 performs analysis of images from imaging devices 110 and/or plans and/or controls the execution of movements of robots 120, 122." in par. 0138) that itself comprises: a plurality of sensors distributed about the workspace, each of the sensors being associated with a grid of pixels for recording images of a portion of the workspace within a sensor field of view (see at least "In some embodiments, imaging devices 110 (cameras) are operable to optically monitor working areas of the task cell 100. In some embodiments, imaging devices 110 image markers indicating positions and/or movements of body members (for example, hands, arms and/or head) of human operator 150." in par. 0138), and an object-recognition module, stored in the computer memory and configured for effecting, with the processor, recognizing the human (see at least "in some embodiments, monitored operator body member positions and/or movements are used in the definition of safety envelopes, for example, to guide motion planning for robots 120, 122." in par. 0138) and the machinery and movements thereof (see at least “In some embodiments, proximity is detected optically (for example, using the imaging devices 110).” in par. 0166). Einav does not appear to explicitly teach all of the following, but Kuffner does teach: the images including depth information (see at least “In the following description, the terms “sensor,” “camera,” or “optical sensor” may be used interchangeably and may refer to device or devices (mono or stereo arrangements) configured to perform 3D image sensing, 3D depth sensing” in par. 0017). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system taught by Einav as modified by Kuffner to incorporate the teachings of Kuffner wherein 3D depth sensors are used to scan and model the environment. The motivation to incorporate the teachings of Kuffner would be to avoid collisions and make motion planning easier (see par. 0029).

Regarding Claim 23, Einav as modified by Kuffner teaches The safety system of claim 22 (see Claim 22 analysis). Einav further teaches: wherein the workspace portions collectively cover the entire workspace (see at least “In some embodiments, imaging devices 110 (cameras) are operable to optically monitor working areas of the task cell 100.” in par. 0138 and Fig. 1A).
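For readers tracking the technical substance, the voxel workspace representation and depth sensing mapped to Claims 21-22 can be sketched in a few lines. This is an illustrative sketch only: the `VoxelWorkspace` class, the grid dimensions and resolution, and the point format are assumptions of this note, not anything disclosed by Einav or Kuffner.

```python
import numpy as np

class VoxelWorkspace:
    """Workspace modeled as a uniform voxel grid; cells become occupied
    when any 3D point from a depth sensor falls inside them."""

    def __init__(self, dims=(2.0, 2.0, 2.0), resolution=0.05):
        self.resolution = resolution
        self.shape = tuple(int(d / resolution) for d in dims)
        self.occupied = np.zeros(self.shape, dtype=bool)

    def mark_points(self, points_xyz):
        """Mark voxels containing any sensed 3D point as occupied."""
        idx = np.floor(np.asarray(points_xyz) / self.resolution).astype(int)
        # Discard points that fall outside the modeled workspace volume.
        in_bounds = np.all((idx >= 0) & (idx < np.array(self.shape)), axis=1)
        ix, iy, iz = idx[in_bounds].T
        self.occupied[ix, iy, iz] = True

ws = VoxelWorkspace()
ws.mark_points([[0.10, 0.10, 0.10], [1.00, 1.00, 0.50]])
print(int(ws.occupied.sum()))  # 2 (two distinct voxels occupied)
```

Overlap between a first region (machinery) and a second region (human) then reduces to a boolean AND over two such occupancy grids, which is one reason voxel models are attractive for this kind of monitoring.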
Regarding Claim 24, Einav as modified by Kuffner teaches the safety system of claim 17 (see Claim 17 analysis). Einav does not appear to explicitly teach all of the following, but Kuffner does teach: wherein the first 3D region is divided into a plurality of nested, spatially distinct 3D subzones (see at least “Conversely, as predicted future motions of humans or other moving objects begin to overlap with robot, or when humans or other moving objects generally approach the robotic device, the computing device may instruct the robotic device to slow down and may begin to gradually shrink the virtual safety cage. In some examples, if a human or other moving object is detected to be within a predetermined and calibratable threshold distance, the robotic device may stop all its motion completely and the virtual safety cage may be set to zero (i.e., disappear). Other examples are possible as well.” in par. 0090). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system taught by Einav as modified by Kuffner to incorporate the teachings of Kuffner wherein human overlap with a larger, outer part of the robot’s virtual safety cage only triggers the robot to slow down, while overlap within a smaller, inner safety cage at a threshold distance causes an immediate stop. The motivation to incorporate the teachings of Kuffner would be to improve safety while maintaining efficiency under normal circumstances (see par. 0091).

Regarding Claim 25, Einav as modified by Kuffner teaches The safety system of claim 24 (see Claim 24 analysis). Einav does not appear to explicitly teach all of the following, but Kuffner does teach: wherein overlap between the second 3D region and each of the subzones results in a different degree of alteration of the operation of the machinery (see at least “Conversely, as predicted future motions of humans or other moving objects begin to overlap with robot, or when humans or other moving objects generally approach the robotic device, the computing device may instruct the robotic device to slow down and may begin to gradually shrink the virtual safety cage. In some examples, if a human or other moving object is detected to be within a predetermined and calibratable threshold distance, the robotic device may stop all its motion completely and the virtual safety cage may be set to zero (i.e., disappear). Other examples are possible as well.” in par. 0090). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system taught by Einav as modified by Kuffner to incorporate the teachings of Kuffner wherein human overlap with a larger, outer part of the robot’s virtual safety cage only triggers the robot to slow down, while overlap within a smaller, inner safety cage at a threshold distance causes an immediate stop. The motivation to incorporate the teachings of Kuffner would be to improve safety while maintaining efficiency under normal circumstances (see par. 0091).

Regarding Claim 26, Einav as modified by Kuffner teaches the safety system of claim 17 (see Claim 17 analysis).
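The graded behavior Kuffner describes in par. 0090 (slow on outer-zone overlap, stop within an inner threshold distance) can be sketched as a simple policy. The zone radii, the function name, and the use of a scalar distance are assumptions for illustration, not values from the reference.

```python
def select_response(distance_to_human, inner_radius=0.5, outer_radius=1.5):
    """Map proximity of a human to a graded machine response:
    nested subzones, each with a different degree of alteration."""
    if distance_to_human <= inner_radius:
        return "stop"        # immediate stop; safety cage shrinks to zero
    if distance_to_human <= outer_radius:
        return "slow"        # reduce speed, gradually shrink the cage
    return "full_speed"      # no overlap with any subzone

print(select_response(2.0))  # full_speed
print(select_response(1.0))  # slow
print(select_response(0.3))  # stop
```

In practice the "distance" would be computed between the second 3D region (the human envelope) and each nested subzone of the first region, but the graded dispatch itself is this simple.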
Einav does not appear to explicitly teach all of the following, but Kuffner does teach: wherein the processor is further configured to recognize a workpiece being handled by the machinery and treat the workpiece as a portion thereof in identifying the first 3D region (see at least “For instance, the computing device may determine a traced-out volume in space using a point on a surface of the physical object that is farthest from a reference point on the robotic device, the traced-out volume being representative of space that the robotic device and physical object will occupy during at least a portion of the time that will elapse as the robotic device picks up and moves the physical object.” in par. 0064). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system taught by Einav as modified by Kuffner to incorporate the teachings of Kuffner wherein the volume occupied by an object grasped by the robot is added to the swept volume of the robot. The motivation to incorporate the teachings of Kuffner would be to avoid collisions and make motion planning easier (see par. 0029).

Regarding Claim 28, Einav as modified by Kuffner teaches the safety system of claim 17 (see Claim 17 analysis). Einav further teaches: wherein the processor is configured to dynamically control operation of the machinery so that it may be brought to a safe state without contacting a human in proximity thereto (see at least “In some embodiments, movement planner 920 uses envelopes 1108, 1110 to adjust robotic movements (and/or other robotic actions) to avoid (e.g. for safety) and or seek (e.g., for collaborative actions) the positions of body members of human operator 150, producing a new or adjusted movement plan 921.” in par. 0211).
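Kuffner's traced-out volume (par. 0064) extends the robot's envelope by the workpiece point farthest from a reference point on the robot. A minimal sketch, reducing the geometry to point distances: the function name, the point format, and the choice of the grasp point as the reference are assumptions of this note.

```python
import math

def effective_reach(robot_reach, workpiece_points, grasp_point):
    """Reach of robot-plus-workpiece: extend the robot's envelope by the
    workpiece surface point farthest from the reference (grasp) point."""
    extra = max(math.dist(grasp_point, p) for p in workpiece_points)
    return robot_reach + extra

reach = effective_reach(
    robot_reach=1.0,
    workpiece_points=[(0.5, 0.0, 0.0), (0.3, 0.0, 0.0), (0.0, 0.4, 0.0)],
    grasp_point=(0.0, 0.0, 0.0),
)
print(reach)  # 1.5
```

The same idea applies per-voxel in a full implementation: every voxel the workpiece occupies is simply unioned into the machinery's first 3D region while the grasp persists.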
Regarding Claim 29, Einav as modified by Kuffner teaches the safety system of claim 17 (see Claim 17 analysis), wherein the processor is further configured to: Einav further teaches: acquire scanning data of the machinery and the human during performance of the task (see at least “In some embodiments, imaging devices 110 (cameras) are operable to optically monitor working areas of the task cell 100.” in par. 0138); and update the first and second 3D regions based at least in part on the scanning data of the machinery and the human operator, respectively (see at least “In some embodiments, proximity is detected optically (for example, using the imaging devices 110).” in par. 0166).

Regarding Claim 33, Einav as modified by Kuffner also teaches (references to Einav): A method (see at least "a method of controlling a robot in a collaborative workspace" in par. 0025) for implementing the safety system of Claim 17 (see Claim 17 analysis for rejection of the system).

Regarding Claim 34, Einav as modified by Kuffner also teaches: A method for implementing the safety system of Claim 18 (see Claim 18 analysis for rejection of the system).

Regarding Claim 35, Einav as modified by Kuffner also teaches: A method for implementing the safety system of Claim 19 (see Claim 19 analysis for rejection of the system).

Regarding Claim 36, Einav as modified by Kuffner also teaches: A method for implementing the safety system of Claim 20 (see Claim 20 analysis for rejection of the system).

Regarding Claim 37, Einav as modified by Kuffner also teaches: A method for implementing the safety system of Claim 21 (see Claim 21 analysis for rejection of the system).

Regarding Claim 38, Einav as modified by Kuffner also teaches: A method for implementing the safety system of Claim 22 (see Claim 22 analysis for rejection of the system).

Regarding Claim 39, Einav as modified by Kuffner also teaches: A method for implementing the safety system of Claim 23 (see Claim 23 analysis for rejection of the system).

Regarding Claim 40, Einav as modified by Kuffner also teaches: A method for implementing the safety system of Claim 24 (see Claim 24 analysis for rejection of the system).

Regarding Claim 41, Einav as modified by Kuffner also teaches: A method for implementing the safety system of Claim 25 (see Claim 25 analysis for rejection of the system).

Regarding Claim 42, Einav as modified by Kuffner also teaches: A method for implementing the safety system of Claim 26 (see Claim 26 analysis for rejection of the system).

Regarding Claim 44, Einav as modified by Kuffner also teaches: A method for implementing the safety system of Claim 28 (see Claim 28 analysis for rejection of the system).

Regarding Claim 45, Einav as modified by Kuffner also teaches: A method for implementing the safety system of Claim 29 (see Claim 29 analysis for rejection of the system).

Claim(s) 27, 43 is/are rejected under 35 U.S.C. 103 as being unpatentable over Einav et al (US 20190105779 A1, hereinafter Einav) in view of Kuffner et al (US 20160016315, hereinafter Kuffner) and Kikkeri et al (US 20160354927, hereinafter Kikkeri).

Regarding Claim 27, Einav as modified by Kuffner teaches The safety system of claim 17 (see Claim 17 analysis). Einav and Kuffner do not appear to explicitly teach all of the following, but Kikkeri does teach: wherein the processor is further configured to recognize a workpiece being handled by the human and treat the workpiece as a portion of the human in identifying the second 3D region (see at least “Using any of these systems, when an object 112, such as a person, enters the detection space, the depth sensing camera 104 determines that a group of voxels 114 having a proximate location to each other is within a collected frame. The group of voxels 114 is identified as a connected object. The grouping of the voxels to form the connected object may be controlled by a previously determined error range. Thus, the grouping may proceed even if the voxels detected by the depth sensing camera 104 are not filling all of the space of the connected object. The grouping is repeated for the next frame, forming a connected object within that frame. The location of the connected object in the next frame is compared to the location of the connected object in the previous frame to determine motion vectors for the voxels. Voxels with inconsistent motion vectors may be eliminated, and the remaining group of voxels 114 may be identified as a moving connected object (MCO).” in par. 0028 and Fig. 1). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system taught by Einav as modified by Kuffner to incorporate the teachings of Kikkeri wherein voxels occupied by an object being carried by a human are grouped as one moving connected object (MCO) as long as their motion vectors match. The motivation to incorporate the teachings of Kikkeri would be to reduce the complexity of the collision avoidance processing (see par. 0035), which makes collision avoidance computation more efficient.

Regarding Claim 43, Einav as modified by Kuffner and Kikkeri also teaches: A method for implementing the safety system of Claim 27 (see Claim 27 analysis for rejection of the system).

Claim(s) 30-31, 46-47 is/are rejected under 35 U.S.C. 103 as being unpatentable over Einav et al (US 20190105779 A1, hereinafter Einav) in view of Kuffner et al (US 20160016315, hereinafter Kuffner) and Nihei et al (US 20090091286, hereinafter Nihei).

Regarding Claim 30, Einav as modified by Kuffner teaches The safety system of claim 17 (see Claim 17 analysis).
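Kikkeri's moving-connected-object grouping (par. 0028) keeps voxels whose frame-to-frame motion vectors agree and eliminates the rest. A minimal sketch of that filtering step; the tolerance value and the use of the median vector as the group reference are assumptions for illustration, not parameters from the reference.

```python
import numpy as np

def filter_mco(motion_vectors, tolerance=0.1):
    """Return a mask of voxels whose motion is consistent with the group:
    voxels deviating from the median motion vector are eliminated."""
    v = np.asarray(motion_vectors, dtype=float)
    median = np.median(v, axis=0)
    deviation = np.linalg.norm(v - median, axis=1)
    return deviation <= tolerance

# Three voxels (human plus carried workpiece) move together;
# one stray reading does not and is dropped from the MCO.
mask = filter_mco([[0.10, 0.00], [0.11, 0.00], [0.09, 0.01], [0.80, 0.50]])
print(mask)  # [ True  True  True False]
```

This is why a carried workpiece naturally ends up inside the second 3D region: its voxels share the carrier's motion vector, so the filter groups them into the same MCO.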
Einav and Kuffner do not appear to explicitly teach all of the following, but Nihei does teach: wherein the processor is further configured to stop the machinery during physical performance of the activity if the machinery is determined to have deviated from operating outside the simulated 3D region (see at least “In the case where each shaft of the robot or a forward end portion of the tool operates deviating from a respectively set operation range, electric power supplied to a motor of the robot is shut off by this operating range monitoring function and the robot is stopped. As a result, each shaft or the working tool of the robot is prevented from colliding with peripheral devices.” in par. 0005 and Fig. 6). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system taught by Einav as modified by Kuffner to incorporate the teachings of Nihei wherein the robot is stopped before deviating from a simulated workspace free of peripheral devices. The motivation to incorporate the teachings of Nihei would be to avoid collisions with peripheral devices (see par. 0014), which improves safety.

Regarding Claim 31, Einav as modified by Kuffner teaches the safety system of claim 17 (see Claim 17 analysis). Einav and Kuffner do not appear to explicitly teach all of the following, but Nihei does teach: wherein the processor is further configured to preemptively stop the machinery during physical performance of the activity based on predicted operation of the machinery before a potential deviation event such that inertia does not cause the machine to deviate outside of the simulated 3D region (see at least “FIG. 5 is a view showing a map of the inertial running distance. As shown in FIG. 5, the inertial running distance h is previously determined in the form of a map as a function of the moving speed V of the robot 20 and the weight W of the working tool 21. The weight W of the working tool 21 is decided according to the content of working to be executed by the robot 20. The moving speed V of the robot 20 is calculated according to the position of the robot 20 periodically detected by the position detector 26. The inertial running distance h is found from the map shown in FIG. 5” in par. 0037 and “Then, in step 103, the arriving range X1, to which the inertial running distance h is added, is calculated. Specifically, the arriving range X1 is calculated so that it can be circumscribed with all of the plurality of spheres 62 shown in FIG. 4c and displayed on LCD 41. As shown in the drawing, the calculated arriving range X1 is slightly larger than the operating range X0. The three-dimension lattice 61 and the spheres 62 are arranged and the arriving range X1 is calculated as described above by the arriving range calculation means 36 of the robot control unit 30.” in par. 0039 and “The emergency stopping means 34 shown in FIG. 2 is started when each shaft of the robot 20 and the working tool 21 deviate from the operating range X0 at the time of the actual operation of the robot 20. Due to the foregoing, supply of electric power to the servo amplifier 35 is shut off. As a result, the servo motor 25 receives no electric power and the robot 20 is stopped. Therefore, the emergency stopping means 34 can prevent the working tool 21 etc. of the robot 20 from actually colliding with the peripheral devices 50.” in par. 0048). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system taught by Einav as modified by Kuffner to incorporate the teachings of Nihei wherein the robot is stopped preemptively, before a point where, even after shutting off the robot, the momentum of the robot and carried load could carry it outside the operating range of the robot.
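Nihei's scheme looks up the inertial running distance h from a map keyed by speed V and tool weight W, then triggers the stop while the coasting overrun still fits inside the operating range X0. A one-dimensional sketch of that check; the table values, the function name, and the scalar-position simplification are assumptions of this note, not figures from Nihei.

```python
# Map of inertial running distance h, keyed by (speed V in m/s,
# tool weight W in kg). Values are illustrative placeholders.
H_TABLE = {
    (0.5, 5.0): 0.02,
    (1.0, 5.0): 0.05,
    (1.0, 10.0): 0.08,
}

def must_stop(position, speed, weight, x0_limit):
    """Trigger the emergency stop early enough that coasting by the
    inertial running distance h still stays inside the range X0."""
    h = H_TABLE[(speed, weight)]
    return position + h >= x0_limit

print(must_stop(0.90, 1.0, 10.0, x0_limit=1.0))  # False: 0.90 + 0.08 < 1.0
print(must_stop(0.95, 1.0, 10.0, x0_limit=1.0))  # True: 0.95 + 0.08 >= 1.0
```

The arriving range X1 in Nihei is essentially X0 expanded by h, so checking `position + h >= x0_limit` is equivalent to checking whether the predicted arriving position leaves X0.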
The motivation to incorporate the teachings of Nihei would be to avoid collisions with peripheral devices (see par. 0014), which improves safety.

Regarding Claim 46, Einav as modified by Kuffner and Nihei also teaches: A method for implementing the safety system of Claim 30 (see Claim 30 analysis for rejection of the system).

Regarding Claim 47, Einav as modified by Kuffner and Nihei also teaches: A method for implementing the safety system of Claim 31 (see Claim 31 analysis for rejection of the system).

Allowable Subject Matter

Claims 32, 48 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter: the closest prior art comes from Einav and Kuffner, but the prior art does not appear to teach “wherein the processor is further configured to (i) identify one or more third 3D regions of the workspace within each of which a fixed object prevents a proximity between a human and the machinery from being less than the minimum separation distance, and (ii) during physical performance of the activity, not restrict operation of the machinery when one or more humans only occupy at least one said third region.” in combination with all of the other limitations in the independent claims. Note that the Double Patenting rejections above would also need to be overcome.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DYLAN M KATZ whose telephone number is (571) 272-2776. The examiner can normally be reached Mon-Thurs. 8:00-6:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Abby Lin, can be reached on (571) 270-3976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DYLAN M KATZ/
Examiner, Art Unit 3657

Prosecution Timeline

Feb 07, 2023: Application Filed
Nov 04, 2024: Non-Final Rejection — §103, §DP
Feb 07, 2025: Response Filed
Mar 10, 2025: Final Rejection — §103, §DP
Aug 15, 2025: Response after Non-Final Action
Aug 15, 2025: Notice of Allowance
Oct 15, 2025: Request for Continued Examination
Oct 22, 2025: Response after Non-Final Action
Oct 24, 2025: Non-Final Rejection — §103, §DP
Jan 28, 2026: Response Filed
Mar 06, 2026: Non-Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596378: Autonomous Control and Navigation of Unmanned Vehicles (granted Apr 07, 2026; 2y 5m to grant)
Patent 12594663: Robot System and Cart (granted Apr 07, 2026; 2y 5m to grant)
Patent 12589499: Mobile Construction Robot (granted Mar 31, 2026; 2y 5m to grant)
Patent 12589491: Methods, Systems, and Devices for Motion Control of at Least One Working Head (granted Mar 31, 2026; 2y 5m to grant)
Patent 12582491: Control of a Surgical Instrument Having Backlash, Friction, and Compliance Under External Load in a Surgical Robotic System (granted Mar 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 4-5
Grant Probability: 87%
Grant Probability with Interview: 99% (+20.8%)
Median Time to Grant: 2y 7m
PTA Risk: High
Based on 279 resolved cases by this examiner. Grant probability derived from career allow rate.
