Prosecution Insights
Last updated: April 19, 2026
Application No. 18/695,858

Methods of Handling Safety of Industrial Robot, Control Systems, and Robot System

Status: Final Rejection (§101, §102)
Filed: Mar 27, 2024
Examiner: EMMETT, MADISON B
Art Unit: 3658
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: ABB Schweiz AG
OA Round: 2 (Final)
Grant Probability: 79% (Favorable)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 2y 9m
Grant Probability with Interview: 90%

Examiner Intelligence

Career Allow Rate: 79% (125 granted / 158 resolved; +27.1% vs TC avg; above average)
Interview Lift: +11.4% (moderate; based on resolved cases with an interview)
Avg Prosecution: 2y 9m (typical timeline); 35 applications currently pending
Career History: 193 total applications across all art units
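The headline figures above follow from simple arithmetic on the raw counts shown with them. A minimal sketch (the dashboard's exact formulas are not published, so the function and variable names here are illustrative, not the tool's API):

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

# 125 granted out of 158 resolved cases -> the 79% career allow rate
rate = allow_rate(125, 158)
print(f"{rate:.0f}%")  # 79%

# "+27.1% vs TC avg" reads as an additive delta, implying a Tech
# Center average near 52% for this examiner's cohort
implied_tc_avg = rate - 27.1

# The 90% "with interview" estimate is consistent with the 79% base
# grant probability plus the +11.4% interview lift
with_interview = 79 + 11.4
print(f"{with_interview:.0f}%")  # 90%
```

The rounding explains why the card shows 79% while the raw ratio is 79.1%, and why 79% + 11.4% surfaces as 90% rather than 90.4%.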

Statute-Specific Performance

§101: 19.2% (-20.8% vs TC avg)
§103: 45.3% (+5.3% vs TC avg)
§102: 26.1% (-13.9% vs TC avg)
§112: 8.2% (-31.8% vs TC avg)

Tech Center averages are estimates. Based on career data from 158 resolved cases.
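The four "vs TC avg" deltas are mutually consistent: subtracting each delta from its rate recovers the same baseline, which suggests every statute is compared against a single 40% Tech Center average. This is an inference from the numbers in the table, not a documented behavior of the tool:

```python
# Rates and deltas copied from the table above.
rates  = {"101": 19.2, "103": 45.3, "102": 26.1, "112": 8.2}
deltas = {"101": -20.8, "103": 5.3, "102": -13.9, "112": -31.8}

# If delta = rate - TC average, the implied baseline per statute is:
implied = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(implied)  # every statute implies the same 40.0% baseline
```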

Office Action

Grounds of rejection: §101, §102
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

Pending: 1-17, 19-21
Cancelled: 18
Rejected under 35 U.S.C. 101: 1-12, 14-17, 20-21
Rejected under 35 U.S.C. 102: 1-17, 19-21

Response to Arguments

This Office action is in response to applicant’s arguments and amendments filed 11/10/2025, which respond to the USPTO Office Action mailed 08/12/2025. Applicant’s arguments and amendments have been considered, with the results that follow: THIS ACTION IS MADE FINAL.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: “Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.”

Claims 1-12, 14-17, and 20-21 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Claim 1 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
The claim recites: “A method of handling safety of an industrial robot, the method comprising: providing at least one virtual safety border defined in relation to the industrial robot, where each virtual safety border is associated with a condition to be fulfilled by the industrial robot; obtaining environment data of a physical environment of the industrial robot by means of an environment sensor; determining, by a control system, an obstacle position of an obstacle in the environment based on the environment data; and evaluating, by the control system, a border position of each virtual safety border with regard to the obstacle position.”

These limitations, as drafted, are simple processes that, under their broadest reasonable interpretation, cover performance in the mind but for the recitation of “an industrial robot; defined in relation to the industrial robot; obtaining environment data of a physical environment of the industrial robot by means of an environment sensor; by a control system; by the control system”. That is, other than reciting the limitations quoted above, nothing in the claim elements precludes the steps from being performed in the mind. For example, a human can, in their mind, perform a method of handling safety of an industrial robot including providing at least one virtual safety border, where each virtual safety border is associated with a condition to be fulfilled by the industrial robot, determining an obstacle position of an obstacle in the environment based on the environment data, and evaluating a border position of each virtual safety border with regard to the obstacle position.

This judicial exception is not integrated into a practical application. The claim recites the additional elements identified above.
The industrial robot, the border defined in relation to the industrial robot, the environment sensor, and the control system are recited at a high level of generality and merely link the use of the abstract idea to a particular technological environment (see MPEP 2106.05(h)). The step of obtaining environment data of a physical environment of the industrial robot is recited at a high level of generality and amounts to mere data gathering, manipulation, and transmission, which is a form of insignificant extra-solution activity (see MPEP 2106.05(g)). Accordingly, even in combination, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.

The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The additional elements of the industrial robot, the border defined in relation to the industrial robot, the environment sensor, and the control system are no more than mere generic linking of the abstract idea to a technological environment, which cannot provide an inventive concept. The additional step of obtaining environment data of a physical environment of the industrial robot is mere data gathering, manipulation, and transmission, and is a well-understood, routine, and conventional function (see MPEP 2106.05(d); Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93), and thus is no more than insignificant extra-solution activity (see MPEP 2106.05(g); OIP Techs., 788 F.3d at 1362-63, 115 USPQ2d at 1092-93). Thus, the limitations do not provide an inventive concept, and the claim contains ineligible subject matter.

Claim 5 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
The claim recites: “A method of handling safety of an industrial robot, the method comprising: obtaining environment data of a physical environment of the industrial robot by means of an environment sensor; determining, by a control system, an obstacle position of an obstacle in the environment based on the environment data; and defining, by the control system, at least one virtual safety border in relation to the industrial robot based on the obstacle position, the at least one virtual safety border being associated with a condition to be fulfilled by the industrial robot.”

These limitations, as drafted, are simple processes that, under their broadest reasonable interpretation, cover performance in the mind but for the recitation of “an industrial robot; obtaining environment data of a physical environment of the industrial robot by means of an environment sensor; by a control system; by the control system”. That is, other than reciting the limitations quoted above, nothing in the claim elements precludes the steps from being performed in the mind. For example, a human can, in their mind, perform a method of handling safety, including determining an obstacle position of an obstacle in the environment based on the environment data, and defining at least one virtual safety border in relation to the industrial robot based on the obstacle position, the at least one virtual safety border being associated with a condition to be fulfilled by the industrial robot.

This judicial exception is not integrated into a practical application. The claim recites the additional elements identified above. The industrial robot, the environment sensor, and the control system are recited at a high level of generality and merely link the use of the abstract idea to a particular technological environment (see MPEP 2106.05(h)).
The step of obtaining environment data of a physical environment of the industrial robot is recited at a high level of generality and amounts to mere data gathering, manipulation, and transmission, which is a form of insignificant extra-solution activity (see MPEP 2106.05(g)). Accordingly, even in combination, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.

The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The additional elements of the industrial robot, the environment sensor, and the control system are no more than mere generic linking of the abstract idea to a technological environment, which cannot provide an inventive concept. The additional step of obtaining environment data of a physical environment of the industrial robot is mere data gathering, manipulation, and transmission, and is a well-understood, routine, and conventional function (see MPEP 2106.05(d); Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93), and thus is no more than insignificant extra-solution activity (see MPEP 2106.05(g); OIP Techs., 788 F.3d at 1362-63, 115 USPQ2d at 1092-93). Thus, the limitations do not provide an inventive concept, and the claim contains ineligible subject matter.

Claim 8 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
The claim recites: “A control system for handling safety of an industrial robot, the control system comprising at least one data processing device and at least one memory having at least one computer program stored thereon, the at least one computer program having program code which, when executed by the at least one data processing device, causes the at least one data processing device to perform the steps of: providing at least one virtual safety border defined in relation to the industrial robot, where each virtual safety border is associated with a condition to be fulfilled by the industrial robot; obtaining environment data of a physical environment of the industrial robot from an environment sensor; determining an obstacle position of an obstacle in the environment based on the environment data; and evaluating a border position of each virtual safety border with regard to the obstacle position.”

These limitations, as drafted, are simple processes that, under their broadest reasonable interpretation, cover performance in the mind but for the recitation of “a control system for handling safety of an industrial robot, the control system comprising at least one data processing device and at least one memory having at least one computer program stored thereon, the at least one computer program having program code which, when executed by the at least one data processing device, causes the at least one data processing device to perform the steps of: defined in relation to the industrial robot; obtaining environment data of a physical environment of the industrial robot from an environment sensor”. That is, other than reciting the limitations quoted above, nothing in the claim elements precludes the steps from being performed in the mind.
For example, a human can, in their mind, provide at least one virtual safety border, where each virtual safety border is associated with a condition to be fulfilled by the industrial robot, determine an obstacle position of an obstacle in the environment based on the environment data, and evaluate a border position of each virtual safety border with regard to the obstacle position.

This judicial exception is not integrated into a practical application. The claim recites the additional elements identified above. The control system for handling safety of an industrial robot; the at least one data processing device and at least one memory having at least one computer program stored thereon, with program code that causes the at least one data processing device to perform steps; the border defined in relation to the industrial robot; and the environment sensor are recited at a high level of generality and merely link the use of the abstract idea to a particular technological environment (see MPEP 2106.05(h)). The step of obtaining environment data of a physical environment of the industrial robot is recited at a high level of generality and amounts to mere data gathering, manipulation, and transmission, which is a form of insignificant extra-solution activity (see MPEP 2106.05(g)). Accordingly, even in combination, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.

The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
The additional elements of the control system for handling safety of an industrial robot; the at least one data processing device and at least one memory having at least one computer program stored thereon, with program code that causes the at least one data processing device to perform steps; the border defined in relation to the industrial robot; and the environment sensor are no more than mere generic linking of the abstract idea to a technological environment, which cannot provide an inventive concept. The additional step of obtaining environment data of a physical environment of the industrial robot is mere data gathering, manipulation, and transmission, and is a well-understood, routine, and conventional function (see MPEP 2106.05(d); Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93), and thus is no more than insignificant extra-solution activity (see MPEP 2106.05(g); OIP Techs., 788 F.3d at 1362-63, 115 USPQ2d at 1092-93). Thus, the limitations do not provide an inventive concept, and the claim contains ineligible subject matter.

Claim 12 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
The claim recites: “A control system for handling safety of an industrial robot, the control system comprising at least one data processing device and at least one memory having at least one computer program stored thereon, the at least one computer program having program code which, when executed by the at least one data processing device, causes the at least one data processing device to perform the steps of: obtaining environment data of a physical environment of the industrial robot from an environment sensor; determining an obstacle position of an obstacle in the environment based on the environment data; and defining at least one virtual safety border in relation to the industrial robot based on the obstacle position, the at least one virtual safety border being associated with a condition to be fulfilled by the industrial robot.”

These limitations, as drafted, are simple processes that, under their broadest reasonable interpretation, cover performance in the mind but for the recitation of “a control system for handling safety of an industrial robot, the control system comprising at least one data processing device and at least one memory having at least one computer program stored thereon, the at least one computer program having program code which, when executed by the at least one data processing device, causes the at least one data processing device to perform the steps of: obtaining environment data of a physical environment of the industrial robot from an environment sensor; in relation to the industrial robot”. That is, other than reciting the limitations quoted above, nothing in the claim elements precludes the steps from being performed in the mind.
For example, a human can, in their mind, determine an obstacle position of an obstacle in the environment based on the environment data, and define at least one virtual safety border based on the obstacle position, the at least one virtual safety border being associated with a condition to be fulfilled by the industrial robot.

This judicial exception is not integrated into a practical application. The claim recites the additional elements identified above. The control system for handling safety of an industrial robot; the at least one data processing device and at least one memory having at least one computer program stored thereon, with program code that causes the at least one data processing device to perform the steps; the environment sensor; and the border in relation to the industrial robot are recited at a high level of generality and merely link the use of the abstract idea to a particular technological environment (see MPEP 2106.05(h)). The step of obtaining environment data of a physical environment of the industrial robot is recited at a high level of generality and amounts to mere data gathering, manipulation, and transmission, which is a form of insignificant extra-solution activity (see MPEP 2106.05(g)). Accordingly, even in combination, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.

The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
The additional elements of the control system for handling safety of an industrial robot; the at least one data processing device and at least one memory having at least one computer program stored thereon, with program code that causes the at least one data processing device to perform the steps; the environment sensor; and the border in relation to the industrial robot are no more than mere generic linking of the abstract idea to a technological environment, which cannot provide an inventive concept. The additional step of obtaining environment data of a physical environment of the industrial robot is mere data gathering, manipulation, and transmission, and is a well-understood, routine, and conventional function (see MPEP 2106.05(d); Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93), and thus is no more than insignificant extra-solution activity (see MPEP 2106.05(g); OIP Techs., 788 F.3d at 1362-63, 115 USPQ2d at 1092-93). Thus, the limitations do not provide an inventive concept, and the claim contains ineligible subject matter.

Claim 14 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
The claim recites: “A robot system comprising an industrial robot, an environment sensor and a control system for handling safety of the industrial robot, the control system having at least one data processing device and at least one memory having at least one computer program stored thereon, the at least one computer program having program code which, when executed by the at least one data processing device, causes the at least one data processing device to perform the steps of: providing at least one virtual safety border defined in relation to the industrial robot, where each virtual safety border is associated with a condition to be fulfilled by the industrial robot; obtaining environment data of a physical environment of the industrial robot from an environment sensor; determining an obstacle position of an obstacle in the environment based on the environment data; and evaluating a border position of each virtual safety border with regard to the obstacle position.”

These limitations, as drafted, are simple processes that, under their broadest reasonable interpretation, cover performance in the mind but for the recitation of “a robot system comprising an industrial robot, an environment sensor and a control system for handling safety of the industrial robot, the control system having at least one data processing device and at least one memory having at least one computer program stored thereon, the at least one computer program having program code which, when executed by the at least one data processing device, causes the at least one data processing device to perform the steps of: in relation to the industrial robot; obtaining environment data of a physical environment of the industrial robot from an environment sensor”. That is, other than reciting the limitations quoted above, nothing in the claim elements precludes the steps from being performed in the mind.
For example, a human can, in their mind, provide at least one virtual safety border, where each virtual safety border is associated with a condition to be fulfilled by the industrial robot, determine an obstacle position of an obstacle in the environment based on the environment data, and evaluate a border position of each virtual safety border with regard to the obstacle position.

This judicial exception is not integrated into a practical application. The claim recites the additional elements identified above. The robot system; the industrial robot; the environment sensor; the control system for handling safety of the industrial robot; the at least one data processing device and at least one memory having at least one computer program stored thereon, with program code that causes the at least one data processing device to perform the steps; and the border in relation to the industrial robot are recited at a high level of generality and merely link the use of the abstract idea to a particular technological environment (see MPEP 2106.05(h)). The step of obtaining environment data of a physical environment of the industrial robot is recited at a high level of generality and amounts to mere data gathering, manipulation, and transmission, which is a form of insignificant extra-solution activity (see MPEP 2106.05(g)). Accordingly, even in combination, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.

The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
The additional elements of the robot system; the industrial robot; the environment sensor; the control system for handling safety of the industrial robot; the at least one data processing device and at least one memory having at least one computer program stored thereon, with program code that causes the at least one data processing device to perform the steps; and the border in relation to the industrial robot are no more than mere generic linking of the abstract idea to a technological environment, which cannot provide an inventive concept. The additional step of obtaining environment data of a physical environment of the industrial robot is mere data gathering, manipulation, and transmission, and is a well-understood, routine, and conventional function (see MPEP 2106.05(d); Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93), and thus is no more than insignificant extra-solution activity (see MPEP 2106.05(g); OIP Techs., 788 F.3d at 1362-63, 115 USPQ2d at 1092-93). Thus, the limitations do not provide an inventive concept, and the claim contains ineligible subject matter.

Claims 2-4, 9-11, and 16 recite limitations that are no more than the abstract idea recited in claims 1 and 8. These claims recite evaluating, determining, defining a protective stop, and defining a distance, steps which can reasonably be performed in the human mind. Thus, these claims contain ineligible subject matter.

Claims 6, 15, 17, and 21 recite limitations that are no more than the abstract idea recited in claims 1, 8, and 14. These claims recite the environment sensor carried by the industrial robot and the obstacle being stationary in the environment at a high level of generality to generically link the use of the abstract idea to a particular technological environment.
Thus, these claims contain ineligible subject matter.

Claim 7 recites limitations that are no more than the abstract idea recited in claim 1. The claim recites moving the industrial robot in the environment to obtain the environment data by means of the environment sensor, which is mere data gathering, manipulation, and transmission, a well-understood, routine, and conventional function, and thus no more than insignificant extra-solution activity. See MPEP 2106.05(g). Thus, the claim contains ineligible subject matter.

Claim 20 recites limitations that are no more than the abstract idea recited in claim 1. The claim recites automatically verifying, based on the evaluation, the border position, a step which can reasonably be performed in the human mind. The claim recites “by the control system” at a high level of generality to generically link the use of the abstract idea to a particular technological environment. Thus, the claim contains ineligible subject matter.

Claims 13 and 19 recite limitations that incorporate the abstract idea into a practical application. Thus, these claims contain eligible subject matter.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-17 and 19-21 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Cole et al. (US 2019/0262993 A1, “Cole”).

Regarding claim 1: Cole teaches: A method of handling safety of an industrial robot, the method comprising (see at least [0003], methods for determining dynamic safety zones for robots): providing at least one virtual safety border defined in relation to the industrial robot, where each virtual safety border is associated with a condition to be fulfilled by the industrial robot (see at least [0019], FIGS. 3 and 4, safe zones; static safe zone; [0020], dynamic safe zones; [0022], FIG. 4, zone 78 volumetrically extended around component as needed; stop zone, slow zone, and normal monitored zone); obtaining environment data of a physical environment of the industrial robot by means of an environment sensor (see at least [0020], sensors, controller; [0028], sensor feeds to build model of safe motion areas and rules that update; [0012], position and rate gyros, position transducers, cameras, accelerometers, strain gauges; cameras, radar, light curtains); determining, by a control system, an obstacle position of an obstacle in the environment based on the environment data (see at least [0019], alter behavior of robot if people or property intrude upon safe zone; [0020], person can be near robot, and if person does not intrude into dynamic safe zone, robot can continue to operate; sensors monitor operation of robot and proximity of person to robot); and evaluating, by the control system, a border position of each virtual safety border with regard to the obstacle position (see at least [0019], FIGS. 3 and 4, safe zones; static safe zone; [0020], dynamic safe zones; person can be near robot, and if person does not intrude into dynamic safe zone, robot can continue to operate; sensors monitor operation of robot and proximity of person to robot; [0022], FIG. 4, zone 78 volumetrically extended around component; stop zone, slow zone, normal monitored zone; [0029], zone rules updated on the fly in response to external input and system monitoring).

Regarding claim 2: Cole further teaches: The method according to claim 1, wherein the evaluation of the border position comprises determining whether any obstacle is positioned between the industrial robot and the virtual safety border (see at least [0019], FIGS. 3 and 4; [0020]; [0022]; [0023], robot modifies motion pathway to accommodate impingement on safe zone requirement; [0052], degradation in robot speed as a person or foreign object intrudes into the space; reduce robot speed in direct proportion to how far the person or unwanted object is from the robot).

Regarding claim 3: Cole further teaches: The method according to claim 1, wherein the evaluation of the border position comprises determining a safety distance between the virtual safety border and the obstacle when the virtual safety border is positioned between the industrial robot and the obstacle (see at least [0019], FIGS. 3 and 4, safe zones; [0020]; [0022]; [0023]; [0052]).

Regarding claim 4: Cole further teaches: The method according to claim 3, wherein the at least one condition comprises a protective stop of the industrial robot, and wherein the evaluation of the border position further includes evaluating the safety distance in view of a braking distance of the industrial robot for the protective stop (see at least [0042]; [0052]).

Regarding claim 5: Cole teaches: A method of handling safety of an industrial robot, the method comprising (see at least [0003], methods for determining dynamic safety zones for robots): obtaining environment data of a physical environment of the industrial robot by means of an environment sensor (see at least [0020]; [0028]; [0012]); determining, by a control system, an obstacle position of an obstacle in the environment based on the environment data (see at least [0019]; [0020]); and defining, by the control system, at least one virtual safety border in relation to the industrial robot based on the obstacle position, the at least one virtual safety border being associated with a condition to be fulfilled by the industrial robot (see at least [0019], FIGS. 3 and 4, safe zones; [0020]; [0022]).
Regarding claim 6: Cole further teaches: The method according to claim 1, wherein the environment sensor is carried by the industrial robot (see at least [0020]; [0012]; FIG. 1).

Regarding claim 7: Cole further teaches: The method according to claim 6, further comprising moving the industrial robot in the environment to obtain the environment data by means of the environment sensor (see at least [0021]; [0023]-[0027]; [0056]; [0057]).

Regarding claim 8: Cole teaches: A control system for handling safety of an industrial robot, the control system comprising at least one data processing device and at least one memory having at least one computer program stored thereon, the at least one computer program having program code which, when executed by the at least one data processing device, causes the at least one data processing device to perform the steps of (see at least [0001]; [0013]; [0016]; [0017]): providing at least one virtual safety border defined in relation to the industrial robot, where each virtual safety border is associated with a condition to be fulfilled by the industrial robot (see at least [0019], FIGS. 3 and 4; [0020]; [0022]); obtaining environment data of a physical environment of the industrial robot from an environment sensor (see at least [0020]; [0028]; [0012]); determining an obstacle position of an obstacle in the environment based on the environment data (see at least [0019]; [0020]); and evaluating a border position of each virtual safety border with regard to the obstacle position (see at least [0019], FIGS. 3 and 4; [0020]; [0022]; [0029]).

Regarding claim 9: Cole further teaches: The control system according to claim 8, wherein the evaluation of the border position comprises determining whether any obstacle is positioned between the industrial robot and the virtual safety border (see at least [0019], FIGS. 3 and 4; [0020]; [0022]; [0023]; [0052]).
Regarding claim 10: Cole further teaches: The control system according to claim 8, wherein the evaluation of the border position comprises determining a safety distance between the virtual safety border and the obstacle when the virtual safety border is positioned between the industrial robot and the obstacle (see at least [0019] FIGS. 3 and 4; [0020]; [0022]; [0023]; [0052]). Regarding claim 11: Cole further teaches: The control system according to claim 10, wherein the at least one condition comprises a protective stop of the industrial robot, and wherein the evaluation of the border position further includes evaluating the safety distance in view of a braking distance of the industrial robot for the protective stop (see at least [0042]; [0052]). Regarding claim 12: Cole teaches: A control system for handling safety of an industrial robot, the control system comprising at least one data processing device and at least one memory having at least one computer program stored thereon, the at least one computer program having program code which, when executed by the at least one data processing device, causes the at least one data processing device to perform the steps of (see at least [0001]; [0013]; [0016]; [0017]): obtaining environment data of a physical environment of the industrial robot from an environment sensor (see at least [0020]; [0028]; [0012]); determining an obstacle position of an obstacle in the environment based on the environment data (see at least [0019]; [0020]); and defining at least one virtual safety border in relation to the industrial robot based on the obstacle position, the at least one virtual safety border being associated with a condition to be fulfilled by the industrial robot (see at least [0019] FIGS. 3 and 4; [0020]; [0022]). Regarding claim 13: Cole further teaches: The control system according to claim 8, wherein the environment sensor is carried by the industrial robot (see at least [0020]; [0012]; Fig. 
1), and wherein the at least one computer program comprises program code which, when executed by the at least one data processing device, causes the at least one data processing device to perform the step of commanding movement of the industrial robot in the environment to obtain the environment data by means of the environment sensor (see at least [0016]; [0021]; [0023]-[0027]; [0056]; [0057]). Regarding claim 14: Cole teaches: A robot system comprising an industrial robot, an environment sensor and a control system for handling safety of the industrial robot (see at least [0001]; [0020]; [0012]; [0013]), the control system having at least one data processing device and at least one memory having at least one computer program stored thereon, the at least one computer program having program code which, when executed by the at least one data processing device, causes the at least one data processing device to perform the steps of (see at least [0013]; [0016]; [0017]): providing at least one virtual safety border defined in relation to the industrial robot, where each virtual safety border is associated with a condition to be fulfilled by the industrial robot (see at least [0019] FIGS. 3 and 4; [0020]; [0022]); obtaining environment data of a physical environment of the industrial robot from an environment sensor (see at least [0020]; [0028]; [0012]), determining an obstacle position of an obstacle in the environment based on the environment data (see at least [0019]; [0020]); and evaluating a border position of each virtual safety border with regard to the obstacle position (see at least [0019] FIGS. 3 and 4; [0020]; [0022]; [0029]). Regarding claim 15: Cole further teaches: The robot system according to claim 14, wherein the environment sensor is carried by the industrial robot (see at least [0020]; [0012]; Fig. 1). 
Regarding claim 16: Cole further teaches: The method according to claim 2, wherein the evaluation of the border position comprises determining a safety distance between the virtual safety border and the obstacle when the virtual safety border is positioned between the industrial robot and the obstacle (see at least [0019] FIGS. 3 and 4; [0020]; [0022]; [0023]; [0052]). Regarding claim 17: Cole further teaches: The method according to claim 2, wherein the environment sensor is carried by the industrial robot (see at least [0020]; [0012]; Fig. 1). Regarding claim 19: Cole further teaches: The control system according to claim 9, wherein the environment sensor is carried by the industrial robot (see at least [0020]; [0012]; Fig. 1), and wherein the at least one computer program comprises program code which, when executed by the at least one data processing device, causes the at least one data processing device to perform the step of commanding movement of the industrial robot in the environment to obtain the environment data by means of the environment sensor (see at least [0016]; [0021]; [0023]-[0027]; [0056]; [0057]). Regarding claim 20: Cole further teaches: The method according to claim 1, further comprising automatically verifying, by the control system and based on the evaluation, the border position of each virtual safety border (see at least [0006] FIG. 3 depicts differences between static and dynamic safety zones; [0049] real-time updating of the dynamic safe zones; if a sensor is deemed to be unreliable, the use of that sensor to adjust the dynamic safe zone or determine intrusion into the dynamic safe zone can be adjusted; a sensor feed with low reliability can give a larger dynamic safe zone; the dynamic safe zones are adjusted as a function of sensor reliability characteristics; the controller can determine the sensor characteristic and adjust the dynamic zone accordingly; [0044] changes in dynamic safe zones can be made automatically by the controller). 
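For illustration only, the reliability-to-zone-size relationship the rejection cites from Cole [0049] (a less reliable sensor feed gives rise to a larger dynamic safe zone) could be sketched as follows. This is not Cole's disclosed implementation; the function name, inverse-proportional scaling rule, and numeric values are hypothetical assumptions introduced purely to make the cited relationship concrete.

```python
# Illustrative sketch (hypothetical, not Cole's implementation): adjust a
# dynamic safe zone radius as a function of sensor reliability, per the
# relationship paraphrased from Cole [0049].

def adjusted_safe_zone_radius(base_radius_m: float, sensor_reliability: float) -> float:
    """Return a safe-zone radius that grows as sensor reliability drops.

    sensor_reliability is a fraction in (0, 1]; a feed with low
    reliability gives rise to a larger dynamic safe zone.
    """
    if not 0.0 < sensor_reliability <= 1.0:
        raise ValueError("reliability must be in (0, 1]")
    # Inflate the zone in inverse proportion to reliability (assumed rule).
    return base_radius_m / sensor_reliability

# A fully reliable feed keeps the base radius; a 50%-reliable feed doubles it.
assert adjusted_safe_zone_radius(1.5, 1.0) == 1.5
assert adjusted_safe_zone_radius(1.5, 0.5) == 3.0
```

Under such a predetermined relationship, the controller could look up the sensor characteristic and enlarge or shrink the zone automatically, which is the behavior the rejection maps to the "automatically verifying" limitation of claim 20.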
Regarding claim 21: Cole further teaches: The method according to claim 20, wherein the obstacle is stationary in the environment (see at least [0053] If the robot routinely encroaches upon a concrete pillar in the workspace, which causes routine slowdown of the robot as it traverses that space, the motion profile can likewise be adjusted). Response to Arguments Applicant's arguments filed 11/10/2025 have been fully considered but they are not persuasive. Regarding the 35 U.S.C. 101 Rejections: Applicant's arguments are provided in the indented italicized sections. Examiner responses follow each argument in regular font sections. The rejection alleges, at page 4, that "other than ... the underlined and italicized limitations above, nothing in the claim elements preclude the steps from being performed in the mind" but this statement concedes that claim 1 does recite elements that do, in fact, preclude the steps being performed as a mental process. Examiner respectfully disagrees. The very first step of the subject matter eligibility test for products and processes states “Establish the Broadest Reasonable Interpretation of the Claim as a Whole.” Keeping this in mind is important for the entire claim analysis as performed by Examiner. The claim is a method, and therefore falls within a statutory category, and analysis cannot be streamlined as eligibility of the claim is not self-evident. Thus, we come to Step 2A of the subject matter eligibility flow chart, which asks “Is the claim directed to a law of nature, a natural phenomenon (product of nature), or an abstract idea?” and Prong One “Does the claim recite an abstract idea, law of nature, or natural phenomenon?” The answer to both is “Yes”. 
The claim recites: “A method of handling safety of an industrial robot, the method comprising: providing at least one virtual safety border defined in relation to the industrial robot, where each virtual safety border is associated with a condition to be fulfilled by the industrial robot; obtaining environment data of a physical environment of the industrial robot by means of an environment sensor; determining, by a control system, an obstacle position of an obstacle in the environment based on the environment data; and evaluating, by the control system, a border position of each virtual safety border with regard to the obstacle position.” Here, a method of handling safety can be, for example, a human thinking of how they can avoid a robot while the robot is working and moving. Here, providing at least one virtual safety border ... where each virtual safety border is associated with a condition to be fulfilled by the industrial robot can be, for example, a human thinking up and drawing, with a pen and paper, an outline of a safety border that goes around a robot that the human also draws on the piece of paper after thinking it up in their head. Here, determining ... an obstacle position of an obstacle in the environment based on the environment data can be, for example, a human reading data from a list on paper and determining that the robot is close to hitting an obstacle in its workspace. Here, evaluating ... a border position of each virtual safety border with regard to the obstacle position can be, for example, a human viewing their drawing of a border surrounding a robot and considering the object near the robot and thinking/evaluating that the safety border is either safe or not safe with respect to the obstacle position. As such, the claim recites an abstract idea. These limitations are those considered by Examiner to be the mental process. 
Then Examiner considers the additional elements that are identified as the underlined and italicized parts of the claim. These components are identified as additional elements, and therefore are not considered as being part of the mental process. The identification of these additional elements is not a concession that claim 1 recites elements that do preclude the steps from being performed as a mental process (as Applicant suggests), but is a way to show that the elements require further analysis according to step 2A Prong Two and step 2B. The rejection alleges further, at page 4, that "a human can, in their mind, perform a method of handling safety of an industrial robot including, providing at least one virtual safety border, where each virtual safety border is associated with a condition to be fulfilled by the industrial robot". However, this assertion ignores that the human mind has no way of causing a condition to be fulfilled by an industrial robot. That is, the industrial robot is unable to fulfill a condition associated with a virtual safety border provided only in a human mind. The claim states “providing at least one virtual safety border defined in relation to the industrial robot, where each virtual safety border is associated with a condition to be fulfilled by the industrial robot.” Nowhere in the claim is the robot actually fulfilling said condition. The condition is recited as “to be fulfilled” which is a form of a possible future state, not a current actual state. As such, a human can look at a robot, think and/or draw a buffer bubble around the robot, and think “I would like this border to encompass the robot,” and be executing the limitation entirely in their head according to the broadest reasonable interpretation of the claim. 
Applicant’s statement that “the industrial robot is unable to fulfill a condition associated with a virtual safety border provided only in a human mind” is of no issue, since the condition is only being defined in the claim and there is no execution step of the robot needing to actually execute the condition as described. The rejection also alleges that the human mind can "determin[e] an obstacle position of an obstacle in the environment based on the environment data". Such assertion ignores that the human mind has no way to interact with the environment sensor that delivers the environment data. As stated above, determining . . . an obstacle position of an obstacle in the environment based on the environment data can be, for example, a human reading data from a list on paper and determining that the robot is close to hitting an obstacle in its workspace. MPEP 2106.04(a)(2)(III) states: The courts do not distinguish between mental processes that are performed entirely in the human mind and mental processes that require a human to use a physical aid (e.g., pen and paper or a slide rule) to perform the claim limitation. See, e.g., Benson, 409 U.S. at 67, 65, 175 USPQ at 674-75, 674 (noting that the claimed "conversion of [binary-coded decimal] numerals to pure binary numerals can be done mentally," i.e., "as a person would do it by head and hand."); Synopsys, Inc. v. Mentor Graphics Corp., 839 F.3d 1138, 1139, 120 USPQ2d 1473, 1474 (Fed. Cir. 2016) (holding that claims to a mental process of "translating a functional description of a logic circuit into a hardware component description of the logic circuit" are directed to an abstract idea, because the claims "read on an individual performing the claimed steps mentally or with pencil and paper"). Mental processes performed by humans with the assistance of physical aids such as pens or paper are explained further below with respect to point B. 
Nor do the courts distinguish between claims that recite mental processes performed by humans and claims that recite mental processes performed on a computer. As the Federal Circuit has explained, "[c]ourts have examined claims that required the use of a computer and still found that the underlying, patent-ineligible invention could be performed via pen and paper or in a person’s mind." Versata Dev. Group v. SAP Am., Inc., 793 F.3d 1306, 1335, 115 USPQ2d 1681, 1702 (Fed. Cir. 2015). See also Intellectual Ventures I LLC v. Symantec Corp., 838 F.3d 1307, 1318, 120 USPQ2d 1353, 1360 (Fed. Cir. 2016) (‘‘[W]ith the exception of generic computer-implemented steps, there is nothing in the claims themselves that foreclose them from being performed by a human, mentally or with pen and paper.’’); Mortgage Grader, Inc. v. First Choice Loan Servs. Inc., 811 F.3d 1314, 1324, 117 USPQ2d 1693, 1699 (Fed. Cir. 2016) (holding that computer-implemented method for "anonymous loan shopping" was an abstract idea because it could be "performed by humans without a computer"). Mental processes recited in claims that require computers are explained further below with respect to point C. As such, a person can perform the recited limitation in their mind and/or with the aid of pen and paper. Further, the independent claims specify that a control system performs the determination of obstacle position based on the environment data. The claims recite structural and functional elements, including interactions of the control system with the environment sensor, and further specify operations performed by the control system for controlling the industrial robot to maintain safety in a given environment. The “structural and functional elements” as Applicant describes are identified in step 2A of the subject matter eligibility analysis. 
The claim includes an industrial robot, [border] defined in relation to the industrial robot, means of an environment sensor, a control system, and obtaining environment data of a physical environment of the industrial robot step as additional elements. Starting at step 2A Prong Two, the industrial robot, [border] defined in relation to the industrial robot, means of an environment sensor, and a control system is/are recited at a high level of generality and merely link(s) the use of the abstract idea to a particular technological environment. MPEP 2106.05(h) recites: Examples of limitations that the courts have described as merely indicating a field of use or technological environment in which to apply a judicial exception include: ... Identifying the participants in a process for hedging risk as commodity providers and commodity consumers, because limiting the use of the process to these participants did no more than describe how the abstract idea of hedging risk could be used in the commodities and energy markets, Bilski, 561 U.S. at 595, 95 USPQ2d at 1010; Limiting the use of the formula C = 2 (pi) r to determining the circumference of a wheel as opposed to other circular objects, because this limitation represents a mere token acquiescence to limiting the reach of the claim, Flook, 437 U.S. at 595, 198 USPQ at 199; Specifying that the abstract idea of monitoring audit log data relates to transactions or activities that are executed in a computer environment, because this requirement merely limits the claims to the computer field, i.e., to execution on a generic computer, FairWarning v. Iatric Sys., 839 F.3d 1089, 1094-95, 120 USPQ2d 1293, 1295 (Fed. Cir. 2016); ... 
Limiting the abstract idea of collecting information, analyzing it, and displaying certain results of the collection and analysis to data related to the electric power grid, because limiting application of the abstract idea to power-grid monitoring is simply an attempt to limit the use of the abstract idea to a particular technological environment, Electric Power Group, LLC v. Alstom S.A., 830 F.3d 1350, 1354, 119 USPQ2d 1739, 1742 (Fed. Cir. 2016); ... Language specifying that the abstract idea of budgeting was to be implemented using a "communication medium" that broadly included the Internet and telephone networks, because this limitation merely limited the use of the exception to a particular technological environment, Intellectual Ventures I v. Capital One Bank, 792 F.3d 1363, 1367, 115 USPQ2d 1636, 1640 (Fed. Cir. 2015); ... These examples include generically recited computer/robotic environments, collection and use of sensor data, and control systems. The obtaining environment data of a physical environment of the industrial robot step is/are recited at a high level of generality and amounts to mere data gathering, manipulation, and transmission, which is a form of insignificant extra-solution activity. MPEP 2106.05(g) recites: Below are examples of activities that the courts have found to be insignificant extra-solution activity: Mere Data Gathering: Performing clinical tests on individuals to obtain input for an equation, In re Grams, 888 F.2d 835, 839-40; 12 USPQ2d 1824, 1827-28 (Fed. Cir. 
1989); Testing a system for a response, the response being used to determine system malfunction, In re Meyer, 688 F.2d 789, 794; 215 USPQ 193, 196-97 (CCPA 1982); Presenting offers to potential customers and gathering statistics generated based on the testing about how potential customers responded to the offers; the statistics are then used to calculate an optimized price, OIP Technologies, 788 F.3d at 1363, 115 USPQ2d at 1092-93; Obtaining information about transactions using the Internet to verify credit card transactions, CyberSource v. Retail Decisions, Inc., 654 F.3d 1366, 1375, 99 USPQ2d 1690, 1694 (Fed. Cir. 2011); Consulting and updating an activity log, Ultramercial, 772 F.3d at 715, 112 USPQ2d at 1754; and Determining the level of a biomarker in blood, Mayo, 566 U.S. at 79, 101 USPQ2d at 1968. See also PerkinElmer, Inc. v. Intema Ltd., 496 Fed. App'x 65, 73, 105 USPQ2d 1960, 1966 (Fed. Cir. 2012) (assessing or measuring data derived from an ultrasound scan, to be used in a diagnosis). Selecting a particular data source or type of data to be manipulated: Limiting a database index to XML tags, Intellectual Ventures I LLC v. Erie Indem. Co., 850 F.3d at 1328-29, 121 USPQ2d at 1937; Taking food orders from only table-based customers or drive-through customers, Ameranth, 842 F.3d at 1241-43, 120 USPQ2d at 1854-55; Selecting information, based on types of information and availability of information in a power-grid environment, for collection, analysis and display, Electric Power Group, LLC v. Alstom S.A., 830 F.3d 1350, 1354-55, 119 USPQ2d 1739, 1742 (Fed. Cir. 2016); and Requiring a request from a user to view an advertisement and restricting public access, Ultramercial, 772 F.3d at 715-16, 112 USPQ2d at 1754. Insignificant application: Cutting hair after first determining the hair style, In re Brown, 645 Fed. App'x 1014, 1016-1017 (Fed. Cir. 
2016) (non-precedential); and Printing or downloading generated menus, Ameranth, 842 F.3d at 1241-42, 120 USPQ2d at 1854-55. These examples include the generally recited obtaining of environment data. Accordingly, even in combination, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. This ends the step 2A Prong Two analysis. The rejection alleges further that a human can, in their mind, evaluate a border position of each virtual safety border with regard to an obstacle position. However, this allegation ignores that the human mind cannot arrive at the obstacle position from the environment data delivered from the environment sensor. A human can easily evaluate more than one border position virtually in their mind. In the example of a robot, a human can see the base of the robot and the tool end of the robot and determine that the base is unmoving due to being bolted to the floor, while the tool end moves in a volume of space. As such, the human can evaluate that a box sitting next to the base of the robot, not moving, and not within reach of the tool end, would not make a border defined around the base unsafe. However, a dog running around the workspace area and nearly colliding with the robot tool end would likely be evaluated as being within a safety border of the robot. This would likely require reassessment of the safety border position by the human. The determination of an obstacle position is based on environment data. This can be as simple as reading a spreadsheet of data that happens to come from a sensor that says 1 or 0, where 1 means something is touching the robot and 0 means nothing is touching the robot. Based on this kind of reading, a human can determine an obstacle position (i.e., in contact with the robot, or not in contact with the robot). 
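The spreadsheet example above (a 1/0 reading where 1 means something is touching the robot) amounts to a trivial determination, which could be sketched as follows. This is an illustrative sketch with hypothetical data and a hypothetical function name, offered only to show how simple the claimed determination can be under the broadest reasonable interpretation.

```python
# Illustrative sketch (hypothetical): determining an obstacle "position"
# (in contact with the robot, or not) from a list of 1/0 sensor readings,
# exactly as a human could from a printed spreadsheet.

def obstacle_in_contact(readings: list[int]) -> bool:
    """A reading of 1 means something is touching the robot; 0 means nothing is."""
    return any(r == 1 for r in readings)

assert obstacle_in_contact([0, 0, 1, 0]) is True
assert obstacle_in_contact([0, 0, 0]) is False
```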
Moreover, the independent claims require that the control system evaluates the border position of each virtual safety border. Although the rejection alleges that an industrial robot, environment sensor, and control system are recited at a high level of generality, the claims further define the interaction of the environment sensor with the control system and thereby make the invention directed to a practical application. Claim 1, for example, defines a handling of a virtual safety border by a control system (e.g., "determining, by a control system, an obstacle position", "evaluating, by the control system, a border position of each virtual safety border"). The virtual safety border is in turn defined as being associated with a condition to be fulfilled by the industrial robot. This handling of at least one virtual safety border by the control system is not possible to perform in the human mind since a virtual safety border merely present in the human mind will never be associated with a condition for an industrial robot (since no industrial robot will fulfill a condition associated with a virtual safety border only provided in a human mind). The presence of a virtual safety border in the control system, as opposed to in the human mind, therefore highlights that the pending claims are not merely directed to an abstract idea and amounts to significantly more than the abstract idea. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Looking to step 2B, the additional elements of an industrial robot, [border] defined in relation to the industrial robot, means of an environment sensor, and a control system is/are no more than mere generic linking of the abstract idea to a technological environment, which cannot provide an inventive concept. 
MPEP 2106.05(I)(A) recites: Limitations that the courts have found not to be enough to qualify as "significantly more" when recited in a claim with a judicial exception include: Adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, e.g., a limitation indicating that a particular function such as creating and maintaining electronic records is performed by a computer, as discussed in Alice Corp., 573 U.S. at 225-26, 110 USPQ2d at 1984 (see MPEP § 2106.05(f)); Simply appending well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception, e.g., a claim to an abstract idea requiring no more than a generic computer to perform generic computer functions that are well-understood, routine and conventional activities previously known to the industry, as discussed in Alice Corp., 573 U.S. at 225, 110 USPQ2d at 1984 (see MPEP § 2106.05(d)); Adding insignificant extra-solution activity to the judicial exception, e.g., mere data gathering in conjunction with a law of nature or abstract idea such as a step of obtaining information about credit card transactions so that the information can be analyzed by an abstract mental process, as discussed in CyberSource v. Retail Decisions, Inc., 654 F.3d 1366, 1375, 99 USPQ2d 1690, 1694 (Fed. Cir. 2011) (see MPEP § 2106.05(g)); or Generally linking the use of the judicial exception to a particular technological environment or field of use, e.g., a claim describing how the abstract idea of hedging could be used in the commodities and energy markets, as discussed in Bilski v. Kappos, 561 U.S. 593, 595, 95 USPQ2d 1001, 1010 (2010) or a claim limiting the use of a mathematical formula to the petrochemical and oil-refining fields, as discussed in Parker v. Flook, 437 U.S. 584, 588-90, 198 USPQ 193, 197-98 (1978) (MPEP § 2106.05(h)). 
The additional step of obtaining environment data of a physical environment of the industrial robot step is/are mere data gathering, manipulation, and transmission, and is a well-understood, routine, and conventional function. MPEP 2106.05(d)(II) recites: The courts have recognized the following computer functions as well‐understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity. Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network); but see DDR Holdings, LLC v. Hotels.com, L.P., 773 F.3d 1245, 1258, 113 USPQ2d 1097, 1106 (Fed. Cir. 2014) ("Unlike the claims in Ultramercial, the claims at issue here specify how interactions with the Internet are manipulated to yield a desired result‐‐a result that overrides the routine and conventional sequence of events ordinarily triggered by the click of a hyperlink." (emphasis added)); Performing repetitive calculations, Flook, 437 U.S. at 594, 198 USPQ2d at 199 (recomputing or readjusting alarm limit values); Bancorp Services v. Sun Life, 687 F.3d 1266, 1278, 103 USPQ2d 1425, 1433 (Fed. Cir. 2012) ("The computer required by some of Bancorp’s claims is employed only for its most basic function, the performance of repetitive calculations, and as such does not impose meaningful limits on the scope of those claims."); Electronic recordkeeping, Alice Corp. 
Pty. Ltd. v. CLS Bank Int'l, 573 U.S. 208, 225, 110 USPQ2d 1984 (2014) (creating and maintaining "shadow accounts"); Ultramercial, 772 F.3d at 716, 112 USPQ2d at 1755 (updating an activity log); Storing and retrieving information in memory, Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93; Electronically scanning or extracting data from a physical document, Content Extraction and Transmission, LLC v. Wells Fargo Bank, 776 F.3d 1343, 1348, 113 USPQ2d 1354, 1358 (Fed. Cir. 2014) (optical character recognition); and A Web browser’s back and forward button functionality, Internet Patent Corp. v. Active Network, Inc., 790 F.3d 1343, 1348, 115 USPQ2d 1414, 1418 (Fed. Cir. 2015). ... Courts have held computer‐implemented processes not to be significantly more than an abstract idea (and thus ineligible) where the claim as a whole amounts to nothing more than generic computer functions merely used to implement an abstract idea, such as an idea that could be done by a human analog (i.e., by hand or by merely thinking). ... Below are examples of other types of activity that the courts have found to be well-understood, routine, conventional activity when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity: Recording a customer’s order, Apple, Inc. v. Ameranth, Inc., 842 F.3d 1229, 1244, 120 USPQ2d 1844, 1856 (Fed. Cir. 2016); Shuffling and dealing a standard deck of cards, In re Smith, 815 F.3d 816, 819, 118 USPQ2d 1245, 1247 (Fed. Cir. 2016); Restricting public access to media by requiring a consumer to view an advertisement, Ultramercial, Inc. v. Hulu, LLC, 772 F.3d 709, 716-17, 112 USPQ2d 1750, 1755-56 (Fed. Cir. 
2014); Presenting offers and gathering statistics, OIP Techs., 788 F.3d at 1362-63, 115 USPQ2d at 1092-93; Determining an estimated outcome and setting a price, OIP Techs., 788 F.3d at 1362-63, 115 USPQ2d at 1092-93; and Arranging a hierarchy of groups, sorting information, eliminating less restrictive pricing information and determining the price, Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1331, 115 USPQ2d 1681, 1699 (Fed. Cir. 2015). As such, the step of obtaining environment data of a physical environment of the industrial robot is a well-understood, routine, and conventional function and thus is no more than insignificant extra-solution activity (see MPEP 2106.05(g) and see OIP Techs., 788 F.3d at 1362-63, 115 USPQ2d at 1092-93). As a result, the limitations do not provide an inventive concept, and the claim contains ineligible subject matter. Regarding the 35 U.S.C. 102 Rejections: Applicant's arguments are provided in the indented italicized sections. Examiner responses follow each argument in regular font sections. Applicant respectfully asks the Examiner to reconsider the rejections in view of the below remarks. Cole discloses a robot 50 operating with a dynamic safe zone 78. Sensors 57, 58 can monitor operation of the robot 50 as well as the proximity of a person to the robot 50. At paragraph [0049], Cole teaches: “The technique described in item 2) a. will be understood to include the ability to assess the ability of sensors to provide information to the controller 55 useful to enact the dynamic safe zones, and thereafter alter the dynamic safe zones as a result of the assessment. Such assessment can be performed prior to developing a motion profile, but may also be determined real-time to update the dynamic safe zones. The assessment can include a determination of reliability, accuracy, cycle times, run rates, etc. To set forth just a few nonlimiting examples, if a sensor is deemed to be unreliable (e.g. 
determined via redundancy fault checking), the use of that sensor to either adjust the dynamic safe zone or determine intrusion into the dynamic safe zone can be adjusted.” Thus, Cole is concerned with altering safe zones based on the performance of safety sensors, not on any evaluation of the zones themselves. Examiner respectfully disagrees. The zones are entirely dependent upon the sensors. Paragraph [0049] is further describing the nature of the relationship between the sensors and the zones. This defining of the relationship enables the zones to be evaluated. Cole paragraph [0049] recites: [0049] The technique described in item 2) a. will be understood to include the ability to assess the ability of sensors to provide information to the controller 55 useful to enact the dynamic safe zones, and thereafter alter the dynamic safe zones as a result of the assessment. Such assessment can be performed prior to developing a motion profile, but may also be determined real-time to update the dynamic safe zones. The assessment can include a determination of reliability, accuracy, cycle times, run rates, etc. To set forth just a few nonlimiting examples, if a sensor is deemed to be unreliable (e.g. determined via redundancy fault checking), the use of that sensor to either adjust the dynamic safe zone or determine intrusion into the dynamic safe zone can be adjusted. A sensor feed with low reliability can give rise to a larger dynamic safe zone. In similar fashion, a sensor with low accuracy and/or low sample rate can give rise to larger dynamic safe zones. A relationship can thus be built and carried with the controller 55 for adjusting the dynamic safe zones as a function of the sensor reliability characteristics (e.g. accuracy, sample rate, etc). The controller 55 can determine the sensor characteristic and, using the predetermined relationship, adjust the dynamic zone accordingly. 
Some characteristics of the sensors may have one or more levels to which the dynamic safe zone is adjusted. If a sensor has high, medium, or low reliability that can give rise to a first, second, and third dynamic safe zone adjustment. The adjustments to the dynamic safe zones can be incremental across all of the different sensor feeds, or the adjustments can be a one-size-fits-all approach. Furthermore, some levels of adjustments can be incremental, while other levels of adjustments are one-size-fits-all. If the sensors did not exist, or did not relate to the safety zones at all, then the safety zones could be defined arbitrarily in space around the robot, but the robot would have no way of being able to follow the rules set by the zones. If a zone was defined as “when at location (X,Y,Z) and pressure at point P on robot arm, the zone is not safe” and the robot had no idea where (X,Y,Z) was, had no ability to measure pressure, and did not know where point P on its arm was, then the zone definition would be meaningless. Conversely, defining a zone based on sensor data and the accuracy of the data gives the robot the ability to evaluate the safety zone. If the safety zone is defined as “when at location (X,Y,Z) and pressure at point P on robot arm, the zone is not safe”, and the robot knew that its global 3D coordinate orientation had 99.99% accuracy, pressure sensor 99.99% accuracy, and knew exactly where point P was on its arm, then it could ascertain where the precise zone was defined in space, and could determine with a high degree of confidence that the safety zone was not safe due to sensing pressure at that position and location. 
Similarly, if the global 3D coordinate orientation was 50% accurate, pressure sensor was 99.99% accurate, and knew within a few centimeters where point P was on its arm, then it could not determine with a high degree of confidence where the safety zone was defined in space, and thus could not evaluate whether the pressure that was sensed is at a safe or unsafe location, since the position of the arm in space is not really known. Thus, the sensor inaccuracy would directly cause the safety buffer to be evaluated as not accurate. Cole fails to disclose or teach "evaluating, by the control system, a border position of each virtual safety border with regard to the obstacle position," as recited in claims 1, 8, and 14. The rejection alleges, at pages 15-16, that Cole paragraphs [0019], [0020], [0022], and [0029] teach this. Applicant respectfully disagrees. The cited passages of Cole do not disclose such evaluation, and the rejection does not explain what such evaluation would be. Paragraph [0019] states that a dynamic safe zone can be monitored to alter behavior of the robot 50 if people or property intrude upon the safe zone. This means that the dynamic safe zone is monitored for people or property entering the zone. The monitoring and altering of robot behavior does not equate to an evaluation of a border position of the safe zone with regard to a position of people or property. Paragraph [0019] also describes: "If a person were to intrude into the static safe zone 74, the robot 50 can be commanded to cease operation by the controller 55 until the person exits." However, this is not an evaluation of a safe zone border position; rather, it is an evaluation of a person's position within a safe zone. The ceased operation of the robot does not constitute an evaluation of a border position of the safe zone. Examiner respectfully disagrees. 
The claim limitation states: “evaluating, by the control system, a border position of each virtual safety border with regard to the obstacle position.” This does not quantify, qualify, or define the type of evaluation. Based on the broadest reasonable interpretation of the claim, evaluating can be determining the significance, worth or condition of usually by careful appraisal and study. Cole paragraph [0019] recites: [0019] The controller 55 can include a number of features structured to provide a dynamic safe zone which can be monitored to alter behavior of the robot 50 if people or property intrude upon the safe zone. Turning now to FIGS. 3 and 4, illustrations are provided to further describe some features of the dynamic safe zones described in detail further below. FIG. 3 depicts a robot arm extended into a first position (Position 1) and a second position (Position 2). In a known approach to defining a safe zone around the robot 50, a static safe zone 74 is defined in row 76 in which the robot 50 is allowed to operate. The safe zone 74 is dubbed ‘static’ because it is structured to remain the same regardless of robot position/ orientation/ operation within the safe zone. If a person were to intrude into the static safe zone 74, the robot 50 can be commanded to cease operation by the controller 55 until the person exits. The “dynamic safe zone which can be monitored to alter behavior of the robot 50 if people or property intrude upon the safe zone” is a form of evaluating the safety zone by determining whether a condition of a human or object has entered the zone. The “If a person were to intrude into the static safe zone 74, the robot 50 can be commanded to cease operation by the controller 55 until the person exits” demonstrates that the safety zone is evaluated to have its border further away than the position of the person, and as such the person is inside the safety zone. 
Here, determining a state of the safety border of the robot in its environment including obstacles is an evaluation of the border position of the safety border with regard to the obstacle position.

Paragraph [0020] teaches continued operation of a robot when a person is not in the safe zone, stating: "a person can intrude into spaces near the robot 50, and so long as the person does not intrude into the dynamic safe zone 78, the robot 50 can continue to operate." Again, the continued operation of the robot does not constitute an evaluation of a border position of the safe zone with regard to a position of the person. Paragraph [0020] further indicates that the controller 55 or other pertinent device can adjust the size and/or shape of the dynamic safety zone 78 to determine the nature of the robot 50 actions. Such adjustment is not made based on an obstacle position of an obstacle and is thus irrelevant for an evaluation of a border position of a virtual safety border with regard to an obstacle position. Paragraphs [0022] and [0029] are silent regarding an obstacle, let alone an evaluation of a border position of a virtual safety border with regard to an obstacle position.

Examiner respectfully disagrees. Determining a state of the safety border of the robot in its environment including obstacles is an evaluation of the border position of the safety border with regard to the obstacle position. Cole paragraphs [0020], [0022], and [0029] recite:

[0020] The instant application provides for the ability to produce one or more dynamic safe zones 78 as illustrated in one embodiment shown in row 80. The safe zone 78 is dubbed ‘dynamic’ because it can be changed during one or more portions of operation of the robot 50. In the contemplated embodiments of the instant application, a person can intrude into spaces near the robot 50, and so long as the person does not intrude into the dynamic safe zone 78, the robot 50 can continue to operate. As such, sensors (e.g.
57 and/or 58) can monitor operation of the robot 50 as well as the proximity of a person to the robot 50 (and/or to any particular component 52), and if necessary the controller 55 or other pertinent device can adjust the size and/or shape of the dynamic safety zone 78 to determine the nature of the robot 50 actions.

[0022] One nonlimiting form of a dynamic safe zone 78 is shown in FIG. 4. The safe zone 78 is depicted in two dimensions in the illustration across a cross section of the robot component 52, but it will be appreciated that such a zone 78 can be volumetrically extended around the component 52 as needed. The dynamic safe zone 78 is composed of three different areas, a stop zone 82, a slow zone 84, and a normal monitored zone 86.

[0029] 1) A dynamically generated set of safe zone rules that is updated on the fly in response to external input and internal system monitoring

The “a person can intrude into spaces near the robot 50, and so long as the person does not intrude into the dynamic safe zone 78, the robot 50 can continue to operate” is a form of evaluating the safety zone by determining whether a condition of a human or object has entered the zone. This is performed via the “monitor operation of the robot”. The “safe zone 78 is dubbed ‘dynamic’ because it can be changed during one or more portions of operation of the robot” reflects the ability to adapt the safety position in response to monitoring and evaluation of the safety border with respect to the environment and obstacles. A safety border would not need to change or be adjusted if the operating conditions and surroundings stayed constant. A constant environment would enable the safety border to be maintained in an initial state without issue.
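The accuracy reasoning in the examiner's response above — a precisely localized robot can conclusively judge whether a sensed event lies inside a safety zone, while a poorly localized one cannot — can be sketched as a simple decision rule. This is purely an illustrative model: the function, the spherical-border geometry, and all parameter names are assumptions made for this sketch and appear in neither Cole nor the application.

```python
import math

def can_evaluate_zone(sensed_pos, border_center, border_radius, pos_uncertainty):
    """Hypothetical check: can a sensed position be conclusively classified
    as inside or outside a spherical safety border, given the localization
    uncertainty of the sensor?"""
    distance = math.dist(sensed_pos, border_center)
    margin = abs(distance - border_radius)
    if margin <= pos_uncertainty:
        return None                       # uncertainty straddles the border: inconclusive
    return distance < border_radius       # True -> inside the zone (unsafe)

# High-accuracy localization: the border position can be evaluated.
print(can_evaluate_zone((1.0, 0.0, 0.0), (0.0, 0.0, 0.0), 2.0, 0.01))  # True (inside)
# Low-accuracy localization near the border: evaluation is inconclusive,
# mirroring the examiner's 50%-accuracy example.
print(can_evaluate_zone((1.9, 0.0, 0.0), (0.0, 0.0, 0.0), 2.0, 0.5))   # None
```

The design choice mirrors the argument being made: a larger sensing uncertainty demands a larger margin before any conclusion about the border can be drawn, which is consistent with Cole's teaching that less reliable sensors give rise to larger dynamic safe zones.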
Paragraphs [0022] and [0029] are used here to further demonstrate the “normal monitored zone” and the ability of the system to be “updated on the fly in response to external input and internal system monitoring.” That is, the system evaluates a border position of each virtual safety border with regard to the obstacle position. Applicant's Specification provides, as an example of an evaluation of a border position, that "the braking distance should be equal to or shorter than the safety distance" where the safety distance 52 is between the virtual border and an obstacle outside the virtual border. That is, embodiments of the invention check whether the virtual border is set in the right place with respect to an obstacle to maintain safety. Cole does not teach or suggest this. By way of the evaluation claimed in claims 1, 8, and 14, one or more border positions in a physical environment can be automatically verified. This in turn generates large time savings, as described in paragraph [0018] of Applicant's Specification. Accordingly, reconsideration of the rejections of claims 1, 8, and 14 made under 35 U.S.C. § 102 is respectfully requested. In response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., an example of an evaluation of a border position, that "the braking distance should be equal to or shorter than the safety distance") are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). Claims 5 and 12 recite "defining, by the control system, at least one virtual safety border in relation to the industrial robot based on the obstacle position, the at least one virtual safety border being associated with a condition to be fulfilled by the industrial robot." 
The rejection alleges, at pages 17-18, that Cole paragraphs [0019], [0020] and [0022] teach this. Applicant respectfully disagrees. Paragraph [0019] teaches altering behavior of a robot if people or property intrude into a safe zone. However, nowhere does Cole suggest that the safe zone would be defined based on a position of the people or property. Paragraph [0020] states: "The safe zone 78 ... can be changed during one or more portions of operation of the robot 50." Cole merely describes a changing safe zone, but fails to suggest that the safe zone is defined based on an obstacle position. Paragraph [0020] further describes: "a person can intrude into spaces near the robot 50, and so long as the person does not intrude into the dynamic safe zone 78, the robot 50 can continue to operate." This statement, however, does not disclose that the safe zone is defined based on a position of the person. Paragraph [0020] also teaches adjusting the size and/or shape of the safe zone 78. Such adjustment is not made based on an obstacle position of an obstacle and is thus irrelevant for a definition of at least one virtual safety border based on an obstacle position. Paragraph [0022] merely describes the form of a dynamic safe zone 78. There is no mention of an obstacle, let alone defining at least one virtual safety border in relation to the industrial robot based on an obstacle position. Accordingly, reconsideration of the rejections of claims 5 and 12 made under 35 U.S.C. § 102 is respectfully requested. 
Paragraph [0019] is referenced to demonstrate that the safety zones are monitored with respect to people and property that might intrude upon the zone in order to change robot behavior in the event there is an intrusion into the zone (“The controller 55 can include a number of features structured to provide a dynamic safe zone which can be monitored to alter behavior of the robot 50 if people or property intrude upon the safe zone.”)

Paragraph [0020] is referenced to demonstrate that the safety zones are dynamic and can be adjusted during robot operations (“The instant application provides for the ability to produce one or more dynamic safe zones 78 as illustrated in one embodiment shown in row 80. The safe zone 78 is dubbed ‘dynamic’ because it can be changed during one or more portions of operation of the robot 50.”) Paragraph [0020] also describes the monitoring of the robot’s proximity to a person, and explains that the controller can change the safety zones to adjust the robot actions (“As such, sensors (e.g. 57 and/or 58) can monitor operation of the robot 50 as well as the proximity of a person to the robot 50 (and/or to any particular component 52), and if necessary the controller 55 or other pertinent device can adjust the size and/or shape of the dynamic safety zone 78 to determine the nature of the robot 50 actions.”) That is, when a robot is determined to be too close to a human, the safety zone can be adjusted to control the robot to move further away, slow down, or the like.

Paragraph [0022] is referenced to describe the adjustability of the safety zones, and that there are three different areas within a dynamic zone (“zone 78 can be volumetrically extended around the component 52 as needed. The dynamic safe zone 78 is composed of three different areas, a stop zone 82, a slow zone 84, and a normal monitored zone 86.”) This means that a robot safety zone can force a robot to change its speed, or stop.
If this action is not desired, then the safety zone needs to be changed so the robot does not need to slow or stop. This requires adapting the safety zone in order for the robot to avoid any person or obstacle that it encounters that would be within the stop or slow zones. Examiner further includes paragraph [0053], which also defines safety borders in relation to the robot based on the obstacle position, the safety border being associated with a condition to be fulfilled by the robot. Cole paragraph [0053] recites: [0053] The technique described in item 4) will be understood to include the ability to assess robot operations that have the dynamic safety zones (e.g. zones implemented using any one or a combination of the embodiments described above) and adjust motion planning to account for the effect of those zones. For example, if a robot is determined to have relatively large dynamic safety zones that are routinely “tripped” by regular and predictable intrusion of a person or foreign object, then the motion of the robot can be adjusted to account for that regular occurrence. For example, if a person enters the robot zone always on the robot's left side between the hours of 3:30 pm and 4:30 pm during a shift change, the robot can be programmed with a modified motion profile during those times and/or reprogrammed entirely to avoid that issue. If the robot routinely encroaches upon a concrete pillar in the workspace which causes routine slowdown of the robot as it traverses that space, the motion profile can likewise be adjusted. The motion profile can be adjusted using machine learning, but can also be adjusted manually by an operator. As such, Examiner maintains the prior art rejections of claims 1-17 and 19-21. Conclusion THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. 
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to MADISON B EMMETT whose telephone number is (303)297-4231. The examiner can normally be reached Monday - Friday 9:00 - 5:00 ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tommy Worden can be reached at (571)272-4876. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). 
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /MADISON B EMMETT/Examiner, Art Unit 3658 /JASON HOLLOWAY/Primary Examiner, Art Unit 3658
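The specification example quoted in the applicant's remarks above — that "the braking distance should be equal to or shorter than the safety distance" between the virtual border and an obstacle — amounts to a one-line check. The sketch below is illustrative only: the kinematic stopping-distance formula v²/(2a) and all names are assumptions for this example, not taken from the application.

```python
def border_position_ok(speed_mps, max_decel_mps2, safety_distance_m):
    """Hypothetical check of the cited criterion: the virtual border is
    acceptably placed if the robot can brake to a stop within the safety
    distance to the obstacle."""
    braking_distance_m = speed_mps ** 2 / (2 * max_decel_mps2)  # standard kinematics
    return braking_distance_m <= safety_distance_m

print(border_position_ok(2.0, 4.0, 0.6))  # True  (stops in 0.5 m, within 0.6 m)
print(border_position_ok(2.0, 4.0, 0.4))  # False (needs 0.5 m, only 0.4 m available)
```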

Prosecution Timeline

Mar 27, 2024
Application Filed
Aug 09, 2025
Non-Final Rejection — §101, §102
Nov 10, 2025
Response Filed
Feb 07, 2026
Final Rejection — §101, §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594945
DETECTION AND REMEDIATION OF AN INSTABILITY CONDITION IN A VEHICLE-TRAILER SYSTEM
2y 5m to grant Granted Apr 07, 2026
Patent 12583108
GRASP SELECTION
2y 5m to grant Granted Mar 24, 2026
Patent 12573296
ROAD INFORMATION DISPLAY SYSTEM AND METHOD
2y 5m to grant Granted Mar 10, 2026
Patent 12572162
SYSTEM AND METHOD FOR PRECISE FORCE CONTROL OF ROBOT
2y 5m to grant Granted Mar 10, 2026
Patent 12559122
STEERING INPUT WITH LIGHT SOURCE
2y 5m to grant Granted Feb 24, 2026
Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
79%
Grant Probability
90%
With Interview (+11.4%)
2y 9m
Median Time to Grant
Moderate
PTA Risk
Based on 158 resolved cases by this examiner. Grant probability derived from career allow rate.
