Prosecution Insights
Last updated: April 19, 2026
Application No. 17/674,123

CONTROL DEVICE FOR CONTROLLING A TECHNICAL SYSTEM, AND METHOD FOR CONFIGURING THE CONTROL DEVICE

Status: Non-Final Office Action (§101, §103)
Filed: Feb 17, 2022
Examiner: AHMED, ISTIAQUE
Art Unit: 2116
Tech Center: 2100 — Computer Architecture & Software
Assignee: Siemens Aktiengesellschaft
OA Round: 3 (Non-Final)

Grant Probability: 69% (Favorable)
OA Rounds: 3-4
Time to Grant: 3y 0m
With Interview: 86%

Examiner Intelligence

Career Allow Rate: 69% (134 granted / 194 resolved), +14.1% vs TC avg (above average)
Interview Lift: +17.4% allowance rate for resolved cases with an interview vs without (a strong lift)
Typical Timeline: 3y 0m average prosecution; 22 applications currently pending
Career History: 216 total applications across all art units
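
The headline figures above can be cross-checked from the underlying counts. A quick arithmetic sketch in Python (this assumes the dashboard simply adds the interview lift to the career allowance rate, which is our reading of the display rather than a documented formula):

    # Cross-check of the dashboard figures (assumed derivation, not a documented formula).
    granted, resolved = 134, 194
    career_allow_rate = granted / resolved               # 0.6907... -> shown as 69%
    interview_lift = 0.174                               # +17.4 percentage points with an interview
    with_interview = career_allow_rate + interview_lift  # 0.8647... -> shown as 86%
    print(f"career: {career_allow_rate:.1%}, with interview: {with_interview:.1%}")
    # career: 69.1%, with interview: 86.5%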

Statute-Specific Performance

§101: 13.6% (-26.4% vs TC avg)
§103: 43.4% (+3.4% vs TC avg)
§102: 13.3% (-26.7% vs TC avg)
§112: 20.8% (-19.2% vs TC avg)

TC avg = Tech Center average estimate. Based on career data from 194 resolved cases.

Office Action

Grounds of rejection: §101 and §103
DETAILED ACTION

This Office Action is in response to the communication filed on 12/05/2025.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/05/2025 has been entered.

Response to Arguments

Applicant's arguments filed 12/05/2025, with respect to the rejection(s) of claim(s) 1-13 under 35 U.S.C. § 101, have been fully considered but they are not persuasive.

Applicant in pages 6-7 argues, "Contrary to the Examiner's assertion, amended claim 1 does not recite a mental process. Examining whether a limit value for an operating parameter would be exceeded in the present state if the control action signal were applied requires real-time automated monitoring and control of physical technical systems with actual operating parameters such as temperature, pressure, speed, and emissions. This examination must be performed in real-time during control system operation to prevent limit values from being exceeded, which cannot practically be performed by a human mind with pen and paper. The specification describes that "state signals ZS that specify a respective present state of the technical system TS are transmitted from the technical system TS to the control device CTL" and that "admissible control action signals AS are transmitted from the control device CTL to the technical system TS in order to control the system in an optimized and safety-compliant fashion."3 The examination involves state-specific limit values for operating parameters of technical systems, which are physical constraints of real systems, not abstract mental concepts."

Examiner respectfully disagrees. The claim doesn't recite or require "real-time automated monitoring and control of physical technical systems with actual operating parameters such as temperature, pressure, speed, and emissions". Additionally, neither the claim nor the specification places any specific limitation on the time required to process the information or perform the claimed steps that would prevent the claimed steps from being practically performed in the human mind.

Applicant in page 7 argues, "The limitation "wherein the safety module examines whether a limit value for an operating parameter would be exceeded in the present state if the control action signal were applied" as recited by amended claim 1 provides a specific mechanism for how the conversion is accomplished. This directly addresses the Examiner's assertion that the conversion limitation is directed to reciting an idea of an outcome with no restriction on how the result is accomplished and no description of the mechanism for accomplishing the result.4 The amendment specifies that the mechanism involves examining whether operating parameter limit values would be exceeded, which is a concrete technical process.
For instance, the specification describes that operating parameters include "physical, chemical, control-oriented, effect-oriented and/or design-dependent operating parameters" such as "temperature, pressure, emissions, vibrations, vibrational states or resource consumption of the technical system."5 The specification further explains that "[t]he performance to be optimized can relate in particular to a capacity, a yield, a velocity, an operating period, a precision, an error rate, an error scale, a resource requirement, an efficiency, a pollutant emission, a stability, a wear, a life and/or other target parameters of the technical system TS."6 Thus, amended claim 1 recites a specific technical process for ensuring safety-compliant control of technical systems, not merely applying an abstract idea to a field of use. The examination of whether operating parameter limits would be exceeded is integral to the control process, not an incidental addition."

Examiner respectfully disagrees. With regards to the aspects that are cited from the specification above, these aspects are not in the claim. For a claimed invention to provide an improvement, the claim itself needs to reflect the disclosed improvement. Therefore applicant's argument regarding improving the safety and performance of physical technical systems, based on aspects that are recited in the specification, is not persuasive. With regards to "The limitation "wherein the safety module examines whether a limit value for an operating parameter would be exceeded in the present state if the control action signal were applied" as recited by amended claim 1 provides a specific mechanism for how the conversion is accomplished.", this limitation is directed to a mental step. The human mind is capable of determining whether a limit value for an operating parameter would be exceeded if a certain control action signal were applied. It is important to note that the judicial exception alone cannot provide the improvement. Therefore recitation of this limitation does not provide an improvement.

Applicant in page 8 recites, "Furthermore, amended claim 1 provides a technological improvement by enabling machine learning modules to be trained while ensuring that operating parameter limits are not exceeded during training and operation. The claim improves how control devices are configured for technical systems by integrating safety examination directly into the training process. As described in the specification, "an output signal of the machine learning module is supplied to the safety module" and "the output signal is converted into an admissible control action signal by the safety module on the basis of the safety information depending on the state signal." The Examiner incorrectly concludes that controlling the technical system on a basis of an admissible control signal merely indicates a field of use and is an incidental or token addition to the claim that did not alter or affect how the rest of the methods are performed. However, the amendment demonstrates that this is not merely a field of use. The amended claim 1 recites specific mechanisms that are integral to controlling physical technical systems. This examination is not incidental but is fundamental to the safety-compliant control process."

Examiner respectfully disagrees.
With regards to the technological improvement, for a claimed invention to provide an improvement, the disclosure must provide sufficient details such that one of ordinary skill in the art would recognize the claimed invention as providing an improvement, the claim must include the components or steps of the invention that provide the improvement described in the specification, and the judicial exception alone cannot provide the improvement. The improvement can be provided by one or more additional elements. (See MPEP 2106.05(a)) Applicant is advised to point to the portion of the specification that provides sufficient details regarding a claimed limitation providing an improvement, aside from the limitations that are identified as the judicial exception.

With regards to applicant's argument that the amendment demonstrates that this is not merely a field of use, examiner disagrees. Controlling the technical system on a basis of "an admissible control signal" does not apply the admissible control action signal that was generated by the safety module in step (d), since it does not refer to that control signal, and neither does it alter or affect how the process of converting the output signal and ascertaining a performance for control of the technical system is performed.

Applicant in pages 8-9 argues, "Amended claim 1 further recites "wherein the safety information comprises state-specific rules, conditions and/or limit values for control action signals" as supported by the specification. This amendment further integrates the claimed invention into a practical application by specifying the technical content of the safety information used by the safety module. The recitation of state specific rules, conditions and/or limit values for control action signals demonstrates that the safety information is not abstract data, but rather comprises concrete technical constraints that are specific to particular states of the technical system."

Examiner respectfully disagrees. The terms "state specific rules", "conditions" and "limit values for control action signals" are broad terms and do not particularly point to any specific technology elements or terms; therefore they do not impose any "concrete technical constraint".

Applicant's arguments filed 12/05/2025, with respect to the rejection of claim(s) 1, 3-9 and 11-13 under 35 U.S.C. § 103, have been fully considered but they are not persuasive.

Applicant in page 10 argues, "While Kalabic teaches state-specific constraints and determining whether constraints will be violated, Kalabic does not teach examining whether a limit value for an operating parameter would be exceeded in the present state. As shown in Kalabic, "[i]f a solution does not exist, it means that constraints will very likely be violated; therefore the supervisor sets the penalty c(t) to the maximum penalty 242 and modifies the command received from the RL controller and passes the modified command to the system 243." 12 This approach focuses on determining whether constraint violations will occur and modifying commands accordingly, which is fundamentally different from examining whether a limit value for an operating parameter would be exceeded in the present state."

Examiner respectfully disagrees. Kalabic in Fig. 2A and ¶0041 teaches, the supervisor receives command 206 from the RL controller and "transmits a safe command 216 in case the command 206 was deemed unsafe." ¶0049 teaches, When a command is deemed unsafe, it means that applying it will lead to constraint violation.
¶0048 teaches, The supervisor obtains the state 240 and attempts to solve the (SO) problem 241. If a solution exists, the supervisor passes the command received from the RL controller to the system 245. If a solution does not exist, it means that constraints will very likely be violated; therefore the supervisor modifies the command received from the RL controller and passes the modified command to the system 243. Furthermore, ¶0059 teaches, converting the optimal command to a test command, checking for safety, and generating a safe actuator command if the test command is unsafe. Therefore it teaches that the supervisor (safety module) examines whether a constraint (a limit value for an operating parameter) will be exceeded if the control command is applied.

Applicant in pages 11-12 argues, "Additionally, Kalabic in view of Nishi fails to teach or render obvious, "wherein the safety information comprises state-specific rules, conditions and/or limit values for control action signals." 15 While Kalabic teaches state-specific constraints related to control invariant sets, Kalabic does not teach safety information that comprises state-specific rules, conditions and/or limit values for control action signals as recited in amended claim 1. …… Nishi's safety requirements are general constraints that command requirements to be satisfied in the execution process, not state-specific rules, conditions and/or limit values for control action signals as required by amended claim 1. Nishi's approach involves setting operation-time constraints and safety requirements, then determining whether planned operations can achieve expected operations while satisfying these requirements. This is distinct from safety information that comprises state-specific rules, conditions and/or limit values for control action signals."

Examiner respectfully disagrees. Under the broadest reasonable interpretation the claimed limitation doesn't require any specific state, safety information, rules, conditions or limit values. These terms are recited broadly and can encompass various different parameters. Kalabic in ¶0050 teaches state-specific safety constraints and control input constraints (limit values for control action signals). However, it doesn't teach reading in the constraint. Nishi in ¶0097 teaches, an autonomous control device 02 inputs a safety requirement 026. ¶0043 teaches, safety requirement 026 is a constraint commanding the requirement to be satisfied in the execution process (limit values for control action signals). Therefore the combination of Kalabic and Nishi teaches, reading in safety information about an admissibility of a control action signal, which safety information is specific to a state of the technical system, by a safety module, wherein the safety information comprises state-specific rules, conditions and/or limit values for control action signals.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-13 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Claim 1 is directed to one of the four statutory categories in that it recites a method.
Claim 1 recites, "converting the output signal into an admissible control action signal by the safety module on a basis of the safety information depending on the state signal, wherein the safety module examines whether a limit value for an operating parameter would be exceeded in the present state if the control action signal were applied, e) ascertaining a performance for control of the technical system by the admissible control action signal;" These limitations, as drafted, are directed to a process that, under its broadest reasonable interpretation, covers performance of the limitations in the mind but for the recitation of a generic computer. That is, other than a computer being claimed as performing this function, nothing in the claim element precludes the step from practically being performed in the mind.

For example, with regards to converting the output signal into an admissible control action signal by the safety module on a basis of the safety information depending on the state signal, wherein the safety module examines whether a limit value for an operating parameter would be exceeded in the present state if the control action signal were applied, the human mind, mentally or with pen and paper, is capable of converting the output signal (i.e. information) into an admissible signal (i.e. information) on the basis of the safety information and the state signal (i.e. information). Similarly, the human mind is also capable of determining whether a certain operating parameter would be exceeded if the control action signal (i.e. the output signal) were applied. Merely executing these steps in a computer environment does not take the claim limitation out of the mental processes grouping. (see MPEP 2106.04(a)(2)(III)(C))

With regards to ascertaining a performance for control of the technical system by the admissible control action signal, without any specific limitation narrowing the steps of ascertaining the performance, the human mind is capable of performing the function of ascertaining a performance of a technical system. The courts do not distinguish between mental processes that are performed entirely in the human mind and mental processes that require a human to use a physical aid (e.g., pen and paper or a slide rule) to perform the claim limitation. Nor do the courts distinguish between claims that recite mental processes performed by humans and claims that recite mental processes performed on a computer. As the Federal Circuit has explained, "[c]ourts have examined claims that required the use of a computer and still found that the underlying, patent-ineligible invention could be performed via pen and paper or in a person's mind." (see MPEP 2106.04(a)(2)) The mere nominal recitation of a generic computer to perform this determination does not take the claim limitation out of the mental processes grouping. Thus, the claim recites a mental process. This judicial exception is not integrated into a practical application.
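
For readers of this report, the limitation at the center of this dispute can be pictured with a minimal, hypothetical sketch. The function and variable names below are illustrative assumptions only; they do not come from the application, the Office Action, or the cited references:

    # Hypothetical sketch of the disputed step (d): examine whether a state-specific
    # limit value for an operating parameter would be exceeded if the proposed control
    # action were applied, and convert to an admissible action otherwise.
    def convert_to_admissible(state, proposed_action, safety_info, predict_parameter):
        # safety_info: {parameter name -> {"limit": value, "default_action": fallback callable}}
        # predict_parameter(state, action, name): estimated value the named operating
        # parameter (e.g. temperature or pressure) would take if the action were applied.
        for name, rule in safety_info.items():
            if predict_parameter(state, proposed_action, name) > rule["limit"]:
                # A limit value would be exceeded in the present state:
                # substitute the state-specific default admissible action.
                return rule["default_action"](state)
        # No limit value would be exceeded: the proposed action is admissible as-is.
        return proposed_action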
Claim recites additional elements directed to a) reading in safety information about an admissibility of a control action signal, which safety information is specific to a state of the technical system, by a safety module, wherein the safety information comprises state-specific rules, conditions and/or limit values for control action signals; b )supplying a state signal indicating a state of the technical system to a machine learning module and to the safety module; c) supplying an output signal of the machine learning module to the safety module; f) training the machine learning module to optimize the performance; and g) controlling the technical system on a basis of an admissible control signal that is output by the safety module, using the control device configured on a basis of the trained machine learning module. Limitations directed to “reading in safety information about an admissibility of a control action signal, which safety information is specific to a state of the technical system”, “supplying a state signal indicating a state of the technical system to a machine learning module and to the safety module” and “supplying an output signal of the machine learning module”, under broadest reasonable interpretation, are directed to mere data gathering and transmission and insignificant extra solution activity for the purpose of executing the abstract idea. Therefore, these limitations do not integrate a judicial exception. (see MPEP 2106.05(g)). Limitation directed to “training the machine learning module to optimize the performance” does not describe any steps taken to train the machine learning module and therefore is directed to reciting an idea of an outcome with no restriction on how the result is accomplished and no description of the mechanism for accomplishing the result. This limitation amounts to a mere instruction to apply an exception and therefore does not integrate a judicial exception into a practical application (See MPEP 2106.05(f)). Limitation directed to “controlling the technical system on a basis of an admissible control signal that is output by the safety module, using the control device configured on a basis of the trained machine learning module.”, merely indicates a field of use of the abstract idea (i.e. a technical system). This is an incidental or token addition to the claim that did not alter or affect how the rest of the methods are performed. Therefore this limitation amount to no more than generally linking the use of a judicial exception to a particular technological environment or field of use and this fails to integrate the judicial exception into a practical application. (see MPEP 2106.05(h)) The claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception. 
Claim recites additional elements directed to a) reading in safety information about an admissibility of a control action signal, which safety information is specific to a state of the technical system, by a safety module wherein the safety information comprises state-specific rules, conditions and/or limit values for control action signals; b )supplying a state signal indicating a state of the technical system to a machine learning module and to the safety module; c) supplying an output signal of the machine learning module to the safety module; d) converting the output signal into an admissible control action signal by the safety module on a basis of the safety information depending on the state signal, f) training the machine learning module to optimize the performance; and g) controlling the technical system on a basis of an admissible control signal that is output by the safety module, using the control device configured on a basis of the trained machine learning module. Limitations directed to “reading in safety information about an admissibility of a control action signal, which safety information is specific to a state of the technical system”, “supplying a state signal indicating a state of the technical system to a machine learning module and to the safety module” and “supplying an output signal of the machine learning module”, under broadest reasonable interpretation, are directed to mere data gathering and transmission. These elements are recited in a generic manner and are directed to activity that are well-understood, routine and conventional in the field of computer implemented processes. Courts have found receiving and transmitting data (Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 and buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014)) to be well‐understood, routine, and conventional when recited as insignificant extra-solution activity (see MPEP 2106.05(d). Therefore, these limitations do not provide significantly more than the judicial exception. (see MPEP 2106.05(d)). Limitation directed to “training the machine learning module to optimize the performance” does not describe any steps taken to train the machine learning module and therefore is directed to reciting an idea of an outcome with no restriction on how the result is accomplished and no description of the mechanism for accomplishing the result. This limitation amounts to a mere instruction to apply an exception and therefore and do not provide significantly more than the judicial exception (See MPEP 2106.05(f)). Limitation directed to “controlling the technical system on a basis of an admissible control signal that is output by the safety module, using the control device configured on a basis of the trained machine learning module.”, merely indicates a field of use of the abstract idea (i.e. a technical system). This is an incidental or token addition to the claim that did not alter or affect how the rest of the methods are performed. Therefore this limitation amount to no more than generally linking the use of a judicial exception to a particular technological environment or field of use and do not provide significantly more than the judicial exception. (see MPEP 2106.05(h)) Claim 2 depends on claim 1 therefore it recites the abstract idea of claim 1. 
Claim 2 further recites, wherein a backpropagation method is used to train the machine learning module, the backpropagation method involving a performance signal that quantifies the performance being backpropagated from an output of the safety module to an input of the safety module and a resulting performance signal furthermore being backpropagated from an output of the machine learning module to an input of the machine learning module. This limitation merely indicates using a backpropagation method to train the machine learning module by backpropagating the signals from output to input. It provides a result oriented solution and lacks the details of how the machine learning module is trained based on the backpropagated signal. This limitation amounts to a mere instruction to apply an exception and therefore and does not integrate a judicial exception into a practical application or provide significantly more.(See MPEP 2106.05(f)). Claim 3 depends on claim 1 therefore it recites the abstract idea of claim 1. Claim further recites, wherein the safety module uses the safety information to examine whether the output signal is admissible as a control action signal, and in that the output signal is converted into the admissible control action signal on the basis of the examination result. With regards to “the safety module uses the safety information to examine whether the output signal is admissible as a control action signal”, this is a step that can be practically performed by human mind. That is human mind is capable of determining whether an output signal is admissible based on some safety information. Thus, this limitation recites a mental process. Claim also recites, “the output signal is converted into the admissible control action signal on the basis of the examination result.”. This limitation doesn’t recite any restriction on how the signal is converted and thus limitation is directed to reciting an idea of an outcome with no restriction on how the result is accomplished and no description of the mechanism for accomplishing the result. This limitation amounts to a mere instruction to apply an exception and therefore do not integrate the judicial exception into a practical application or provide significantly more than the judicial exception (See MPEP 2106.05(f)). Claim 4 depends on claim 3 and 1 therefore it recites the abstract idea of claim 3 and 1. Claim further recites, wherein if the output signal is admissible as a control action signal, the output signal is output by the safety module as an admissible control action signal, and otherwise the output signal is converted into the admissible control action signal. With regards to “the output signal is output by the safety module as an admissible control action signal”, under broadest reasonable interpretation, this limitation is directed to mere data transmission and insignificant extra solution activity for the purpose of executing the abstract idea. Therefore, these limitations do not integrate a judicial exception. (see MPEP 2106.05(g)). Furthermore element is recited in a generic manner and are directed to activity that are well-understood, routine and conventional in the field of computer implemented processes. Courts have found receiving and transmitting data (Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 and buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014)) to be well‐understood, routine, and conventional when recited as insignificant extra-solution activity (see MPEP 2106.05(d). 
With regards to "output signal is converted into the admissible control action signal", this limitation doesn't recite any restriction on how the signal is converted, and thus the limitation is directed to reciting an idea of an outcome with no restriction on how the result is accomplished and no description of the mechanism for accomplishing the result. This limitation amounts to a mere instruction to apply an exception and therefore does not integrate the judicial exception into a practical application or provide significantly more than the judicial exception (See MPEP 2106.05(f)).

Claim 5 depends on claims 3 and 1; therefore it recites the abstract idea of claims 3 and 1. Claim further recites, wherein the safety information indicates or encodes an admissible, state-specific default control action signal, and in that the output signal is converted into the admissible default control action signal on the basis of the examination result. With regards to "the safety information indicates or encodes an admissible, state-specific default control action signal", this limitation merely recites the type of data received in the "reading in safety information" step. Merely reciting the received data to have a particular content does not exclude it from the insignificant extra-solution nature of the activity, nor does it take it out of the well-understood, routine, and conventional nature of the activity. With regards to "the output signal is converted into the admissible default control action signal on the basis of the examination result", this limitation doesn't recite any restriction on how the signal is converted, and thus the limitation is directed to reciting an idea of an outcome with no restriction on how the result is accomplished and no description of the mechanism for accomplishing the result. This limitation amounts to a mere instruction to apply an exception and therefore does not integrate the judicial exception into a practical application or provide significantly more than the judicial exception (See MPEP 2106.05(f)).

Claim 6 depends on claims 3 and 1; therefore it recites the abstract idea of claims 3 and 1. Claim 6 further recites, "wherein a volume of training data available for a state specified by the state signal is ascertained for this state, and in that the examination for admissibility of the output signal is performed on the basis of the ascertained volume." Without any specific steps needed to ascertain the volume of the data and to perform the examination of admissibility, the human mind is capable of ascertaining a volume of certain data and determining admissibility based on the volume of data available. Therefore the limitation recites a mental process.

Claim 7 depends on claims 3 and 1; therefore it recites the abstract idea of claims 3 and 1. Claim further recites, wherein a forecast error or modelling error of the machine learning module is ascertained for a state specified by the state signal, and in that the examination for admissibility of the output signal is performed on the basis of the ascertained forecast error or modelling error. Without any specific steps needed to ascertain the forecast error or modelling error and to perform the examination of admissibility, the human mind is capable of ascertaining a forecast or modelling error and determining admissibility based on the ascertained error. Therefore the limitation recites a mental process.

Claim 8 depends on claim 1; therefore it recites the abstract idea of claim 1.
Claim further recites, wherein the safety information configures, indicates or encodes a transformation function, in that the output signal and the state signal are supplied to the transformation function, and in that the output signal is converted into the admissible control action signal by the transformation function on the basis of the state signal.. With regards to “the safety information configures, indicates or encodes a transformation function,”. This limitation under broadest reasonable interpretation recites, the safety information indicates a transformation function and thus merely recites the type of data received in the “reading in safety information”. Merely reciting the received data to have a particular content, does not exclude it from insignificant extra-solution nature of the activity neither does it take it out of well‐understood, routine, and conventional nature of the activity. With regards to “output signal is converted into the admissible control action signal by the transformation function on the basis of the state signal” this limitation is directed to using a mathematical relationship (i.e. transformation function) to convert output signal into a admissible control signal. Therefore the limitation is directed to mathematical concept grouping of abstract idea. Claim 9 depends on claim 1 therefore it recites the abstract idea of claim 1. Claim further recites, wherein the technical system is controlled by the admissible control action signal, in that a behavior of the technical system controlled in this way is detected, and in that the performance is derived from the detected behavior. This limitation doesn’t recite any restriction on how the behavior of the technical system is detected and what process is used in calculating the performance, and thus limitation is directed to reciting an idea of an outcome with no restriction on how the result is accomplished and no description of the mechanism for accomplishing the result. This limitation amounts to a mere instruction to apply an exception and therefore do not integrate the judicial exception into a practical application or provide significantly more than the judicial exception (See MPEP 2106.05(f)). Claim 10 depends on claim 1 therefore it recites the abstract idea of claim 1. Claim further recites, wherein a behavior of the technical system controlled by the admissible control action signal is simulated, predicted and/or read in from a database, and in that the performance is derived from the simulated, predicted and/or read-in behavior. With regards to “wherein a behavior of the technical system controlled by the admissible control action signal is simulated, predicted and/or read in from a database”, this limitation under broadest reasonable interpretation is directed to receiving information from a database which is directed to mere data gathering and is an insignificant extra solution activity for the purpose of executing the abstract idea. Therefore, these limitations do not integrate a judicial exception. (see MPEP 2106.05(g)). Furthermore element is recited in a generic manner and are directed to activity that are well-understood, routine and conventional in the field of computer implemented processes. Courts have found receiving and transmitting data (Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 and buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 
2014)) to be well-understood, routine, and conventional when recited as insignificant extra-solution activity (see MPEP 2106.05(d)). With regards to "the performance is derived from the simulated, predicted and/or read-in behavior", this limitation doesn't recite any restriction on what process is used in deriving the performance, and thus the limitation is directed to reciting an idea of an outcome with no restriction on how the result is accomplished and no description of the mechanism for accomplishing the result. This limitation amounts to a mere instruction to apply an exception and therefore does not integrate the judicial exception into a practical application or provide significantly more than the judicial exception (See MPEP 2106.05(f)).

Claim 11 depends on claim 1; therefore it recites the abstract idea of claim 1. Claim further recites, A control device for controlling a technical system, configured to carry out a method as claimed in claim 1. This limitation merely recites using a control device as a tool to execute the abstract idea. This amounts to simply adding a general purpose computer or computer components after the fact to an abstract idea, which does not integrate a judicial exception into a practical application or provide significantly more. (see MPEP 2106.05(f))

Claim 12 depends on claim 1; therefore it recites the abstract idea of claim 1. Claim further recites, A computer program product, comprising a computer readable hardware storage device having computer readable program code stored therein, said program code executable by a processor of a computer system to implement the method as claimed in claim 1. This limitation merely recites using computer components as a tool to execute the abstract idea. This amounts to simply adding a general purpose computer or computer components after the fact to an abstract idea, which does not integrate a judicial exception into a practical application or provide significantly more. (see MPEP 2106.05(f))

Claim 13 depends on claims 12 and 1; therefore it recites the abstract idea of claims 12 and 1. Claim further recites, A computer-readable storage medium having a computer program product as claimed in claim 12. This limitation merely recites using computer components as a tool to execute the abstract idea. This amounts to simply adding a general purpose computer or computer components after the fact to an abstract idea, which does not integrate a judicial exception into a practical application or provide significantly more. (see MPEP 2106.05(f))

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function. Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function. Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: controlling the technical system on a basis of an admissible control signal that is output by the safety module, using the control device configured on a basis of the trained machine learning module. in claim 1. Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 
112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. A review of the specification shows that the following appears to be the corresponding structure described in the specification for the 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph limitation: Published specification ¶0008 recites, control device according to embodiments of the invention can be for example embodied, or implemented, by one or more computers, processors, application-specific integrated circuits (ASIC), digital signal processors (DSP) and/or so-called “field programmable gate arrays” (FPGA). Therefore the control device is being interpreted to cover computers, processors, application-specific integrated circuits (ASIC), digital signal processors (DSP) and/or so-called “field programmable gate arrays” (FPGA) or equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. Claim(s) 1, 3-9 and 11-13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kalabic et al. (US20210049501A1) hereinafter Kalabic modified in view of Nishi (US20180032079A1) hereinafter Nishi. 
Regarding claim 1, Kalabic teaches, A computer-implemented method for configuring a control device for a technical system, wherein b)supplying a state signal indicating a state of the technical system to a machine learning module and to the safety module; (¶0033 teaches “The RL controller receives a feedback signal 112 from the system which is generally a function of both the system state and command vectors”. ¶0041 teaches “The supervisor 203 obtains the state measurement”) c) supplying an output signal of the machine learning module to the safety module; (¶0041 and Fig. 2A teaches RL controller supplies command 206 to Supervisor) d) converting the output signal into an admissible control action signal by the safety module on a basis of the safety information depending on the state signal, wherein the safety module examines whether a limit value for an operating parameter would be exceeded in the present state if the control action signal were applied, (In view of the specification ¶0045 and ¶0048 control action signal is being interpreted as the output signal. Kalabic in Fig. 2A and ¶0041 teaches, supervisor receives command 206 from RL control and “transmits a safe command 216 in case the command 206 was deemed unsafe.” ¶0049 teaches, When a command is deemed unsafe, it means that applying it will lead to constraint violation. ¶0048 teaches, The supervisor obtains the state 240 and attempts to solve the (SO) problem 241. If a solution exists, the supervisor passes the command received from the RL controller to the system 245. If a solution does not exist, it means that constraints will very likely be violated; therefore the supervisor and modifies the command received from the RL controller and passes the modified command to the system 243. Furthermore, ¶0059 teaches, converting optimal command to test command, checking for safety and generate safe actuator command if the test command is unsafe. Therefore it teaches supervisor (safety module) examines whether a constraint (a limit value for an operating parameter) will be exceeded if control command is applied.) e) ascertaining a performance for control of the technical system by the admissible control action signal; (The RL controller receives a feedback signal 112 from the system which is generally a function of both the system state and command vectors) f) training the machine learning module to optimize the performance; and (¶0033 teaches, The controller modifies the command according to the feedback.¶0050 teaches, determine a reward for a quality of the control policy on the state of the machine using a reward function of the sequence of control inputs and the sequence of states of the machine augmented with an adaptation term determined as the minimum amount of effort needed for the machine having the state to remain within the CIS; and update the control policy that improves a cost function of operation of the machine according to the determined reward.) g) controlling the technical system on a basis of an admissible control signal that is output by the safety module, using the control device configured on a basis of the trained machine learning module. 
(¶0065 teaches "jointly control the machine and update the control policy, wherein, for performing the joint control and update, wherein the iteratively performing step comprises controlling the machine using the control policy")

Kalabic doesn't explicitly teach, a) reading in safety information about an admissibility of a control action signal, which safety information is specific to a state of the technical system, by a safety module, wherein the safety information comprises state-specific rules, conditions and/or limit values for control action signals; (Kalabic in ¶0050 teaches state-specific safety constraints and control input constraints (limit values for control action signals). However, it doesn't teach reading in the constraint. Nishi in ¶0097 teaches, an autonomous control device 02 inputs a safety requirement 026. ¶0043 teaches, safety requirement 026 is a constraint commanding the requirement to be satisfied in the execution process (limit values for control action signals))

Nishi is an art in the area of interest as it teaches an operation verification device for monitoring operation safety of an autonomous system. A combination of Nishi with Kalabic would allow the combined system to read in safety constraints. Kalabic already teaches state-specific safety constraints and Nishi teaches receiving safety constraints. It would have been obvious to one of ordinary skill in the art to include in the safety system of Kalabic the ability to read in the constraints as taught by Nishi, since the claimed invention is merely a combination of old elements. In the combination each element merely would have performed the same function as it did separately, since Nishi's teaching of reading in the safety constraints would not affect the functions taught by Kalabic regarding modifying the command based on the safety constraint. One of ordinary skill in the art would have recognized that the results of the combination were predictable.

Regarding claim 3, Kalabic and Nishi teaches, The method as claimed in claim 1, wherein the safety module uses the safety information to examine whether the output signal is admissible as a control action signal, and in that the output signal is converted into the admissible control action signal on the basis of the examination result. (Kalabic in ¶0059 teaches, "The optimal command 505 is generated by the policy 503. The algorithm adds colored noise 507 to the optimal command to determine the test command and checks safety 509 by solving the (SO) problem. As a result of solving the (SO) problem, a safety margin 511 is obtained, which is set to the maximum penalty if a solution does not exist. If a solution does exist, it means that the test command is safe and it is passed as the actuator command 517; if a solution does not exist, it means that the test command is unsafe, so the algorithm generates a random, safe actuator command.")

Regarding claim 4, Kalabic and Nishi teaches, The method as claimed in claim 3, wherein if the output signal is admissible as a control action signal, the output signal is output by the safety module as an admissible control action signal, and otherwise the output signal is converted into the admissible control action signal. (Kalabic in ¶0048 teaches, The supervisor obtains the state 240 and attempts to solve the (SO) problem 241. If a solution exists, the supervisor sets the penalty c(t) to the solution of the problem 244 and passes the command received from the RL controller to the system 245.
If a solution does not exist, it means that constraints will very likely be violated; therefore the supervisor sets the penalty c(t) to the maximum penalty 242 and modifies the command received from the RL controller and passes the modified command to the system 243.) Regarding claim 5, Kalabic and Nishi teaches, The method as claimed in claim 3, wherein the safety information indicates or encodes an admissible, state-specific default control action signal, and in that the output signal is converted into the admissible default control action signal on the basis of the examination result. (Kalabic in ¶0044 teaches, “we compute the control invariant set (CIS), which is the set of all system states x(t) for which there exists a command u(t) that would return the state into the CIS according to the system dynamics and satisfy the set-membership constraint Sy(t)≤s.” ¶0058 teaches, “Before a command can be passed to the actuator, it needs to be checked by the supervisor 203 and modified to adhere to safety constraints if it is determined to violate safety constraints”) Regarding claim 6, Kalabic and Nishi teaches, The method as claimed in claim 3, wherein a volume of training data available for a state specified by the state signal is ascertained for this state, and in that the examination for admissibility of the output signal is performed on the basis of the ascertained volume. (Kalabic in ¶0065 teaches, “accepting data indicative of a state of the machine, computing a safety margin of a state and action pair satisfying the state constraints and a control policy mapping the state of the machine within a control invariant set (CIS) to a control input satisfying the control input constraints” and also teaches, “controlling the machine using the control policy to collect data including a sequence of control inputs generated using the control policy and a sequence of states of the machine corresponding to the sequence of control inputs determining a reward for a quality of the control policy on the state of the machine using a reward function of the sequence of control inputs and the sequence of states of the machine augmented with an adaptation term determined as the minimum amount of effort needed for the machine having the state to remain within the CIS”) Regarding claim 7, Kalabic and Nishi teaches, The method as claimed in claim 3, wherein a forecast error or modelling error of the machine learning module is ascertained for a state specified by the state signal, and in that the examination for admissibility of the output signal is performed on the basis of the ascertained forecast error or modelling error. (Kalabic in ¶0048 teaches, “The supervisor obtains the state 240 and attempts to solve the (SO) problem 241. If a solution exists, the supervisor sets the penalty c(t) to the solution of the problem 244 and passes the command received from the RL controller to the system 245. 
If a solution does not exist, it means that constraints will very likely be violated; therefore the supervisor sets the penalty c(t) to the maximum penalty 242 and modifies the command received from the RL controller and passes the modified command to the system 243.")

Regarding claim 8, Kalabic and Nishi teaches, The method as claimed in claim 1, wherein the safety information configures, indicates or encodes a transformation function, in that the output signal and the state signal are supplied to the transformation function, and in that the output signal is converted into the admissible control action signal by the transformation function on the basis of the state signal. (Kalabic in ¶0048 and Fig. 2B teaches a transformation function which modifies the command sent from the RL controller. It also teaches that the function solves the optimization problem (SO). ¶0041-¶0043 teaches the optimization problem is solved by taking x(t), which is a vector of system states, and u(t), which is obtained from the RL controller)

Regarding claim 9, Kalabic and Nishi teaches, The method as claimed in claim 1, wherein the technical system is controlled by the admissible control action signal, in that a behavior of the technical system controlled in this way is detected, and in that the performance is derived from the detected behavior. (Kalabic in ¶0059 teaches, "The safe actuator command is passed to the system 519 which returns a feedback signal 521 via measurement devices")

Regarding claim 11, Kalabic and Nishi teaches, A control device for controlling a technical system, configured to carry out a method as claimed in claim 1. (Kalabic in ¶0064 teaches a control device 600 for carrying out the operation of the method)

Regarding claim 12, Kalabic and Nishi teaches, A computer program product, comprising a computer readable hardware storage device having computer readable program code stored therein, said program code executable by a processor of a computer system to implement the method as claimed in claim 1. (Kalabic in ¶0064 teaches a control device 600 for carrying out the operation of the method comprising a storage device 630 which includes a reinforcement learning (RL) algorithm (program) 631, a supervisor algorithm 633, a reward function, cost function, and maximum penalty parameters for the RL and supervisor algorithms 634, inequalities describing the constraints 632 on the system 600, and inequalities describing the zero-effort set 635)

Regarding claim 13, Kalabic and Nishi teaches, A computer-readable storage medium having a computer program product as claimed in claim 12. (Kalabic in ¶0064 teaches a control device 600 for carrying out the operation of the method comprising a storage device 630 which includes a reinforcement learning (RL) algorithm (program) 631, a supervisor algorithm 633, a reward function, cost function, and maximum penalty parameters for the RL and supervisor algorithms 634, inequalities describing the constraints 632 on the system 600, and inequalities describing the zero-effort set 635)

Claim(s) 2 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kalabic et al. (US20210049501A1) hereinafter Kalabic modified in view of Nishi (US20180032079A1) hereinafter Nishi and further in view of Pietquin (US20200151562A1) hereinafter Pietquin.
Claim(s) 2 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kalabic et al. (US20210049501A1), hereinafter Kalabic, modified in view of Nishi (US20180032079A1), hereinafter Nishi, and further in view of Pietquin (US20200151562A1), hereinafter Pietquin.

Regarding claim 2, Kalabic and Nishi do not explicitly teach, The method as claimed in claim 1, wherein a backpropagation method is used to train the machine learning module, the backpropagation method involving a performance signal that quantifies the performance being backpropagated from an output of the safety module to an input of the safety module and a resulting performance signal furthermore being backpropagated from an output of the machine learning module to an input of the machine learning module. (Kalabic in ¶0056 teaches, "The RL algorithm we apply is the deep-deterministic policy gradient (DDPG) algorithm due to its ability to deal with continuous control systems. DDPG learns both a critic network to estimate the long-term value for a given policy and an actor network to sample the optimal action." However, it does not teach using a backpropagation method. Pietquin in Fig. 4 and ¶0075 teaches an example training process for an RL system: "The procedure may then employ a gradient update technique to backpropagate the policy gradient to train the actor neural network and a gradient of the critic loss to train the critic neural network.") Pietquin is art in the area of interest, as it teaches machine learning models (see ¶0005). A combination of Kalabic with Pietquin would allow Kalabic's RL model to be trained using a backpropagation method: Kalabic already teaches training the RL algorithm using a deep-deterministic policy gradient actor-critic network, and Pietquin teaches employing a gradient update technique to backpropagate the policy gradient to train the actor neural network and a gradient of the critic loss to train the critic neural network. The claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
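As a rough illustration of the claim 2 limitation and of the actor-critic training cited from Kalabic and Pietquin, the sketch below backpropagates a performance signal from the output of a differentiable stand-in for the safety module back to its input, and from there through the policy (machine learning) module. The network sizes, the tanh-based safety layer, and the use of PyTorch are assumptions chosen for illustration; they are not features of the cited references or of the application.

```python
# Illustrative sketch only (assumed architecture; not the cited references' implementation).
import torch
import torch.nn as nn

state_dim, action_dim = 4, 2
actor = nn.Sequential(nn.Linear(state_dim, 32), nn.ReLU(), nn.Linear(32, action_dim))
critic = nn.Sequential(nn.Linear(state_dim + action_dim, 32), nn.ReLU(), nn.Linear(32, 1))

def safety_layer(action, limit=1.0):
    # Differentiable stand-in for the safety module: squashes the raw output
    # signal into an admissible range so gradients can pass through it.
    return limit * torch.tanh(action)

opt = torch.optim.Adam(actor.parameters(), lr=1e-3)
state = torch.randn(8, state_dim)          # batch of present states
raw_action = actor(state)                  # output signal of the machine learning module
admissible = safety_layer(raw_action)      # admissible control action signal
performance = critic(torch.cat([state, admissible], dim=1)).mean()

opt.zero_grad()
(-performance).backward()  # performance signal backpropagated through the safety layer...
opt.step()                 # ...and on through the actor (machine learning module)
```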
Claim(s) 10 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kalabic et al. (US20210049501A1), hereinafter Kalabic, modified in view of Nishi (US20180032079A1), hereinafter Nishi, and further in view of Fujimoto (US20210192344A1), hereinafter Fujimoto.

Regarding claim 10, Kalabic and Nishi do not explicitly teach, The method as claimed in claim 1, wherein a behavior of the technical system controlled by the admissible control action signal is simulated, predicted and/or read in from a database, and in that the performance is derived from the simulated, predicted and/or read-in behavior. (Although Kalabic in ¶0059 teaches receiving a feedback signal, it does not explicitly teach receiving the feedback signal from a database. Fujimoto in ¶0040 teaches that storage unit 108 temporarily stores the feedback data output from the sensor unit 101. ¶0043 teaches, "The data input unit 213 acquires the feedback data stored in the storage unit 108, and executes preprocessing of data. The data input unit 213 performs various kinds of processes so that a machine learning algorithm readily processes the feature of driving input and the motion state of the vehicle, which are input as the feedback data. An example of the process includes a process of processing the feedback data to the maximum or minimum value of the feedback data within a predetermined period. By processing the feedback data in advance, the processing efficiency and the learning efficiency can be improved, as compared with a case in which the machine learning algorithm processes the raw feedback data directly.")

Fujimoto is art in the area of interest, as it teaches controlling a system using reinforcement learning. A combination of Fujimoto with Kalabic would allow the system to read in feedback data from a database and derive the performance from the feedback data. It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the teaching of Fujimoto with Kalabic. One would have been motivated to do so because doing so would improve the processing efficiency and the learning efficiency of the feedback data, as taught by Fujimoto in ¶0043.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Fulton (US20220261630A1) in Fig. 7 and ¶0110 teaches an example of a safety system 710. ¶0112 teaches, "The reinforcement learning model 705 gives a set of actions P(at) in the present state. The safety system 710 then receives the symbolic constraints ot as well as the action at selected by the reinforcement learning model 705 to update a dynamical safety constraint based on the symbolic state data and to filter the actions based on the dynamical safety constraint. For instance, safety system 700 may access whether at is a safe action, or whether a substitute action a′t is to be performed (e.g., the safety system may filter actions P(at) to execute action that are safe based on the symbolic constraints ot)."

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ISTIAQUE AHMED, whose telephone number is (571) 272-7087. The examiner can normally be reached Monday to Thursday, 10 AM to 6 PM, and alternate Fridays.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kenneth M. Lo, can be reached at (571) 272-9774. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ISTIAQUE AHMED/
Examiner, Art Unit 2116

/KENNETH M LO/
Supervisory Patent Examiner, Art Unit 2116

Prosecution Timeline

Feb 17, 2022
Application Filed
Jun 14, 2025
Non-Final Rejection — §101, §103
Sep 12, 2025
Response Filed
Sep 26, 2025
Final Rejection — §101, §103
Dec 05, 2025
Response after Non-Final Action
Jan 02, 2026
Request for Continued Examination
Jan 09, 2026
Response after Non-Final Action
Feb 03, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12595925
AIR CONDITIONING SYSTEM
2y 5m to grant Granted Apr 07, 2026
Patent 12541192
Operator Display Switching Preview
2y 5m to grant Granted Feb 03, 2026
Patent 12541804
POWER CONTROL DEVICE
2y 5m to grant Granted Feb 03, 2026
Patent 12535778
METHOD AND INTERNET OF THINGS (IoTs) SYSTEM FOR SMART GAS FIREFIGHTING LINKAGE BASED ON GOVERNMENT SAFETY SUPERVISION
2y 5m to grant Granted Jan 27, 2026
Patent 12480677
GENERATING DEVICE, SYSTEM, AND PROGRAM
2y 5m to grant Granted Nov 25, 2025
Based on this examiner's 5 most recent grants.

Prosecution Projections

3-4
Expected OA Rounds
69%
Grant Probability
86%
With Interview (+17.4%)
3y 0m
Median Time to Grant
High
PTA Risk
Based on 194 resolved cases by this examiner. Grant probability derived from career allow rate.
