Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/14/2026 has been entered.
Examiner’s Note
The Examiner encourages Applicant to schedule an interview to discuss issues related to, for example, the rejections noted below under 35 U.S.C. § 101, to move the application forward toward allowance.
Applicant is strongly requested to identify, in the Remarks, the supporting paragraph(s) for each limitation of any amended or new claim(s), so that the Examiner can make a clear and definite claim interpretation.
Priority
Acknowledgment is made of Applicant's priority claim for the present application, filed on 11/24/2020.
Allowable Subject Matter
Claims 1-20 would be allowable if rewritten or amended to overcome the rejection(s) under 35 U.S.C. 112(b) and 35 U.S.C. 101, and claim objections set forth in this Office action.
The following is a statement of reasons for the indication of allowable subject matter: Claims 1-20 are considered allowable since, when reading the claims in light of the specification, none of the references of record, either alone or in combination, fairly discloses or suggests the combination of limitations specified in the independent claims, including at least:
From independent claims 1, 9, 17:
receiving an action of a plurality of actions from Reinforcement Learning (RL) and a current state of an environment;
evaluating, using a Logical Neural Network (LNN) structure, an action safetyness logical inference based on the current state of the environment and a current action candidate from an agent;
outputting an upper bound and a lower bound on the action from RL, responsive to an evaluation of the action safetyness logical inference;
calculating a contradiction value for the action from RL by using the upper bound and the lower bound, wherein the contradiction value indicates a level of contradiction for each of a plurality of logic rules implemented by the LNN structure;
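For illustration only, one plausible reading of the bound-based contradiction computation recited in the limitations above can be sketched as follows. All names are hypothetical and this is not Applicant's disclosed implementation; the thresholding convention follows claim 3, which deems safetyness values below the threshold safe (claim 1 recites the opposite, "exceeding a safetyness threshold"):

```python
# Hypothetical sketch of the recited computation; not Applicant's implementation.

def contradiction(lower: float, upper: float) -> float:
    """A bound pair is contradictory when the lower bound exceeds the
    upper bound; the excess measures the level of contradiction
    (cf. claim 4: "a higher lower bound value than an upper bound value")."""
    return max(0.0, lower - upper)

def is_safe(lower: float, upper: float, threshold: float) -> bool:
    """Thresholding per claim 3's convention: safetyness values below
    the threshold are deemed safe."""
    return contradiction(lower, upper) < threshold
```

Under this reading, consistent bounds (lower ≤ upper) yield a contradiction value of zero and are deemed safe for any positive threshold.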
The closest prior art of record, Melnik et al. (Workflow scheduling using Neural Networks and Reinforcement Learning), discloses a workflow scheduling algorithm, NNS (neural network scheduling), that is based on principles of artificial intelligence and reinforcement learning. The system analyzes the workflow scheduling problem's context, builds an encoder to provide a vectored form of this context, and estimates tasks' execution times based on parameters for the workflow's general structure, the tasks' characteristics, the nodes, and performance models.
Riegel et al. (Logical Neural Networks) discloses Logical Neural Networks (LNNs), a neuro-symbolic framework designed to simultaneously provide key properties of both neural nets (NNs) (learning) and symbolic logic (knowledge and reasoning) – toward direct interpretability, utilization of rich domain knowledge realistically, and the general problem-solving ability of a full theorem prover.
Zhu et al. (Exploring Logic Optimizations with Reinforcement Learning and Graph Convolutional Network) discloses a Markov decision process (MDP) formulation of the logic synthesis problem and a reinforcement learning (RL) algorithm incorporating a graph convolutional network to explore the solution search space. The system tries to push the synthesis results, using the same action space as the state-of-the-art heuristic, to better results in terms of the number of nodes and logic depth.
Gauthier (Deep Reinforcement Learning for Synthesizing Functions in Higher-Order Logic) teaches a deep reinforcement learning framework based on self-supervised learning within a proof assistant. A close interaction between the machine learning modules and the proof assistant library is achieved by the choice of tree neural networks (TNNs) as machine learning models and the internal use of the proof assistant terms to represent tree structures of TNNs.
However, none of the references of record, either alone or in combination, fairly discloses or suggests the combination of limitations specified in the independent claims, including at least:
the limitations recited above from independent claims 1, 9, and 17,
as in the claims, for the purpose of developing safe reinforcement learning that receives an action and a current state of an environment, with a Logical Neural Network (LNN) structure for evaluating an action safetyness logical inference. The system selectively performs an action responsive to an evaluation of the action indicating that the action is safe to perform based on a contradiction value.
In addition, the dependent claims are also considered allowable since they depend from the independent claims above, which are allowable.
Response to Arguments
Applicant's arguments filed on 01/14/2026 have been fully considered but they are not persuasive.
In Remarks, pp. 11-15, Applicant contends:
Present claim 1 includes patent-eligible subject matter because the claims are not abstract. Like in Enfish, where the claims were not abstract ideas because the claims as a whole were directed towards improving a computer database structure, present claim 1 recites and incorporates computer hardware elements into the alleged abstract idea and improves a computer by developing a reinforcement-learning technology that includes the application of logical conditions.
…
which makes the hardware processing units use less computational resources by avoiding "unsafe" actions through a pre-evaluation of the safetyness of the given action, instead of performing the action and then determining whether it is safe or not.
…
The incorporation of “one or more hardware processing units as a [LNN]" amended into the claim to evaluate the safetyness of the actions in the hardware transforms the claim to be outside the capabilities of a human.
…
present claim 1 is amended to recite, inter alia, "the plurality of neurons and connective edges of the LNN structure in a 1-to-1 correspondence with a the system of logical formulae and running a method to perform an action safetyness logical inference" (emphasis added) (Advisory Action, p. 3). This provides "specific[ity] and detail[s]" adequate to overcome the rejection pursuant to 35 U.S.C. § 101 by further providing technical details to be more than an alleged abstract idea.
…
avoiding unsafe actions can allow for faster RL training, which is an improvement to a technology (MPEP 2106.05(a); 2106.05(a)(xiii)). This improvement is reflected in present claim 1 at least as "selectively performing the action from RL responsive to an evaluation of the action from RL indicating that the action from RL is safe to perform based on the contradiction value exceeding a safetyness threshold."
…
Therefore, “it [is] clear how the example of par 87 for usefulness can provide a technical improvement of achieving faster convergence in RL and/or less computational resources for safetyness in the claim.”
Examiner’s response:
The examiner understands the applicant’s assertion.
However, it appears that each processing step is just applying the abstract idea to a general field of endeavor with additional elements. In addition, improvements to technology or technical field are not necessarily reflected in the claims. Thus, the claim does not integrate the judicial exception into a practical application, and the claim does not amount to significantly more than the judicial exception.
The examiner understands the applicant’s assertion “present claim 1 recites and incorporates computer hardware elements to the alleged abstract idea and improves a computer by developing a reinforcement-learning technology that includes the application of logical conditions” and “The incorporation of “one or more hardware processing units as a [LNN]" amended into the claim to evaluate the safetyness of the actions in the hardware transforms the claim to be outside the capabilities of a human.”
However, as rejected under Claim Rejections - 35 U.S.C. § 101, in the 101 eligibility analysis, computer hardware elements may be considered additional elements (e.g., MPEP 2106.05(f)). The applicant may need to show why incorporating computer hardware elements helps provide improvements sufficient to overcome the current rejections. As discussed before, it appears that the LNN is implemented based on generic hardware processing units. It appears that the claim and/or the specification does not show a hardware LNN. Note that incorporating computer hardware elements does not always provide improvements.
The examiner understands the applicant’s assertion “which makes the hardware processing units use less computational resources by avoiding "unsafe" actions through a pre-evaluation of the safetyness of the given action instead of performing the action and then determining whether it is safe or not”.
However, it is still not clear how/why avoiding unsafe actions provides improvements. Practically, avoiding unsafe actions and running only safe actions may help make the hardware processing units use less computational resources. However, as discussed before, for example, pars 86-87 illustrate an example based on useful/useless actions (not safe/unsafe actions). Thus, it is not clear how the claimed invention can be applied to safe/unsafe actions based on, e.g., the example in pars 86-87.
The examiner understands the applicant's assertion "present claim 1 is amended to recite, inter alia, "the plurality of neurons and connective edges of the LNN structure in a 1-to-1 correspondence with a the system of logical formulae and running a method to perform an action safetyness logical inference" (emphasis added) (Advisory Action, p. 3). This provides "specific[ity] and detail[s]" adequate to overcome the rejection pursuant to 35 U.S.C. § 101 by further providing technical details to be more than an alleged abstract idea."
However, as rejected under Claim Rejections - 35 U.S.C. § 101, the limitation still may be rejected based on an abstract idea and additional elements, since "the plurality of neurons and connective edges of the LNN structure [are] in a 1-to-1 correspondence with a the system of logical formulae" (e.g., #810 in fig 8) may be considered a particular type/source of model/data, and "running a method to perform an action safetyness logical inference" may be considered a mental process.
The examiner understands the applicant's assertion "avoiding unsafe actions can allow for faster RL training which is an improvement to a technology (MPEP 2106.05(a); 2106.05(a)(xiii)). This improvement is reflected in present claim 1 at least as 'selectively performing the action from RL responsive to an evaluation of the action from RL indicating that the action from RL is safe to perform based on the contradiction value exceeding a safetyness threshold'" and "Therefore, "the specification uses them relatedly, therefore ("LNNs-Shielding distinguishes whether the given action is safe (useful) or unsafe (useless)" Present Specification ¶ [0089] (emphasis added)). it [is] clear how the example of par 87 for usefulness can provide a technical improvement of achieving faster convergence in RL and/or less computational resources for safetyness in the claim."
As discussed before, avoiding unsafe actions may allow for faster RL training since the training may be just based on a smaller amount of data (e.g., only with safe actions). However, it is still not clear why that is an improvement to a technology. In addition, even assuming, arguendo, the example of par 87 for usefulness may provide a technical improvement of achieving faster convergence in RL and/or less computational resources, still it is not clear how it can be applied to the safetyness in the claim. The Applicant mentioned “the specification uses them relatedly” but it is not clear how/why ‘safe/unsafe’ is related to ‘useful/useless’ in a technical manner to provide improvements for the invention.
As discussed before, in par 87, “such action restrictions” just mean not visiting the same room again based on the true/false flag in the LNN. In other words, it appears that par 87 does not deal with safetyness, but it just provides a simple example of not visiting the same room, maybe for usefulness. Thus, it is not clear how the example of par 87 for usefulness can provide a technical improvement of achieving faster convergence in RL for safetyness in the claim. Also, it is not clear how the “evaluating logical conditions …” and “comparing …” steps are used for the “selectively …” step.
In addition, pars 86-87 of the specification just describe how to avoid visiting the same location based on the logical conditions with the help of the LNN (e.g., not visiting the west room since it has already been in the west room). However, as rejected under 35 U.S.C. § 101, it appears that the LNN is used just as a tool. Thus, it is not clear how the LNN operations provide, e.g., improvements in computer technology or improvements to other technical fields.
Moreover, it is not clear whether the specification provides a specific improvement based on the inventive concept. Even where the applicant considers "faster convergence in RL" to be an improvement, the applicant still may need to amend the claims to show, in more detail, how the RL and the LNN cooperate to provide "faster convergence in RL", other than just as a tool under the 101 eligibility evaluation. The limitations do not clearly show, e.g., improvements in computer technology or improvements to other technical fields. It does not appear that the specification and/or the claims clearly show how the inventive concept of the claims enables improvements and how they are tied together. The applicant may need to amend the claims to show how the claim language and the improvements are tied together.
For at least these reasons, Applicant's arguments are not convincing.
The Examiner encourages Applicant to schedule an interview to discuss issues related to, for example, the rejections noted below under 35 U.S.C. § 101.
Claim Objections
Claim(s) 1-16 is/are objected to because of the following informalities.
Claim(s) 1 is/are objected to because of the following informalities: it appears that “a the system” (line 5) needs to read “a system” or something else. Appropriate correction is required. In addition, claim(s) 9 is/are objected to for the same reason.
Claim(s) 1 and 9 each recite limitations that raise the issues set forth above, and their dependent claims are objected to at least based on their direct and/or indirect dependency from the claims listed above. Appropriate explanation and/or amendment is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim(s) 17-20 is/are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim(s) 17 recite(s) the limitation “a the system” (line 7). There is insufficient antecedent basis for this limitation in the claim. It is not clear what it is referring to. In other words, it is not clear if it indicates “A computer processing system” (line 1) or something else. For the purposes of examination, “a system” is used.
Claim 17 recites limitations that raise issues of indefiniteness as set forth above, and its dependent claims are rejected at least based on their direct and/or indirect dependency from the claim listed above. Appropriate explanation and/or amendment is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-6, 9-14, 17-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Regarding claim 1
The claim is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1: The claim recites a method; therefore, it falls into the statutory category of processes.
Step 2A Prong 1:
The limitations of
“… comprising:
…, … and running a method to perform an action safetyness logical inference, …;
…;
evaluating, …, the action safetyness logical inference based on the current state of the environment and a current action candidate from an agent;
outputting an upper bound and a lower bound on the action …, responsive to an evaluation of the action safetyness logical inference;
calculating a contradiction value for the action … by using the upper bound and the lower bound, wherein the contradiction value indicates a level of contradiction for each of a plurality of logic rules …;
evaluating logical conditions of the action … with respect to safetyness based on the contradiction value;
comparing the action … and the current state with at least one previous state for a contradiction in the logical conditions defining the action; and
selectively performing the action … responsive to an evaluation of the action … indicating that the action … is safe to perform based on the contradiction value exceeding a safetyness threshold”, as drafted, are a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. That is, nothing in the claim element precludes the step from practically being performed in the mind. For example, the limitations in the context of this claim encompass the user mentally thinking with a physical aid (e.g., pencil and paper).
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
Step 2A Prong 2: This judicial exception is not integrated into a practical application.
The claim recites additional elements that are mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea. See MPEP 2106.05(f). In particular, the claim recites additional element(s) (“A computer-implemented”, “configuring one or more hardware processing units as a Logical Neural Network (LNN) structure having a plurality of neurons and connective edges”, “using the LNN structure”, “implemented by the LNN structure”) – using a device and a model to process data. The device and the model in each step are recited at a high level of generality (i.e., as a generic computer performing a generic computer function of processing data) such that they amount to no more than mere instructions to apply the exception using a generic computer component. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
In particular, the claim recites an additional element (“for safe reinforcement learning”, “the plurality of neurons and connective edges of the LNN structure in a 1-to-1 correspondence with a the system of logical formulae … wherein at least one neuron of the plurality of neurons relates to a corresponding logical connective in each formula of a system of logical formulae”, “from Reinforcement Learning (RL)”, “from RL”). This is a recitation of a particular type or source of model/data to be used in performing the abstract idea. Limiting the abstract idea to a particular type or source of model/data is an attempt to limit the abstract idea to a particular field of use or technological environment, which does not integrate the abstract idea into a practical application. See MPEP 2106.05(h).
In particular, the claim recites an additional element(s) (“receiving an action of a plurality of actions from Reinforcement Learning (RL) and a current state of an environment”) – the act of receiving/obtaining data. The claim is adding an insignificant extra-solution activity to the judicial exception – see MPEP 2106.05(g). The act of receiving/obtaining data is recited at a high level of generality (i.e., as a generic act of receiving data) such that it amounts to no more than a mere act to apply the exception using a generic act of receiving/obtaining. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
As discussed above, with respect to integration of the abstract idea into a practical application, the additional elements of using a generic computer component to perform each step amount to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible. See MPEP 2106.05(f).
As discussed above, the claim also recites a particular type or source of model/data to be used in performing the abstract idea. Limiting the abstract idea to a particular type or source of model/data is an attempt to limit the abstract idea to a particular field of use or technological environment, which does not amount to significantly more than the abstract idea. See MPEP 2106.05(h).
As discussed above, the claim recites the additional element (“receiving an action of a plurality of actions from Reinforcement Learning (RL) and a current state of an environment”) at a high-level of generality and is adding an insignificant extra-solution activity – see MPEP 2106.05(g). However, the addition of insignificant extra-solution activity does not amount to an inventive concept, particularly when the activity is well-understood, routine, and conventional. See MPEP 2106.05(d)(II) – “Receiving or transmitting data over a network” or “Storing and retrieving information in memory”. Accordingly, this additional element does not provide an inventive concept and significantly more than the abstract idea. Thus, the claim is not patent eligible.
Regarding claim 2
The claim is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1: The claim recites a method; therefore, it falls into the statutory category of processes.
Step 2A Prong 1:
The limitations of
“wherein the contradiction value is used as a safetyness value for the action …”, as drafted, are a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. That is, nothing in the claim element precludes the step from practically being performed in the mind. For example, the limitations in the context of this claim encompass the user mentally thinking with a physical aid (e.g., pencil and paper).
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
Step 2A Prong 2: This judicial exception is not integrated into a practical application.
In particular, the claim recites an additional element (“from RL”). This is a recitation of a particular type or source of model/data to be used in performing the abstract idea. Limiting the abstract idea to a particular type or source of model/data is an attempt to limit the abstract idea to a particular field of use or technological environment, which does not integrate the abstract idea into a practical application. See MPEP 2106.05(h).
Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
This is a recitation of a particular type or source of model/data to be used in performing the abstract idea. Limiting the abstract idea to a particular type or source of model/data is an attempt to limit the abstract idea to a particular field of use or technological environment, which does not amount to significantly more than the abstract idea. See MPEP 2106.05(h).
Regarding claim 3
The claim is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1: The claim recites a method; therefore, it falls into the statutory category of processes.
Step 2A Prong 1:
The limitations of
“wherein a safetyness value for the action … is compared to a threshold such that safetyness values below the threshold are deemed safe and safetyness values equal to or greater than the threshold are deemed unsafe”, as drafted, are a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. That is, nothing in the claim element precludes the step from practically being performed in the mind. For example, the limitations in the context of this claim encompass the user mentally thinking with a physical aid (e.g., pencil and paper).
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
Step 2A Prong 2: This judicial exception is not integrated into a practical application.
In particular, the claim recites an additional element (“from RL”). This is a recitation of a particular type or source of model/data to be used in performing the abstract idea. Limiting the abstract idea to a particular type or source of model/data is an attempt to limit the abstract idea to a particular field of use or technological environment, which does not integrate the abstract idea into a practical application. See MPEP 2106.05(h).
Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
This is a recitation of a particular type or source of model/data to be used in performing the abstract idea. Limiting the abstract idea to a particular type or source of model/data is an attempt to limit the abstract idea to a particular field of use or technological environment, which does not amount to significantly more than the abstract idea. See MPEP 2106.05(h).
Regarding claim 4
The claim is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1: The claim recites a method; therefore, it falls into the statutory category of processes.
Step 2A Prong 1:
The limitations of
“wherein a contradiction comprises having a higher lower bound value than an upper bound value”, as drafted, are a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. That is, nothing in the claim element precludes the step from practically being performed in the mind. For example, the limitations in the context of this claim encompass the user mentally thinking with a physical aid (e.g., pencil and paper).
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. In particular, the claim does not recite additional elements. Thus, the claim is directed to an abstract idea.
Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Thus, the claim is not patent eligible.
Regarding claim 5
The claim is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1: The claim recites a method; therefore, it falls into the statutory category of processes.
Step 2A Prong 1:
The limitations of
“aiding a bound interpretability using a threshold of truth, α, wherein 1/2 < α < 1, such that a continuous truth value is considered True if the continuous truth value is greater than α, and False if the continuous truth value is less than 1 - α”, as drafted, are a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. That is, nothing in the claim element precludes the step from practically being performed in the mind. For example, the limitations in the context of this claim encompass the user mentally thinking with a physical aid (e.g., pencil and paper).
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. In particular, the claim does not recite additional elements. Thus, the claim is directed to an abstract idea.
Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Thus, the claim is not patent eligible.
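For illustration only, the threshold-of-truth convention recited in claim 5 can be sketched as follows (the function name and the sample value of α are hypothetical, not taken from the claims or the specification):

```python
def interpret(truth: float, alpha: float = 0.7) -> str:
    """Claim 5's convention: with 1/2 < alpha < 1, a continuous truth
    value greater than alpha is considered True, a value less than
    1 - alpha is considered False, and anything in between is
    indeterminate."""
    assert 0.5 < alpha < 1.0
    if truth > alpha:
        return "True"
    if truth < 1.0 - alpha:
        return "False"
    return "Indeterminate"
```

Note that because 1/2 < α, the True and False regions never overlap, leaving a nonempty indeterminate band between 1 - α and α.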
Regarding claim 6
The claim is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1: The claim recites a method; therefore, it falls into the statutory category of processes.
Step 2A Prong 1:
The limitations of
“wherein a retry signal is issued and a new action is subjected to the method responsive to an evaluation of the action … indicating that the action … is unsafe to perform based on the contradiction value meeting or being below a safetyness threshold”, as drafted, are a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. That is, nothing in the claim element precludes the step from practically being performed in the mind. For example, the limitations in the context of this claim encompass the user mentally thinking with a physical aid (e.g., pencil and paper).
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
Step 2A Prong 2: This judicial exception is not integrated into a practical application.
In particular, the claim recites an additional element (“from RL”). This is a recitation of a particular type or source of model/data to be used in performing the abstract idea. Limiting the abstract idea to a particular type or source of model/data is an attempt to limit the abstract idea to a particular field of use or technological environment, which does not integrate the abstract idea into a practical application. See MPEP 2106.05(h).
Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
The additional element (“from RL”) is a recitation of a particular type or source of model/data to be used in performing the abstract idea. Limiting the abstract idea to a particular type or source of model/data is an attempt to limit the abstract idea to a particular field of use or technological environment, which does not amount to significantly more than the abstract idea. See MPEP 2106.05(h).
Regarding claim 9
The claim recites “A computer program product for safe reinforcement learning, the computer program product comprising a non-transitory computer readable storage medium having program instructions embodied therewith, the program instructions executable by a computer to cause the computer to perform a method comprising:” to perform precisely the method of Claim 1. Because performance of an abstract idea on generic computer components (see MPEP 2106.05(f)) and limitation to a particular field of use or technological environment (see MPEP 2106.05(h)) can neither integrate the abstract idea into a practical application nor provide significantly more than the abstract idea itself, the claim is rejected for the reasons set forth in the rejection of Claim 1.
Regarding claim 10
The claim is rejected for the reasons set forth in the rejection of Claim 2 under 35 U.S.C. 101, mutatis mutandis, as reciting an abstract idea without integrating the judicial exception into a practical application or providing significantly more than the judicial exception.
Regarding claim 11
The claim is rejected for the reasons set forth in the rejection of Claim 3 under 35 U.S.C. 101, mutatis mutandis, as reciting an abstract idea without integrating the judicial exception into a practical application or providing significantly more than the judicial exception.
Regarding claim 12
The claim is rejected for the reasons set forth in the rejection of Claim 4 under 35 U.S.C. 101, mutatis mutandis, as reciting an abstract idea without integrating the judicial exception into a practical application or providing significantly more than the judicial exception.
Regarding claim 13
The claim is rejected for the reasons set forth in the rejection of Claim 5 under 35 U.S.C. 101, mutatis mutandis, as reciting an abstract idea without integrating the judicial exception into a practical application or providing significantly more than the judicial exception.
Regarding claim 14
The claim is rejected for the reasons set forth in the rejection of Claim 6 under 35 U.S.C. 101, mutatis mutandis, as reciting an abstract idea without integrating the judicial exception into a practical application or providing significantly more than the judicial exception.
Regarding claim 17
The claim recites “A computer processing system for safe reinforcement learning comprising: a memory device for storing program code; one or more hardware processing units for running the program code to: and” to perform precisely the method of Claim 1. Because performance of an abstract idea on generic computer components (see MPEP 2106.05(f)), storing and retrieving information in memory (see MPEP 2106.05(g) on Insignificant Extra-Solution Activity and MPEP 2106.05(d) on Well-Understood, Routine, Conventional Activity), and limitation to a particular field of use or technological environment (see MPEP 2106.05(h)) can neither integrate the abstract idea into a practical application nor provide significantly more than the abstract idea itself, the claim is rejected for the reasons set forth in the rejection of Claim 1.
Regarding claim 18
The claim is rejected for the reasons set forth in the rejection of Claim 2 under 35 U.S.C. 101, mutatis mutandis, as reciting an abstract idea without integrating the judicial exception into a practical application or providing significantly more than the judicial exception.
Regarding claim 19
The claim is rejected for the reasons set forth in the rejection of Claim 3 under 35 U.S.C. 101, mutatis mutandis, as reciting an abstract idea without integrating the judicial exception into a practical application or providing significantly more than the judicial exception.
Regarding claim 20
The claim is rejected for the reasons set forth in the rejection of Claim 4 under 35 U.S.C. 101, mutatis mutandis, as reciting an abstract idea without integrating the judicial exception into a practical application or providing significantly more than the judicial exception.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SEHWAN KIM whose telephone number is (571)270-7409. The examiner can normally be reached Mon - Thu 7:00 AM - 5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michael J Huntley can be reached on (303) 297-4307. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SEHWAN KIM/Examiner, Art Unit 2129 2/7/2026