DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is responsive to the original application filed on 9/1/2023. Acknowledgment is made with respect to the claim of priority to Japanese Application JP2022-202278, filed on 12/19/2022.

Specification

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed. The following title is suggested: REINFORCEMENT LEARNING METHOD AND DEVICE FOR CONTROLLING TARGET.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 4, 10, and 11 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 4 recites the limitation “wherein the first reward indicates whether selection of the first action is appropriate” (emphasis added). It is not clear what exactly constitutes an “appropriate” first action to trigger a reward, as there is no ascertainable standard for distinguishing an appropriate action from some other action. Please explain. For examination purposes, the limitation will be interpreted to mean that the reward indicates whether the first action was selected. Appropriate correction is required.

Claim 10 recites the limitation “a reward indicating whether selection of the first action is appropriate” (emphasis added). It is not clear what exactly constitutes an “appropriate” first action to trigger a reward, as there is no ascertainable standard for distinguishing an appropriate action from some other action. Please explain. For examination purposes, the limitation will be interpreted to mean that the reward indicates whether the first action was selected. Appropriate correction is required.

Claim 11 recites the limitation “a reward indicating whether selection of the first action is appropriate” (emphasis added). It is not clear what exactly constitutes an “appropriate” first action to trigger a reward, as there is no ascertainable standard for distinguishing an appropriate action from some other action. Please explain. For examination purposes, the limitation will be interpreted to mean that the reward indicates whether the first action was selected. Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-11 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The analysis of the claims will follow the 2019 Revised Patent Subject Matter Eligibility Guidance, 84 Fed. Reg. 50 (“2019 PEG”).

When considering subject matter eligibility under 35 U.S.C. 101, it must be determined whether the claim is directed to one of the four statutory categories of invention, i.e., process, machine, manufacture, or composition of matter (Step 1). If the claim does fall within one of the statutory categories, the second step in the analysis is to determine whether the claim is directed to a judicial exception (Step 2A). The Step 2A analysis is broken into two prongs. In the first prong (Step 2A, Prong 1), it is determined whether or not the claims recite a judicial exception (e.g., mathematical concepts, mental processes, certain methods of organizing human activity). If it is determined in Step 2A, Prong 1 that the claims recite a judicial exception, the analysis proceeds to the second prong (Step 2A, Prong 2), where it is determined whether or not the claims integrate the judicial exception into a practical application. If it is determined at Step 2A, Prong 2 that the claims do not integrate the judicial exception into a practical application, the analysis proceeds to determining whether the claim is a patent-eligible application of the exception (Step 2B). If an abstract idea is present in the claim, any element or combination of elements in the claim must be sufficient to ensure that the claim integrates the judicial exception into a practical application, or else amounts to significantly more than the abstract idea itself.

Claim 1

Step 1: The claim recites a method; therefore, it is directed to the statutory category of a process.

Step 2A Prong 1: The claim recites, inter alia:

a second step of calculating a probability distribution indicating a distribution of a probability density or a distribution of a probability at which actions are selected, based on the current observation data and a control parameter: Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mathematical concept of calculating a probability distribution, which is performed through mathematical computation.

a third step of selecting a first action among the actions based on the probability distribution: Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of selecting an action based on a distribution, which is an evaluation or observation that is practically capable of being performed in the human mind with the assistance of pen and paper.
a sixth step of calculating a probability density or a probability of the first action from the probability distribution: Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mathematical concept of calculating a probability or probability density, which is performed through mathematical computation.

a seventh step of correcting the first reward based on a probability density of the first action or a probability of the first action: Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of correcting a reward, which is an evaluation or observation that is practically capable of being performed in the human mind with the assistance of pen and paper.

an eighth step of updating the control parameter based on the current observation data, the first action, the next observation data, and the corrected first reward: Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of updating a parameter based on particular data, which is an evaluation or observation that is practically capable of being performed in the human mind with the assistance of pen and paper.

wherein the seventh step comprises correcting the first reward such that the first reward increases as the probability density or probability decreases: Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of correcting a reward, which is an evaluation or observation that is practically capable of being performed in the human mind with the assistance of pen and paper.

Step 2A Prong 2: The claim does not recite any additional limitations which integrate the abstract idea into a practical application. Specifically, the additional elements consist of “a first step of receiving current observation data”, “a fourth step of causing a control target to execute the first action”, and “a fifth step of receiving a first reward and next observation data observed after the control target has executed the first action”. The additional element of “a fourth step of causing a control target to execute the first action” amounts to reciting only the idea of a solution or outcome, i.e., the claim fails to recite details of how a solution to a problem is accomplished because it is not clear how the control target is broadly caused to execute an action. Thus, the additional elements amount to no more than a recitation of the words “apply it” (or an equivalent) or are no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)). The additional elements “a first step of receiving current observation data” and “a fifth step of receiving a first reward and next observation data observed after the control target has executed the first action” are insignificant extra-solution activities required for any use of the abstract ideas (see MPEP § 2106.05(g)). Thus, even when viewed individually and as an ordered combination, these additional elements do not integrate the abstract idea into a practical application, and the claim is thus directed to the abstract idea.

Step 2B: Finally, the claim taken as a whole does not contain an inventive concept which provides significantly more than the abstract idea. The additional element of “a fourth step of causing a control target to execute the first action” amounts to reciting only the idea of a solution or outcome, i.e., the claim fails to recite details of how a solution to a problem is accomplished because it is not clear how the control target is broadly caused to execute an action. Thus, the additional elements amount to no more than a recitation of the words “apply it” (or an equivalent) or are no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)). The additional elements “a first step of receiving current observation data” and “a fifth step of receiving a first reward and next observation data observed after the control target has executed the first action” are insignificant extra-solution activities required for any use of the abstract ideas (see MPEP § 2106.05(g)), and are well-understood, routine, conventional activities (see MPEP § 2106.05(d)(II)(i); “Receiving or transmitting data over a network”). Taken alone or in combination, the additional elements of the claim do not provide an inventive concept, and thus the claim is subject-matter ineligible.

Claim 2

Step 1: A process, as above.

Step 2A Prong 1: The claim recites, inter alia:

wherein the eighth step comprises updating the control parameter for each control period of the control target: Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of updating a parameter based on particular data, which is an evaluation or observation that is practically capable of being performed in the human mind with the assistance of pen and paper.

Step 2A Prong 2, Step 2B: The claim does not recite any additional elements that are sufficient to integrate the judicial exceptions into a practical application or amount to significantly more than the judicial exception. As such, the claim is ineligible.

Claim 3

Step 1: A process, as above.

Step 2A Prong 1: The claim recites the abstract ideas of the preceding claims from which it depends.

Step 2A Prong 2, Step 2B: The additional element “wherein the second step comprises inputting the current observation data to a neural network whose input/output characteristics vary according to the control parameter, the neural network outputting the probability distribution” is an insignificant extra-solution activity required for any use of the mental processes (see MPEP § 2106.05(g)), and is a well-understood, routine, conventional activity (see MPEP § 2106.05(d)(II)(i); “Receiving or transmitting data over a network”). Taken alone or in combination, the additional elements of the claim do not provide an inventive concept, integrate the abstract ideas into a practical application, or provide significantly more than the abstract ideas of the claim, and thus the claim is subject-matter ineligible.

Claim 4

Step 1: A process, as above.

Step 2A Prong 1: The claim recites the abstract ideas of the preceding claims from which it depends.

Step 2A Prong 2, Step 2B: The additional element “wherein the first reward indicates whether selection of the first action is appropriate” amounts to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (see MPEP § 2106.05(h)). Taken alone or in combination, the additional elements of the claim do not provide an inventive concept, integrate the abstract ideas into a practical application, or provide significantly more than the abstract ideas of the claim, and thus the claim is subject-matter ineligible.

Claim 5

Step 1: A process, as above.

Step 2A Prong 1: The claim recites, inter alia:

the seventh step comprises correcting the first reward by adding a second reward to the first reward: Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of correcting a reward, which is an evaluation or observation that is practically capable of being performed in the human mind with the assistance of pen and paper.

Step 2A Prong 2, Step 2B: The additional element “the second reward increases as the probability density or probability decreases” amounts to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (see MPEP § 2106.05(h)). Taken alone or in combination, the additional elements of the claim do not provide an inventive concept, integrate the abstract ideas into a practical application, or provide significantly more than the abstract ideas of the claim, and thus the claim is subject-matter ineligible.

Claim 6

Step 1: A process, as above.

Step 2A Prong 1: The claim recites, inter alia:

the seventh step comprises correcting the first award by multiplying the first reward by a factor: Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of correcting a reward, which is an evaluation or observation that is practically capable of being performed in the human mind with the assistance of pen and paper, or is a mathematical concept.

Step 2A Prong 2, Step 2B: The additional element “the factor increases as the probability density or probability decreases” amounts to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (see MPEP § 2106.05(h)). Taken alone or in combination, the additional elements of the claim do not provide an inventive concept, integrate the abstract ideas into a practical application, or provide significantly more than the abstract ideas of the claim, and thus the claim is subject-matter ineligible.

Claim 7

Claim 7 recites a learning device (Step 1: a machine) using a processor to perform the steps of claim 1, which by MPEP § 2106.05(f) (“apply it”) cannot integrate an abstract idea into a practical application or provide significantly more than the abstract idea by itself, and is thus rejected for the same reasons set forth in the rejection of claim 1.

Claim 8

Claim 8 recites a non-transitory computer readable storage medium (Step 1: a manufacture) using a computer to perform the steps of claim 1, which by MPEP § 2106.05(f) (“apply it”) cannot integrate an abstract idea into a practical application or provide significantly more than the abstract idea by itself, and is thus rejected for the same reasons set forth in the rejection of claim 1.

Claim 9

Step 1: The claim recites a method; therefore, it is directed to the statutory category of a process.
Step 2A Prong 1: The claim recites, inter alia:

a second step of calculating a probability distribution indicating a distribution of a probability density or a distribution of a probability at which actions are selected, based on the current observation data and a control parameter updated by the learning method of claim 1: Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mathematical concept of calculating a probability distribution, which is performed through mathematical computation.

a third step of selecting a first action among the actions based on the probability distribution: Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of selecting an action based on a distribution, which is an evaluation or observation that is practically capable of being performed in the human mind with the assistance of pen and paper.

Step 2A Prong 2: The claim does not recite any additional limitations which integrate the abstract idea into a practical application. Specifically, the additional elements consist of “a first step of receiving current observation data” and “a fourth step of causing a control target to execute the first action”. The additional element of “a fourth step of causing a control target to execute the first action” amounts to reciting only the idea of a solution or outcome, i.e., the claim fails to recite details of how a solution to a problem is accomplished because it is not clear how the control target is broadly caused to execute an action. Thus, the additional elements amount to no more than a recitation of the words “apply it” (or an equivalent) or are no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)). The additional element “a first step of receiving current observation data” is an insignificant extra-solution activity required for any use of the abstract ideas (see MPEP § 2106.05(g)). Thus, even when viewed individually and as an ordered combination, these additional elements do not integrate the abstract idea into a practical application, and the claim is thus directed to the abstract idea.

Step 2B: Finally, the claim taken as a whole does not contain an inventive concept which provides significantly more than the abstract idea. The additional element of “a fourth step of causing a control target to execute the first action” amounts to reciting only the idea of a solution or outcome, i.e., the claim fails to recite details of how a solution to a problem is accomplished because it is not clear how the control target is broadly caused to execute an action. Thus, the additional elements amount to no more than a recitation of the words “apply it” (or an equivalent) or are no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)). The additional element “a first step of receiving current observation data” is an insignificant extra-solution activity required for any use of the abstract ideas (see MPEP § 2106.05(g)), and is a well-understood, routine, conventional activity (see MPEP § 2106.05(d)(II)(i); “Receiving or transmitting data over a network”). Taken alone or in combination, the additional elements of the claim do not provide an inventive concept, and thus the claim is subject-matter ineligible.

Claim 10

Step 1: The claim recites a control device; therefore, it is directed to the statutory category of a machine.

Step 2A Prong 1: The claim recites, inter alia:

calculate a probability distribution indicating a distribution of a probability density or a distribution of a probability at which actions are selected, based on the current observation data and a control parameter: Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mathematical concept of calculating a probability distribution, which is performed through mathematical computation.

select a first action among the actions based on the probability distribution: Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of selecting an action based on a distribution, which is an evaluation or observation that is practically capable of being performed in the human mind with the assistance of pen and paper.

the control parameter is calculated based on the current observation data and the control parameter: Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mathematical concept of calculating a control parameter, which is performed through mathematical computation.

the control parameter is updated based on the probability distribution and a reward indicating whether selection of the first action is appropriate: Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of updating a control parameter, which is an evaluation or observation that is practically capable of being performed in the human mind with the assistance of pen and paper.

the reward is corrected such that the reward increases as the probability density or probability decreases: Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of correcting a reward, which is an evaluation or observation that is practically capable of being performed in the human mind with the assistance of pen and paper.

Step 2A Prong 2: The claim does not recite any additional limitations which integrate the abstract idea into a practical application. Specifically, the additional elements consist of “receive current observation data” and “cause a control target to execute the first action”. The additional element of “cause a control target to execute the first action” amounts to reciting only the idea of a solution or outcome, i.e., the claim fails to recite details of how a solution to a problem is accomplished because it is not clear how the control target is broadly caused to execute an action. Thus, the additional elements amount to no more than a recitation of the words “apply it” (or an equivalent) or are no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)). The additional element “receive current observation data” is an insignificant extra-solution activity required for any use of the abstract ideas (see MPEP § 2106.05(g)). Thus, even when viewed individually and as an ordered combination, these additional elements do not integrate the abstract idea into a practical application, and the claim is thus directed to the abstract idea.

Step 2B: Finally, the claim taken as a whole does not contain an inventive concept which provides significantly more than the abstract idea. The additional element of “cause a control target to execute the first action” amounts to reciting only the idea of a solution or outcome, i.e., the claim fails to recite details of how a solution to a problem is accomplished because it is not clear how the control target is broadly caused to execute an action. Thus, the additional elements amount to no more than a recitation of the words “apply it” (or an equivalent) or are no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)). The additional element “receive current observation data” is an insignificant extra-solution activity required for any use of the abstract ideas (see MPEP § 2106.05(g)), and is a well-understood, routine, conventional activity (see MPEP § 2106.05(d)(II)(i); “Receiving or transmitting data over a network”). Taken alone or in combination, the additional elements of the claim do not provide an inventive concept, and thus the claim is subject-matter ineligible.

Claim 11

Claim 11 recites a non-transitory computer readable storage medium (Step 1: a manufacture) using a computer to perform the steps of claim 10, which by MPEP § 2106.05(f) (“apply it”) cannot integrate an abstract idea into a practical application or provide significantly more than the abstract idea by itself, and is thus rejected for the same reasons set forth in the rejection of claim 10.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-11 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Haarnoja et al., “Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor”, Aug. 8, 2018, arXiv:1801.01290v2, pp. 1-14 (hereinafter “Haarnoja”).
Regarding claim 1, Haarnoja discloses [a] learning method comprising: (Abstract; “we propose soft actor-critic, an off-policy actor-critic deep RL algorithm based on the maximum entropy reinforcement learning framework”)

a first step of receiving current observation data; (Page 3, §3.1; “We consider an infinite-horizon Markov decision process (MDP), defined by the tuple (S, A, p, r), where the state space S and the action space A are continuous, and the unknown state transition probability p : S × S × A → [0, ∞) represents the probability density of the next state st+1 ∈ S given the current state st ∈ S and action at ∈ A”, wherein the current observation data is interpreted as the current state st ∈ S)

a second step of calculating a probability distribution indicating a distribution of a probability density or a distribution of a probability at which actions are selected, based on the current observation data and a control parameter; (Page 3, §3.1; “We will use ρπ(st) and ρπ(st, at) to denote the state and state-action marginals of the trajectory distribution induced by a policy π(at|st)”, which discloses calculating a probability distribution as an output of the policy π(at|st), and this distribution or control parameter is based on a distribution of actions based on the state/current observation data)

a third step of selecting a first action among the actions based on the probability distribution; (Page 3, §3.1; “We consider an infinite-horizon Markov decision process (MDP), defined by the tuple (S, A, p, r), where the state space S and the action space A are continuous, and the unknown state transition probability p : S × S × A → [0, ∞) represents the probability density of the next state st+1 ∈ S given the current state st ∈ S and action at ∈ A”, the selected action is “A”; and Algorithm 1, which discloses selecting an action at via “at ∼ πφ(at|st)”)

a fourth step of causing a control target to execute the first action; (Page 3, §3.1; “We consider an infinite-horizon Markov decision process (MDP), defined by the tuple (S, A, p, r), where the state space S and the action space A are continuous, and the unknown state transition probability p : S × S × A → [0, ∞) represents the probability density of the next state st+1 ∈ S given the current state st ∈ S and action at ∈ A”, the executed action is “A”)

a fifth step of receiving a first reward and next observation data observed after the control target has executed the first action; (Page 3, §3.1; “The environment emits a bounded reward r : S × A → [rmin, rmax] on each transition”; and Algorithm 1; “D ← D ∪ {(st, at, r(st, at), st+1)}”, where st+1 is the next observation data)

a sixth step of calculating a probability density or a probability of the first action from the probability distribution; (Page 3, §3.1; “We will use ρπ(st) and ρπ(st, at) to denote the state and state-action marginals of the trajectory distribution induced by a policy π(at|st).”; and Algorithm 1; “st+1 ∼ p(st+1|st, at)”)

a seventh step of correcting the first reward based on a probability density of the first action or a probability of the first action; and (Page 3, Equation 1; the equation corrects the reward “r(st, at)” based on a probability of the first action ρπ(st, at), and the reward is corrected using an entropy term “H”; and Page 11, §A; “However, we can define the objective that is optimized under a discount factor as … This objective corresponds to maximizing the discounted expected reward and entropy for future states originating from every state-action tuple (st, at) weighted by its probability ρπ under the current policy”)

an eighth step of updating the control parameter based on the current observation data, the first action, the next observation data, and the corrected first reward, (Page 5; “We can approximate the gradient of Equation 12 with … This unbiased gradient estimator extends the DDPG style policy gradients (Lillicrap et al., 2015) to any tractable stochastic policy”, wherein the policy parameters or control parameters “φ” are updated using the gradient in Equation 13, which incorporates the corrected reward; and Algorithm 1; “φ ← φ − λπ ∇̂φ Jπ(φ)”)

wherein the seventh step comprises correcting the first reward such that the first reward increases as the probability density or probability decreases (§4.1, Equations 2 and 3; the signal includes the term “− log π(at|st)”, which forces the reward to increase as the probability decreases).

Regarding claim 2, the rejection of claim 1 is incorporated, and Haarnoja discloses wherein the eighth step comprises updating the control parameter for each control period of the control target (Algorithm 1; “φ ← φ − λπ ∇̂φ Jπ(φ)”, which is performed for each gradient step or control period).

Regarding claim 3, the rejection of claim 1 is incorporated, and Haarnoja discloses wherein the second step comprises inputting the current observation data to a neural network whose input/output characteristics vary according to the control parameter, the neural network outputting the probability distribution (Page 5; “However, in our case, the target density is the Q-function, which is represented by a neural network and can be differentiated, and it is thus convenient to apply the reparameterization trick instead, resulting in a lower variance estimator. To that end, we reparameterize the policy using a neural network transformation … The method alternates between collecting experience from the environment with the current policy and updating the function approximators using the stochastic gradients from batches sampled from a replay buffer”).

Regarding claim 4, the rejection of claim 1 is incorporated, and Haarnoja discloses wherein the first reward indicates whether selection of the first action is appropriate (Page 3, §3.2; “Standard RL maximizes the expected sum of rewards … We will consider a more general maximum entropy objective … We can extend the objective to infinite horizon problems by introducing a discount factor γ to ensure that the sum of expected rewards and entropies is finite”).
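Examiner's note: for clarity of the mappings above, Haarnoja's maximum entropy objective (Page 3, Equation 1) may be restated in LaTeX notation as follows; this restates the cited equation and adds no new ground of rejection:

  J(\pi) = \sum_{t=0}^{T} \mathbb{E}_{(s_t, a_t) \sim \rho_\pi}\left[ r(s_t, a_t) + \alpha \, \mathcal{H}\big(\pi(\cdot \mid s_t)\big) \right],
  \qquad \mathcal{H}\big(\pi(\cdot \mid s_t)\big) = -\mathbb{E}_{a_t \sim \pi}\big[ \log \pi(a_t \mid s_t) \big].

Because −log π(at|st) grows as π(at|st) shrinks, the entropy-augmented per-step reward r(st, at) − α log π(at|st) increases as the probability (density) of the selected action decreases, which is the property relied upon for the seventh step of claim 1 and for claims 5 and 6 below.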
Regarding claim 5, the rejection of claim 1 is incorporated, and Haarnoja discloses the seventh step comprises correcting the first reward by adding a second reward to the first reward; and the second reward increases as the probability density or probability decreases (Page 3, Equation 1; the equation corrects the reward “r(st, at)” based on a probability of the first action ρπ(st, at), and the reward is corrected using an entropy term “H” which is added to the reward; and §4.1, Equations 2 and 3; the signal includes the term “− log π(at|st)”, which forces the reward to increase as the probability decreases).

Regarding claim 6, the rejection of claim 1 is incorporated, and Haarnoja discloses the seventh step comprises correcting the first award by multiplying the first reward by a factor; and the factor increases as the probability density or probability decreases (Page 3, Equation 1; the equation corrects the reward “r(st, at)” based on a probability of the first action ρπ(st, at), and the reward is corrected using an entropy term “H” scaled by a temperature parameter α; and §4.1, Equations 2 and 3; the signal includes the term “− log π(at|st)”, which forces the reward to increase as the probability decreases; and Page 3, §3.1; “The temperature parameter α determines the relative importance of the entropy term against the reward, and thus controls the stochasticity of the optimal policy. The maximum entropy objective differs from the standard maximum expected reward objective used in conventional reinforcement learning, though the conventional objective can be recovered in the limit as α → 0. For the rest of this paper, we will omit writing the temperature explicitly, as it can always be subsumed into the reward by scaling it by α⁻¹”).

Regarding claim 7, it is a device claim corresponding to the steps of claim 1, and is rejected for the same reasons as claim 1.

Regarding claim 8, it is a non-transitory computer-readable storage medium claim corresponding to the steps of claim 1, and is rejected for the same reasons as claim 1.

Regarding claim 9, Haarnoja discloses [a] control method comprising: (Abstract; “we propose soft actor-critic, an off-policy actor-critic deep RL algorithm based on the maximum entropy reinforcement learning framework”)

a first step of receiving current observation data; (Page 3, §3.1; “We consider an infinite-horizon Markov decision process (MDP), defined by the tuple (S, A, p, r), where the state space S and the action space A are continuous, and the unknown state transition probability p : S × S × A → [0, ∞) represents the probability density of the next state st+1 ∈ S given the current state st ∈ S and action at ∈ A”, wherein the current observation data is interpreted as the current state st ∈ S)

a second step of calculating a probability distribution indicating a distribution of a probability density or a distribution of a probability at which actions are selected, based on the current observation data and a control parameter updated by the learning method of claim 1; (Page 3, §3.1; “We will use ρπ(st) and ρπ(st, at) to denote the state and state-action marginals of the trajectory distribution induced by a policy π(at|st)”, which discloses calculating a probability distribution as an output of the policy π(at|st), and this distribution or control parameter is based on a distribution of actions based on the state/current observation data; and Page 5; “We can approximate the gradient of Equation 12 with … This unbiased gradient estimator extends the DDPG style policy gradients (Lillicrap et al., 2015) to any tractable stochastic policy”, wherein the policy parameters or control parameters “φ” are updated using the gradient in Equation 13, which incorporates the corrected reward; and Algorithm 1; “φ ← φ − λπ ∇̂φ Jπ(φ)”)

a third step of selecting a first action among the actions based on the probability distribution; (Page 3, §3.1; “We consider an infinite-horizon Markov decision process (MDP), defined by the tuple (S, A, p, r), where the state space S and the action space A are continuous, and the unknown state transition probability p : S × S × A → [0, ∞) represents the probability density of the next state st+1 ∈ S given the current state st ∈ S and action at ∈ A”, the selected action is “A”; and Algorithm 1, which discloses selecting an action at via “at ∼ πφ(at|st)”)

a fourth step of causing a control target to execute the first action (Page 3, §3.1; “We consider an infinite-horizon Markov decision process (MDP), defined by the tuple (S, A, p, r), where the state space S and the action space A are continuous, and the unknown state transition probability p : S × S × A → [0, ∞) represents the probability density of the next state st+1 ∈ S given the current state st ∈ S and action at ∈ A”, the executed action is “A”).

Regarding claim 10, Haarnoja discloses [a] device comprising a processor configured to: (Abstract; “we propose soft actor-critic, an off-policy actor-critic deep RL algorithm based on the maximum entropy reinforcement learning framework”; and §5)

receive current observation data; (Page 3, §3.1; “We consider an infinite-horizon Markov decision process (MDP), defined by the tuple (S, A, p, r), where the state space S and the action space A are continuous, and the unknown state transition probability p : S × S × A → [0, ∞) represents the probability density of the next state st+1 ∈ S given the current state st ∈ S and action at ∈ A”, wherein the current observation data is interpreted as the current state st ∈ S)

calculate a probability distribution indicating a distribution of a probability density or a distribution of a probability at which actions are selected, based on the current observation data and a control parameter; (Page 3, §3.1; “We will use ρπ(st) and ρπ(st, at) to denote the state and state-action marginals of the trajectory distribution induced by a policy π(at|st)”, which discloses calculating a probability distribution as an output of the policy π(at|st), and this distribution or control parameter is based on a distribution of actions based on the state/current observation data; and Page 5; “We can approximate the gradient of Equation 12 with … This unbiased gradient estimator extends the DDPG style policy gradients (Lillicrap et al., 2015) to any tractable stochastic policy”, wherein the policy parameters or control parameters “φ” are updated using the gradient in Equation 13, which incorporates the corrected reward; and Algorithm 1; “φ ← φ − λπ ∇̂φ Jπ(φ)”)

select a first action among the actions based on the probability distribution; (Page 3, §3.1; “We consider an infinite-horizon Markov decision process (MDP), defined by the tuple (S, A, p, r), where the state space S and the action space A are continuous, and the unknown state transition probability p : S × S × A → [0, ∞) represents the probability density of the next state st+1 ∈ S given the current state st ∈ S and action at ∈ A”, the selected action is “A”; and Algorithm 1, which discloses selecting an action at via “at ∼ πφ(at|st)”)

cause a control target to execute the first action (Page 3, §3.1; “We consider an infinite-horizon Markov decision process (MDP), defined by the tuple (S, A, p, r), where the state space S and the action space A are continuous, and the unknown state transition probability p : S × S × A → [0, ∞) represents the probability density of the next state st+1 ∈ S given the current state st ∈ S and action at ∈ A”, the executed action is “A”), wherein:

the control parameter is calculated based on the current observation data and the control parameter, (Page 5; “We can approximate the gradient of Equation 12 with … This unbiased gradient estimator extends the DDPG style policy gradients (Lillicrap et al., 2015) to any tractable stochastic policy”, wherein the policy parameters or control parameters “φ” are updated using the gradient in Equation 13, which incorporates the corrected reward; and Algorithm 1; “φ ← φ − λπ ∇̂φ Jπ(φ)”)

the control parameter is updated based on the probability distribution and a reward indicating whether selection of the first action is appropriate, and (Page 5; “We can approximate the gradient of Equation 12 with … This unbiased gradient estimator extends the DDPG style policy gradients (Lillicrap et al., 2015) to any tractable stochastic policy”, wherein the policy parameters or control parameters “φ” are updated using the gradient in Equation 13, which incorporates the corrected reward; and Algorithm 1; “φ ← φ − λπ ∇̂φ Jπ(φ)”; and Page 3, §3.2; “Standard RL maximizes the expected sum of rewards … We will consider a more general maximum entropy objective … We can extend the objective to infinite horizon problems by introducing a discount factor γ to ensure that the sum of expected rewards and entropies is finite”)

the reward is corrected such that the reward increases as the probability density or probability decreases (Page 3, Equation 1; the equation corrects the reward “r(st, at)” based on a probability of the first action ρπ(st, at), and the reward is corrected using an entropy term “H”; and Page 11, §A; “However, we can define the objective that is optimized under a discount factor as … This objective corresponds to maximizing the discounted expected reward and entropy for future states originating from every state-action tuple (st, at) weighted by its probability ρπ under the current policy”; and §4.1, Equations 2 and 3; the signal includes the term “− log π(at|st)”, which forces the reward to increase as the probability decreases).
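Examiner's note: the following is a minimal illustrative sketch, supplied by the examiner for clarity only. It is not code from Haarnoja, and all names and values are hypothetical. It illustrates the additive, entropy-style reward correction relied upon in the mappings above, showing that the corrected reward increases as the probability (or probability density) of the selected action decreases:

import math

def corrected_reward(reward: float, action_prob: float, alpha: float = 0.2) -> float:
    # Entropy-style bonus: -alpha * log pi(a|s). The bonus grows as the
    # probability (or probability density) of the selected action shrinks,
    # so the corrected reward increases as that probability decreases.
    return reward + alpha * (-math.log(action_prob))

# Same base reward; the lower-probability action yields the larger corrected reward.
print(corrected_reward(1.0, 0.01))  # ~1.92
print(corrected_reward(1.0, 0.90))  # ~1.02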
Regarding claim 11, it is a non-transitory computer-readable storage medium claim corresponding to the steps of claim 10, and is rejected for the same reasons as claim 10.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

Zhu et al. (WO 2023225941 A1).
Wang et al. (US 12190223 B2).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Brent Hoover, whose telephone number is (303) 297-4403. The examiner can normally be reached Monday - Friday, 9-5 MST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Abdullah Kawsar, can be reached at 571-270-3169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/BRENT JOHNSTON HOOVER/
Primary Examiner, Art Unit 2127