Prosecution Insights
Last updated: April 19, 2026
Application No. 17/910,475

Learning Data Generation Device, Learning Device, Control Device, Learning Data Generation Method, Learning Method, Control Method, Learning Data Generation Program, Learning Program, and Control Program

Final Rejection: §101, §103, §112

Filed: Sep 09, 2022
Examiner: KNIGHT, PAUL M
Art Unit: 2148
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Omron Corporation
OA Round: 2 (Final)

Grant Probability: 62% (Moderate)
OA Rounds: 3-4
To Grant: 3y 1m
With Interview: 79%

Examiner Intelligence

Career Allow Rate: 62% (169 granted / 272 resolved; +7.1% vs TC avg)
Interview Lift: strong, +17.0% (resolved cases with interview vs without)
Avg Prosecution: 3y 1m (typical timeline)
Currently Pending: 24
Total Applications: 296 (career history, across all art units)

Statute-Specific Performance

§101: 9.5% (-30.5% vs TC avg)
§103: 45.5% (+5.5% vs TC avg)
§102: 6.0% (-34.0% vs TC avg)
§112: 35.2% (-4.8% vs TC avg)

Based on career data from 272 resolved cases; Tech Center averages are estimates.
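The headline examiner numbers are internally consistent and can be checked directly. A minimal sketch (the inputs are the figures shown above; the Tech Center average is back-computed from the stated "+7.1% vs TC avg" delta, so it is an implied value, not reported data):

```python
# Verify the dashboard arithmetic using the figures shown above.
granted, resolved = 169, 272

allow_rate = granted / resolved * 100  # career allowance rate, in percent
tc_avg = allow_rate - 7.1              # implied Tech Center average

print(f"Career allow rate: {allow_rate:.1f}%")  # rounds to the 62% shown
print(f"Implied TC average: {tc_avg:.1f}%")
```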

Office Action

§101 §103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Style

In this action unitalicized bold is used for claim language, while italicized bold is used for emphasis.

Specification

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed. The amended title filed 12/19/2025 is noted. Attempting to use language that describes some aspect of what Applicant believes to be their invention may be more helpful than merely repeating “tree” and “assimilation” next to various statutory categories.

Election by Original Presentation

All claims will be examined despite inclusion of an inventive concept which lacks unity of invention with the originally filed claims. However, Examiner notes that Applicant’s asserted inventive concept was not in the originally filed claims. Throughout the Remarks directed to the rejection under 35 U.S.C. § 101, Applicant repeatedly asserts a technical improvement based on “setting the range of the explanatory variable.” See, e.g., Rem. 9. Further, the amended claims add new claims 11-20 directed to various ways of setting this range. Nothing in the original claim set limits the range of explanatory variables. The Office generally does not permit such a shift. Since the originally filed claims failed to contribute anything to the state of the art, the original claims lack unity of invention with the amended claims, including with new claims 11-20. Therefore, this set of amendments meets the requirements for denial of entry based on election by original presentation. “When claims are presented which the examiner finds are drawn to an invention other than the one elected, he or she should treat the claims as outlined in MPEP § 821.03.” MPEP § 819.
“Claims added by amendment following action by the examiner, as explained in MPEP § 818.02(a), and drawn to an invention other than the one previously claimed, should be treated as indicated in 37 CFR 1.145.” MPEP § 821.03. “If, after an office action on an application, the applicant presents claims directed to an invention distinct from and independent of the invention previously claimed, the applicant will be required to restrict the claims to the invention previously claimed if the amendment is entered[.]” 37 C.F.R. § 1.145.

The claims filed 12/19/2025 are entered and fully examined with the understanding that claims modifying the range of explanatory variables will be examined. However, in the future, claims drawn to yet another inventive concept lacking unity of invention with the originally filed claims will not be examined. So, for instance, if the claims are amended to recite using regression to find artificial data points that most closely fit the data, and using those artificial data points as input to the teacher model for creating a data set that could be used to train the (student) tree, that would be directed to a different inventive concept which would be unlikely to have unity of invention with the original claims. This is pointed out so that Applicant has complete information when deciding which type of continuation, if any, to file (e.g., RCE/regular Continuation/CIP).

Applicant Reply

“The claims may be amended by canceling particular claims, by presenting new claims, or by rewriting particular claims as indicated in 37 CFR 1.121(c). The requirements of 37 CFR 1.111(b) must be complied with by pointing out the specific distinctions believed to render the claims patentable over the references in presenting arguments in support of new claims and amendments. . . . The prompt development of a clear issue requires that the replies of the applicant meet the objections to and rejections of the claims.
Applicant should also specifically point out the support for any amendments made to the disclosure. See MPEP § 2163.06. . . . An amendment which does not comply with the provisions of 37 CFR 1.121(b), (c), (d), and (h) may be held not fully responsive. See MPEP § 714.” MPEP § 714.02.

Generic statements or listing of numerous paragraphs do not “specifically point out the support for” claim amendments. “With respect to newly added or amended claims, applicant should show support in the original disclosure for the new or amended claims. See, e.g., Hyatt v. Dudas, 492 F.3d 1365, 1370, n.4, 83 USPQ2d 1373, 1376, n.4 (Fed. Cir. 2007) (citing MPEP § 2163.04 which provides that a ‘simple statement such as ‘applicant has not pointed out where the new (or amended) claim is supported, nor does there appear to be a written description of the claim limitation ‘___’ in the application as filed’ may be sufficient where the claim is a new or amended claim, the support for the limitation is not apparent, and applicant has not pointed out where the limitation is supported.’)” MPEP § 2163(II)(A).

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) and the claims as a whole, considering all claim elements both individually and in combination, do not amount to significantly more.

Step 1: Is the claim to a process, machine, manufacture, or composition of matter? All claims are found to be directed to one of the four statutory categories, unless otherwise indicated in this action.
Step 2A Prongs One and Two (Alice Step 1):

According to Office guidance, claims that read on math do not recite an abstract idea at step 2A1 when the claims fail to refer to the math by name.1 The MPEP also equates “recit[ing] a judicial exception” with “stat[ing]” or “describ[ing]” an abstract idea in the claims.2 Consistent with this guidance, an abstract idea may be first recited in a dependent claim even though the independent claims read on that abstract idea. Claim limitations which recite any of the abstract idea groupings set forth in the manual are found to be directed, as a whole, to an abstract idea unless otherwise indicated.3 The claims do not recite additional elements that integrate the abstract ideas into a practical application.4 To confer patent eligibility to an otherwise abstract idea, claims may recite a specific means or method of solving a specific problem in a technological field.5

Independent Claims

Claim 1. A learning data generating device comprising: a processor; and a non-transitory memory storing instructions, which when executed by the processor, cause the processor to function as: (This merely recites an instruction to apply the abstract ideas recited below using generic computer components.) an acquiring section acquiring a plurality of pieces of observed data observed from an object of control, each piece of observed data including an explanatory variable and an objective variable; (This reads on data input, which is merely extra-solution activity. Implementation on generic computing components is merely an instruction to apply an exception on a computer. Limiting to “observed data from an object of control” merely limits the field of use to a particular data environment. The judicial exceptions are discussed below.)
a teacher learning section that, on the basis of the plurality of pieces of observed data acquired by the acquiring section, trains a model for outputting the objective variable from the explanatory variable, and generates a learned teacher model; (Generating a generic model for outputting data based on acquired data reads on a mental process.) and a learning data generating section that, by setting a range of the explanatory variable, selecting, from the observed data, explanatory variables within the range as predetermined explanatory variables, (Selecting data within a given range reads on a mental process.) and inputting the predetermined explanatory variables to the teacher model generated by the teacher learning section, (Inputting data into a model reads on mathematical operations. Note that “inputting” in machine learning refers to matrix/vector operations, not inputting data into a device.) acquires objective variables as predetermined objective variables with respect to the predetermined explanatory variables, thereby generating a plurality of learning data, each piece of which includes one of the predetermined explanatory variables and a corresponding one of the predetermined objective variables, (This reads on the model mapping inputs to outputs. This reads on math. Note that this also reads on a mental process.) for training a decision tree model. (This reads on manipulation of data that can be performed mentally.)

For rejections of claims 5 and 8, see rejection of claim 1. Claim 8 additionally recites the use of software to implement the operations of claim 1. This is merely an instruction to apply the judicial exception on a computer.

Step 2B (Alice Step 2):

The rejected claims do not recite additional elements that amount to significantly more than the judicial exception.
All additional limitations that do not integrate the claimed judicial exception into a practical application also fail to amount to significantly more, for the reasons given at step 2A2. All limitations found to be extra-solution activity at step 2A2 are found to be WURC (well-understood, routine, and conventional), including limitations that read on mere data gathering, data storage, and data input/output/transfer.

The independent claims substantially recite “an acquiring section acquiring a plurality of observed data[.]” This is WURC because the language reads on generic data input. This finding is based on cases which have recognized that generic input-output operations, repetitive processing operations, and storage operations are WURC.6 Other aspects of generic computing have also been found to be WURC.7 Further, the description itself may provide support for a finding that claim elements are WURC. The analysis under § 112(a) as to whether a claim element is “so well-known that it need not be described in detail in the patent specification” is the same as the analysis as to whether the claim element is widely prevalent or in common use.8 Similarly, generic descriptions in the Specification of claimed components and features have been found to support a conclusion that the claimed components were conventional.9

Improvements to the relevant technology may support a finding that the claims include a patent eligible inventive concept. But some mechanism that results in any asserted improvements must be recited in the claim, and the Specification must provide sufficient details such that one of ordinary skill in the art would recognize the claimed invention as providing the improvement.10 This applies to the dependent claims below.

Dependent claims:

Claim 2. The learning data generating device of Claim 1, wherein the object of control is a production device. (This merely limits to a field of use.)

Claim 3.
A learning device comprising: a processor; and a memory storing instructions for causing the processor to function as: (See rejection of claim 1.) a learning data acquiring section that acquires the learning data generated by the learning data generating device of Claim 1; and a learning section that trains the decision tree model on the basis of the learning data acquired by the learning data acquiring section. (This reads on implementation of a mental process on generic computer components.)

Claim 4. (original): A control device comprising: a processor; and a memory storing instructions for causing the processor to function as: (See rejection of claim 1.) an information acquiring section that acquires the explanatory variable from the object of control; (This reads on data input, which is merely extra-solution activity.) and a control section that, by inputting the explanatory variable acquired by the information acquiring section to the decision tree model learned by the learning device of Claim 3, acquires an objective variable corresponding to the explanatory variable, and carries out control corresponding to the objective variable on the object of control. (This reads on making a decision using a decision tree (i.e., which objective variable corresponds to an explanatory variable). Making a decision is a mental process. Implementation on a generic computer is a mere instruction to apply the exception on a computer. Carrying out a generic “control” is a mere instruction to apply the abstract mental process.)

For rejections of claims 6 and 9, see rejection of claim 3. For rejections of claims 7 and 10, see rejection of claim 4.

Claim 11. The learning data generating device of Claim 1, wherein the range is set between an upper limit and a lower limit of the explanatory variable in the plurality of pieces of observed data. (Limiting the range of data to a model reads on a mental process.)

Claim 12.
The learning data generating device of Claim 1, wherein the range is set in a region in which a density of the plurality of pieces of observed data is equal to or greater than a threshold. (Limiting the range of data to a model reads on a mental process.)

Claim 13. The learning data generating device of Claim 1, wherein a total number of pieces of learning data generated by the learning data generating section is less than a total number of pieces of observed data acquired by the acquiring section. (Limiting the range of data to a model reads on a mental process.)

Claim 14. The learning data generating device of Claim 1, wherein an upper limit and a lower limit of the range of the explanatory variable are respectively set lower than a maximum value and greater than a minimum value among values of the explanatory variable in the plurality of pieces of observed data. (Limiting the range of data to a model reads on a mental process.)

For rejections of claims 15-17 and 18-20, see rejections of claims 12-14, respectively. All dependent claims are rejected as containing the material of the claims from which they depend.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a): (a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C.
112: The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for pre-AIA the inventor(s), at the time the application was filed, had possession of the claimed invention.

Generally: separately listed claim elements are construed as distinct components, all claim terms must be given weight, and there is presumed to be a difference in meaning and scope when different words or phrases are used in separate claims. Since different terms or phrases are presumed to differ in scope and each term or phrase in the claims must find clear support in the description, a description of a single element in the Specification may fail to support multiple claim terms. “[C]laims must ‘conform to the invention as set forth in the remainder of the specification and the terms and phrases used in the claims must find clear support or antecedent basis in the description so that the meaning of the terms in the claims may be ascertainable by reference to the description.’ 37 C.F.R. § 1.75(d)(1).” Phillips v. AWH Corp., 415 F.3d 1303, 1316 (Fed. Cir. 2005) (as cited in MPEP § 2111).
Further, a lack of detail in the Specification describing how a claimed result is achieved can support a finding that the Applicant was not in possession of the claimed invention at the time of filing, notwithstanding verbatim support. “It is not enough that one skilled in the art could write a program to achieve the claimed function because the specification must explain how the inventor intends to achieve the claimed function to satisfy the written description requirement. See, e.g., Vasudevan Software, Inc. v. MicroStrategy, Inc., 782 F.3d 671, 681-683, 114 USPQ2d 1349, 1356, 1357 (Fed. Cir. 2015) (reversing and remanding the district court’s grant of summary judgment of invalidity for lack of adequate written description where there were genuine issues of material fact regarding “whether the specification show[ed] possession by the inventor of how accessing disparate databases is achieved”). If the specification does not provide a disclosure of the computer and algorithm in sufficient detail to demonstrate to one of ordinary skill in the art that the inventor possessed the invention, a rejection under 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph, for lack of written description must be made.” MPEP § 2161.01(I).

“An original claim may lack written description support when (1) the claim defines the invention in functional language specifying a desired result but the disclosure fails to sufficiently identify how the function is performed or the result is achieved[.] See Ariad Pharms., Inc. v. Eli Lilly & Co., 598 F.3d 1336, 1349-50 (Fed. Cir. 2010) (en banc). The written description requirement is not necessarily met when the claim language appears in ipsis verbis in the specification. ‘Even if a claim is supported by the specification, the language of the specification, to the extent possible, must describe the claimed invention so that one skilled in the art can recognize what is claimed.
The appearance of mere indistinct words in a specification or a claim, even an original claim, does not necessarily satisfy that requirement.’” MPEP § 2163.03.

All independent claims substantially recite “a learning data generating section that, by setting a range of the explanatory variable, selecting, from the observed data, explanatory variables within the range as predetermined explanatory variables, and inputting the predetermined explanatory variables to the learned teacher model generated by the teacher learning section, acquires objective variables as predetermined objective variables with respect to the predetermined explanatory variables thereby generating a plurality of pieces of learning data, each piece of which includes one of the predetermined explanatory variables and a corresponding one of the predetermined objective variables, for training a decision tree model.” The Specification does not teach “generating a plurality of pieces of learning data” “by setting a range of the explanatory variable.” Specifically, the Specification only describes setting the range of explanatory variables to include the entire data set. Selection of a range including all of the data points does not contribute to generating pieces of learning data. Figure 7 of the Specification shows the range of variables being bounded by vertical dashed lines. See Fig. 7 item 4-2. But all of the original data points shown in 4-1 of Figure 7 already fall within the dashed lines, so the setting of the range does not eliminate data points. Compare Fig. 7 item 4-1 with Fig. 7 item 4-2, each showing the same 13 circles in the same locations. Figure 7 indicates the “number of learning data is decided upon” in 4-4 of Fig. 7, selecting 3 points from the original set of 13 points, but the only selection pattern appears to be selection of points near line M.
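The examiner's point about Fig. 7 can be stated concretely: a range whose bounds sit outside every observed value selects the entire data set, so the range-setting step removes nothing. In the sketch below, the 13 values and the bound offsets are invented for illustration; only the structure of the argument comes from the Office Action:

```python
# Hypothetical stand-ins for the 13 observed explanatory values shown in Fig. 7.
observed_x = [0.5, 0.9, 1.4, 1.8, 2.2, 2.6, 3.1, 3.5, 3.9, 4.3, 4.8, 5.2, 5.6]

# Dashed-line bounds placed outside all of the points, as in Fig. 7 item 4-2.
lo, hi = min(observed_x) - 0.5, max(observed_x) + 0.5

# Range-based selection keeps every point: the filter is a no-op here.
selected = [x for x in observed_x if lo <= x <= hi]
assert selected == observed_x  # nothing is eliminated by setting this range
```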
Importantly, nothing in the figure or accompanying description shows elimination of data points resulting from setting the range of the explanatory variable. The drawing shows all of the original data points to remain within the selected range between the dotted lines of 4-2 in Figure 7. Setting the range to include all the data points does not operate to “thereby generat[e] a plurality of pieces of learning data[.]” Therefore, the claimed relationship between “setting a range of the explanatory variable” and “generating a plurality of pieces of learning data” is not supported by the original specification. See also Spec. ¶¶45-50, describing selection of three points within the boundary, but omitting any description of the boundary location operating to eliminate or otherwise change the data points that are ultimately selected.

All independent claims substantially recite “an acquiring section acquiring a plurality of pieces of observed data observed from an object of control, each piece of observed data including an explanatory variable and objective variable[.]” The Specification explains explanatory variables as observed data, such as revolutions of a motor detected by sensors. Spec. ¶37. This is consistent with “observed data from an object of control.” But the Specification explains “objective variables” as “predicted values of states and the like of the production device 5 that are inferred with respect to the inputted explanatory variables.” Spec. ¶37. This description is inconsistent with the objective variables being included within “observed data observed from an object of control,” as claimed. Since the description in the Specification is inconsistent with the claim language with respect to the meaning of “objective variables,” the specification does not provide adequate support for the claim language. See also Spec. ¶48 (emphasis added) (“Next, in 4-4 of Fig.
7, the three explanatory variables X1, X2, X3 are inputted to the teacher model M, and objective variables Y1, Y2, Y3 are outputted from the teacher model M.”)

All dependent claims are rejected as containing the limitations of the claims from which they depend.

The following is a quotation of 35 U.S.C. 112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph: The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Generally: separately listed claim elements are construed as distinct components, all claim terms must be given weight, there is presumed to be a difference in meaning and scope when different words or phrases are used in separate claims, and repeated and consistent descriptions in the specification indicate the proper scope of a claimed term. “[C]laims must ‘conform to the invention as set forth in the remainder of the specification and the terms and phrases used in the claims must find clear support or antecedent basis in the description so that the meaning of the terms in the claims may be ascertainable by reference to the description.’ 37 C.F.R. § 1.75(d)(1).” Phillips v. AWH Corp., 415 F.3d 1303, 1316 (Fed. Cir. 2005) (as cited in MPEP § 2111).
Therefore, use of two different terms in the claims that both rely on the description of a single structure in the Specification may render at least one term indefinite because there is no way to determine which term should be construed in view of the description of the single structure.

All independent claims substantially recite “an acquiring section acquiring a plurality of pieces of observed data observed from an object of control, each piece of observed data including an explanatory variable and objective variable[.]” The Specification explains explanatory variables as observed data, such as revolutions of a motor detected by sensors. Spec. ¶37. This is consistent with “observed data from an object of control.” But the Specification explains “objective variables” as “predicted values of states and the like of the production device 5 that are inferred with respect to the inputted explanatory variables.” Spec. ¶37. This description is inconsistent with the objective variables being “observed data observed from an object of control,” as claimed. Since the description in the Specification is inconsistent with the claim language with respect to the meaning of “objective variables,” there are two inconsistent ways to construe the claims. This renders the claims indefinite. See also Spec. ¶48 (emphasis added) (“Next, in 4-4 of Fig.
7, the three explanatory variables X1, X2, X3 are inputted to the teacher model M, and objective variables Y1, Y2, Y3 are outputted from the teacher model M.”)

All independent claims substantially recite “a learning data generating section that, by setting a range of the explanatory variable, selecting, from the observed data, explanatory variables within the range as predetermined explanatory variables, and inputting the predetermined explanatory variables to the learned teacher model generated by the teacher learning section, acquires objective variables as predetermined objective variables with respect to the predetermined explanatory variables[.]” The language “acquires objective variables as predetermined objective variables” could be interpreted as receiving objective variables from sensors (i.e., “acquiring . . . from an object of control” as recited earlier in the claim) or could be interpreted as outputting objective variables, consistent with the claim language “trains a model for outputting the objective variable from the explanatory variable.” Further, the language “acquires objective variables as predetermined objective variables” could be read as acquiring predetermined objective variables as an output in response to predetermined explanatory variables being input to the teacher model or merely renaming “objective variables as predetermined objective variables.” In any case, the language “acquires objective variables as predetermined objective variables” claims an indeterminate relationship between the two claim elements that cannot be reconciled because the plain meaning of the claim language indicates that two different claim elements are acquired, while the inventive concept, as best understood, is more consistent with a teacher model outputting a predetermined objective variable in response to input of a predetermined explanatory variable.
All independent claims substantially recite “generating a plurality of pieces of learning data, each piece of which includes one of the predetermined explanatory variables and a corresponding one of the predetermined objective variables[.]” As best understood, the claims limit the range of the “explanatory variables” input to the teacher model. The inputs within the selected range are called “predetermined explanatory variables” in the claims. When the predetermined explanatory variables are input to the teacher model, the corresponding subset of outputs are called “predetermined objective variables.” The “explanatory variables” are included within “pieces of observed data” and by implication, the “predetermined explanatory variables” are a subset within the observed data, as claimed. This is inconsistent with reciting “a learning data generating section” “generating . . . predetermined explanatory variables.” Either the explanatory variables are a subset of “pieces of observed data” selected from the observed data (“selecting, from the observed data, explanatory variables within the range as predetermined explanatory variables”) OR they are “generated by the teacher learning section,” but not both. While the claims could also be interpreted to equate “generating” with “selecting,” so construing the terms would be incompatible with the plain meaning of each term. They are different terms with different meanings. Further, this is merely one more possible way of interpreting the claims, which is similarly unreasonable to the first two possible interpretations. As there is no clear way of interpreting the claim language which is both internally consistent (i.e., where the usage within the claims and spec are consistent with themselves and with one another) and consistent with the plain meanings of the terms used in the claims (i.e., where the meaning of generate and select are consistent with common English usage), the claim language is indefinite.
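Under the examiner's "as best understood" reading, the claimed flow is: acquire observed (explanatory, objective) pairs, train a teacher model on them, keep only explanatory values inside a set range, and label those values with the teacher's outputs to build training data for a student decision tree. The sketch below illustrates that reading only; the least-squares "teacher," all function names, and the toy numbers are invented, not from the application:

```python
def train_teacher(xs, ys):
    """Least-squares line y = a*x + b as a stand-in teacher model."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return lambda x: a * x + b

def generate_learning_data(observed, lo, hi, teacher):
    """Select explanatory variables within [lo, hi], then label them with the teacher."""
    selected = [x for x, _ in observed if lo <= x <= hi]
    return [(x, teacher(x)) for x in selected]

# Observed (explanatory, objective) pairs from a hypothetical object of control.
observed = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 8.0), (5, 9.8)]
teacher = train_teacher([x for x, _ in observed], [y for _, y in observed])

# Only x = 2, 3, 4 survive the range filter; the objective values come from
# the teacher model, not from the observed data.
learning_data = generate_learning_data(observed, lo=2, hi=4, teacher=teacher)
```

The resulting `learning_data` pairs would then train the student decision tree; this is where the claims' "generated" versus "selected" tension sits, since the explanatory values are selected from observations while their labels are generated by the teacher.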
All dependent claims are rejected as including the material of the claim from which they depend.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 5-6, 8-9, and 11-20 are rejected under 35 U.S.C. 103 as being unpatentable over Tanimoto (US 2020/0005183) and Ochiai (US 2017/0315960) and Garcia (A review on outlier/anomaly detection in time series data, Feb 2020).

Claim 1. (original): A learning data generating device, comprising: (See Tanimoto Figs. 15 and 18.) a processor; and a non-transitory memory storing instructions, which when executed by the processor, cause the processor to function as: (“Each of the processes in evaluation methods and training methods which are shown below is performed through the execution of a program by a processor. [0039] The program mentioned above can be stored using various types of non-transitory computer readable media and supplied to a computer.
The non-transitory computer readable media include various types of tangible storage media.” Tanimoto ¶¶38-39.) an acquiring section acquiring a plurality of pieces of observed data observed from an object of control, each piece of observed data including an explanatory variable and an objective variable; (The previously cited art does not expressly teach data observed from an object of control. Ochiai teaches “The acquisition unit 102 accepts factor analysis data and stores, among the factor analysis data, time-series data of an objective variable representing a result of an event as objective-variable time-series data in a storage unit (not illustrated). In addition, the acquisition unit 102 stores time-series data of an explanatory variable representing a factor of an event as explanatory-variable time-series data in the storage unit.” Ochiai ¶39. “[0040] The explanatory variable may use, for example, data representing an operating condition of a system, such as an adjustment value, a temperature, a pressure, a gas flow rate, and a voltage of an apparatus. The objective variable may use, for example, data representing an evaluation index, such as quality or yield of a product. The time-series data indicates data arranged in order of time at a predetermined time interval. . . . [0041] Note that the factor analysis data may be measurement data measured by a measuring instrument, and may be log data generated by an arbitrary system.” Ochiai ¶¶40-41. “The learning method may be any learning method available for classification. For example, L1 regularized logistic regression, a decision tree, non-linear regression, or similar approaches thereof may be used.” Ochiai ¶47. 
Modifying the invention taught in Tanimoto based on the teaching of Ochiai would have been obvious to one of ordinary skill in the art before the effective filing date because generating a model using the method of Tanimoto based on the types of data used for training in Ochiai would result in a smaller model that could improve industrial process controls.) a teacher learning section that, on the basis of the plurality of pieces of observed data acquired by the acquiring section, trains a model for outputting the objective variable from the explanatory variable, and generates a learned teacher model; (This reads on the model M1. See e.g. Figs. 15 and 18. “The present training method also corresponds to supervised learning using newly obtained data instead of the training data used in the construction of the teacher model. Note that, in the construction of the teacher model, semi-supervised learning may also be implemented. This is obvious from FIGS. 15 and 16.” Tanimoto ¶212.) and a learning data generating section that, by setting a range of the explanatory variable, selecting, from the observed data, explanatory variables within the range as predetermined explanatory variables, (This limitation reads on the closest supporting description in the Specification. The Specification shows selecting a range which includes all of the explanatory values. Compare Fig. 7 items 4-1 and 4-2 showing the vertical lines used to limit the range of the explanatory variable placed outside of all 13 data points so that all 13 data points are selected within the set range. Based on this description, the limitation above is interpreted to read on setting the range of explanatory variables to include any and all variables, or any subset of explanatory variables. Note that some boundary outside the values of a given data set is inherent and selecting a given data set implicitly selects the boundary outside of the data set. 
Tanimoto teaches “Specifically, a pre-trained exemplar model (referred to as the first learning model M1) the behavior of which is intended to be maintained is assumed to be a teacher model. On the other hand, a student learning model as the target of training which allows the behavior to be maintained as much as possible is assumed to be a training target model (referred to as the second learning model M2). It is assumed that, as supervised training data, labeled data is given. Training data 101 as labeled data is input to the first learning model M1. Then, the output out_o(t) from first learning model M1 is adjusted (corrected) using a correct answer label. The adjusted output out_o′(t) is used to train the second learning model M2.” Tanimoto ¶208. While it is not required based on the explanation of claim interpretation above, additional art will be cited as teaching the unclaimed but implied aspect of elimination of data points beyond a range for explanatory variables. Note that no additional prior art is required to reject this claim, based on the interpretation in the rejection of claim 1. Ochiai teaches removing objective variables beyond a given range. “The objective-variable criterion values may be set to a range of arbitrary objective-variable criterion values for which a factor of an event is desired to know. The range may be a range between the minimum value and the maximum value that the objective-variable time-series data can take, or may be a part of the range. For the part of the range, for example, some kind of a criterion such as “a range of ⅕ to ⅘” or a statistical amount such as within some % is used. The range of the objective-variable criterion values is determined in the factor analysis apparatus 101, or by an external apparatus.” Ochiai ¶44. But this fails to teach a range of the observed (explanatory) variable. 
Garcia teaches: “The most popular and intuitive definition for the concept of point outlier is a point that significantly deviates from its expected value. Therefore, given a univariate time series, a point at time t can be declared an outlier if the distance to its expected value is higher than a predefined threshold τ: |xt − ˆxt | > τ (1) where xt is the observed data point, and ˆxt is its expected value.” Garcia P. 6. It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the teaching of Garcia because point outliers in observed data may correlate with bad data (i.e. incorrect measurements). Further, elimination of outliers may reduce the amount of data, which may reduce the need for computing resources.) and inputting the predetermined explanatory variables to the teacher model generated by the teacher learning section, acquires objective variables as predetermined objective variables with respect to the predetermined explanatory variables, (See Tanimoto Fig. 15 showing training data 101 input to the trained teacher model (M1) which outputs an answer. The answer from M1 is adjusted. The claimed “objective variable” reads on both the raw output and on the adjusted output. “Training data 101 as labeled data is input to the first learning model M1. Then, the output out_o(t) from first learning model M1 is adjusted (corrected) using a correct answer label. The adjusted output out_o′(t) is used to train the second learning model M2.” Tanimoto ¶208.) thereby generating a plurality of pieces of learning data, each piece of which includes one of the predetermined explanatory variables and a corresponding one of the predetermined objective variables, (See Figs. 15 and 18 showing training data 101 input to model M2 while the corresponding output (or adjusted output) of M1 is used to train the model M2. See also Tanimoto ¶208 cited above.) for training a decision tree model. 
(“When the second learning model M2 is another leaning model such as a decision tree, a loss function is calculated using out_o′(t) and out_s(t). Then, on the basis of the loss function, the second learning model M2 may be trained appropriately.” Tanimoto ¶211.) Claim 2. The learning data generating device of Claim 1, wherein the object of control is a production device. (See rejection of claim 1.) Claim 3. A learning device comprising: a processor; and a memory storing instructions for causing the processor to function as: (See rejection of claim 1.) a learning data acquiring section that acquires the learning data generated by the learning data generating device of Claim 1; (See rejection of claim 1. “The training method according to the present embodiment also corresponds to supervised learning using newly obtained data instead of the training data used in the construction of the pre-trained teacher model. Note that, in the construction of the teacher model, semi-supervised learning may also be implemented. Additionally, the first input stage of the student learning model may also include a mechanism of extracting attributes such as mean, median, dispersion, discrete cosine transform, and HOG.” Tanimoto ¶255.) and a learning section that trains the decision tree model on the basis of the learning data acquired by the learning data acquiring section. (The Target Training Model (M2) reads on “a learning section.”) For rejections of claims 5 and 8, see rejection of claim 1. For rejections of claims 6 and 9, see rejection of claim 3. Claim 11. (new): The learning data generating device of Claim 1, wherein the range is set between an upper limit and a lower limit of the explanatory variable in the plurality of pieces of observed data. (See rejection of claim 1, including claim interpretation. Note that no additional prior art is required to reject this claim, based on the interpretation in the rejection of claim 1. 
In the interest of compact prosecution, note also that Ochiai teaches “The objective-variable criterion values may be set to a range of arbitrary objective-variable criterion values for which a factor of an event is desired to know. The range may be a range between the minimum value and the maximum value that the objective-variable time-series data can take, or may be a part of the range. For the part of the range, for example, some kind of a criterion such as “a range of ⅕ to ⅘” or a statistical amount such as within some % is used. The range of the objective-variable criterion values is determined in the factor analysis apparatus 101, or by an external apparatus.” Ochiai ¶44. But this fails to teach a range of the observed (explanatory) variable. Garcia teaches: “The most popular and intuitive definition for the concept of point outlier is a point that significantly deviates from its expected value. Therefore, given a univariate time series, a point at time t can be declared an outlier if the distance to its expected value is higher than a predefined threshold τ: |xt − ˆxt | > τ (1) where xt is the observed data point, and ˆxt is its expected value.” Garcia P. 6. It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the teaching of Garcia because outliers often correlate with bad data (i.e., incorrect measurements), and reducing the amount of data may reduce the need for computing resources.) Claim 12. (new): The learning data generating device of Claim 1, wherein the range is set in a region in which a density of the plurality of pieces of observed data is equal to or greater than a threshold. (See rejection of claim 1. Note also that figure 7 shows selection of the range to include all data points. The range including all data points has a value greater than 0. Areas outside that range have a value of 0. 
A range set in a region where the density of observed data is equal to or greater than a threshold reads on selecting data where the density is greater than 0. This reads on merely using all data. One of ordinary skill in the art would understand the prior art’s teaching to include merely using all data. Therefore, no additional prior art is required to reject this claim. In the interest of compact prosecution, note also that Ochiai teaches “The objective-variable criterion values may be set to a range of arbitrary objective-variable criterion values for which a factor of an event is desired to know. The range may be a range between the minimum value and the maximum value that the objective-variable time-series data can take, or may be a part of the range. For the part of the range, for example, some kind of a criterion such as “a range of ⅕ to ⅘” or a statistical amount such as within some % is used. The range of the objective-variable criterion values is determined in the factor analysis apparatus 101, or by an external apparatus.” Ochiai ¶44. But this fails to teach a range of the observed (explanatory) variable. Garcia teaches: “All of these techniques are based on equation (1). However, not all the existing point outlier detection methods rely on that idea, such as the density-based methods, which belong to the second category of methods depicted in Fig. 7. Techniques within this group consider that points with less than τ neighbors are outliers; that is, when less than τ objects lie within distance R from those points. This could be denoted as xt is an outlier ⇐⇒ |{x ∈ X|d(x,xt) ≤ R}| < τ (2) where d is most commonly the Euclidean distance, xt is the data point at time stamp t to be analyzed, X is the set of data points, and R ∈ R+. Thus, a point is an outlier if τp +τs < τ, where τp and τs are the number of preceding and succeeding neighbors (points that appear before and after xt) at distance lower or equal than R, respectively.” Garcia P. 6. 
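The density-based test quoted from Garcia (equation (2)) can be illustrated with a short sketch. The one-dimensional absolute-difference distance standing in for the Euclidean distance, and the example values, are assumptions for illustration only:

```python
# Sketch of Garcia's equation (2): a point is an outlier when fewer than
# tau other points lie within distance R of it. A 1-D absolute-difference
# distance stands in for the Euclidean distance d.
def is_density_outlier(series, t, R, tau):
    x_t = series[t]
    # count neighbors within distance R, excluding the point itself
    neighbors = sum(1 for i, x in enumerate(series)
                    if i != t and abs(x - x_t) <= R)
    return neighbors < tau

series = [1.0, 1.1, 0.9, 5.0, 1.05]
is_density_outlier(series, 3, R=0.5, tau=2)  # 5.0 has no close neighbors -> True
is_density_outlier(series, 0, R=0.5, tau=2)  # 1.0 has three close neighbors -> False
```

This is the sense in which a density threshold of zero degenerates to selecting all data, as the rejection of claim 12 observes.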
It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the teaching of Garcia because outliers often correlate with bad data (i.e., incorrect measurements), and reducing the amount of data may reduce the need for computing resources.) Claim 13. (new): The learning data generating device of Claim 1, wherein a total number of pieces of learning data generated by the learning data generating section is less than a total number of pieces of observed data acquired by the acquiring section. (See rejection of claim 1, including claim interpretation. Note that no additional prior art is required to reject this claim, based on the interpretation in the rejection of claim 1. In the interest of compact prosecution, note also that Ochiai teaches “The objective-variable criterion values may be set to a range of arbitrary objective-variable criterion values for which a factor of an event is desired to know. The range may be a range between the minimum value and the maximum value that the objective-variable time-series data can take, or may be a part of the range. For the part of the range, for example, some kind of a criterion such as “a range of ⅕ to ⅘” or a statistical amount such as within some % is used. The range of the objective-variable criterion values is determined in the factor analysis apparatus 101, or by an external apparatus.” Ochiai ¶44. But this fails to teach a range of the observed (explanatory) variable. Garcia teaches: “The most popular and intuitive definition for the concept of point outlier is a point that significantly deviates from its expected value. Therefore, given a univariate time series, a point at time t can be declared an outlier if the distance to its expected value is higher than a predefined threshold τ: |xt − ˆxt | > τ (1) where xt is the observed data point, and ˆxt is its expected value.” Garcia P. 6. 
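The point-outlier test of Garcia's equation (1), quoted above, can likewise be sketched. The rolling-mean estimate of the expected value x̂t is one common choice assumed here for illustration; Garcia leaves the estimator open:

```python
# Sketch of Garcia's equation (1): flag x_t as a point outlier when
# |x_t - expected| > tau. The expected value is approximated by the mean
# of the preceding `window` points (an illustrative choice of estimator).
def point_outliers(series, window, tau):
    flagged = []
    for t in range(window, len(series)):
        expected = sum(series[t - window:t]) / window
        if abs(series[t] - expected) > tau:
            flagged.append(t)
    return flagged

point_outliers([1.0, 1.2, 0.9, 1.1, 6.0, 1.0], window=3, tau=2.0)  # -> [4]
```

Dropping the flagged points before generating learning data is the kind of data reduction the rejections of claims 11-14 map to this teaching.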
It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the teaching of Garcia because outliers often correlate with bad data (i.e., incorrect measurements), and reducing the amount of data may reduce the need for computing resources.) Claim 14. (new): The learning data generating device of Claim 1, wherein an upper limit and a lower limit of the range of the explanatory variable are respectively set lower than a maximum value and greater than a minimum value among values of the explanatory variable in the plurality of pieces of observed data. (See rejection of claim 1, including claim interpretation. Note that no additional prior art is required to reject this claim, based on the interpretation in the rejection of claim 1. In the interest of compact prosecution, note also that Ochiai teaches “The objective-variable criterion values may be set to a range of arbitrary objective-variable criterion values for which a factor of an event is desired to know. The range may be a range between the minimum value and the maximum value that the objective-variable time-series data can take, or may be a part of the range. For the part of the range, for example, some kind of a criterion such as “a range of ⅕ to ⅘” or a statistical amount such as within some % is used. The range of the objective-variable criterion values is determined in the factor analysis apparatus 101, or by an external apparatus.” Ochiai ¶44. But this fails to teach a range of the observed (explanatory) variable. Garcia teaches: “The most popular and intuitive definition for the concept of point outlier is a point that significantly deviates from its expected value. Therefore, given a univariate time series, a point at time t can be declared an outlier if the distance to its expected value is higher than a predefined threshold τ: |xt − ˆxt | > τ (1) where xt is the observed data point, and ˆxt is its expected value.” Garcia P. 6. 
It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the teaching of Garcia because outliers often correlate with bad data (i.e., incorrect measurements), and reducing the amount of data may reduce the need for computing resources.) For rejections of claims 15-17 and 18-20, see rejections of claims 12-14, respectively. Claims 4, 7, and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Tanimoto, Ochiai, Garcia, and Matania (Process Control Decision Inference, Monitoring, and Execution, 2019). Claim 4. (original): A control device comprising: a processor; and a memory storing instructions for causing the processor to function as: (See rejection of claim 1.) an information acquiring section that acquires the explanatory variable from the object of control; (See rejection of claim 1.) and a control section that, by inputting the explanatory variable acquired by the information acquiring section to the decision tree model learned by the learning device of Claim 3, acquires an objective variable corresponding to the explanatory variable, and (The art cited in the rejection of claim 1 teaches a model in the form of a decision tree. Tanimoto teaches a model that takes explanatory variables as inputs and outputs a corresponding objective variable. “FIG. 15 is a diagram describing an example of obtaining, by learning, a classification model representing a relationship between an objective variable that represents a result of an event and an explanatory variable that represents a factor of an event. As illustrated in FIG. 15, a learning device learns by using explanatory variables (X.sub.1, X.sub.2, . . . , X.sub.n) (n is a natural number) and an objective variable (boundary condition: Y≧4, criterion values Y={1, 2, 3, 4, 5}) as inputs. Thus, a classification model representing a relationship between the objective variable (Y) and the explanatory variables (any of X.sub.1 to X.sub.n) is generated. In FIG. 
15, the objective variable (Y) uses a boundary condition such as product quality. A boundary condition, i.e. Y≧4, means that a criterion value of allowable quality is “4” or more among criterion values “1” to “5” of predetermined quality. The explanatory variables (X.sub.1, X.sub.2, . . . , X.sub.n) are assigned with values relating to manufacture of a product, such as a heating temperature and a heating time, for example.” Tanimoto ¶13.) carries out control corresponding to the objective variable on the object of control. (Ochiai teaches “The output unit 105 may output an explanatory variable name in influence degree order, and may output an explanatory variable name influencing a part or all of a series of process. The influence degree order is, for example, descending order of value of an influence degree. In addition, the order is not limited to the influence degree order, but may be order of the explanatory variable name, order of arrangement of explanatory variables, or order of leading time of time-series data included in the explanatory variables. The explanatory variable name is an identification name assigned for each explanatory variable and is represented as, for example, motor rotation speed.” Ochiai ¶53. This teaches a reason to carry out control corresponding to the objective variable, but does not expressly state that control is carried out in response to the objective variable. Matania teaches “[Decision trees] predict values of a single target variable by applying decision rules to a set of input variables that influence the prediction. Like many other machine learning approaches, decision trees are reasonably good at reliably inferring decision boundaries from properly labeled data. Unlike other machine learning techniques such as artificial neural networks, decision trees capture and present the inferred decision logic in a form that is understandable by humans. . . . Once a decision tree has been formulated (e.g. 
inferred from data), the same decision tree can be used operationally to identify undesirable control decisions and/or, as the authors have been researching, automate validated control system actions based on monitored plant parameters and states.” Matania P. 1. It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the teaching of Matania because decision trees automate human tasks in a process control setting in a way that humans can understand and therefore monitor.) For rejections of claims 7 and 10, see rejection of claim 4. Response to Arguments Applicant's arguments filed 12/19/2025 have been fully considered but they are not persuasive. Rejections under § 101 The remarks state that the claims do not include subject matter which can be practically performed in the human mind. Claim language is quoted in the remarks, but the Remarks fail to articulate any rationale in support of Applicant’s position. The Remarks further assert that the claims are analogous to those of claim 4 in Example 46 of the 2019 SME guidance, which recited math, but were directed to “a specific application integrated into a technical system.” Rem. 9. Notably absent from the remarks is any “specific application” to which the claims are directed. The Remarks take the position that the claimed subject matter is directed to a “technical improvement by facilitating a reduction in an amount of learning data for training a decision tree model.” Rem. 9. But Applicant fails to articulate any explanation in support of this assertion, citing to paragraphs 8 and 89 of the original specification without further explanation. The amended claim language is cited, but again, nothing in the remarks explains how the claimed operations result in the asserted technical improvement. 
The remarks offer similarly unsupported assertions in the arguments under Step 2B (Alice step 2), with the additional assertion of a technique for “reducing noises from the learning data.” Rem. 10. Again, the Specification is cited without explaining how the claimed operations or techniques result in the purported improvement. It is submitted that an explanation which ties claimed operations or techniques to a technical solution may advance prosecution. Conversely, the failure of the remarks to articulate any connection between claimed operations and the asserted technical solutions is consistent with claims directed to an abstract idea. Rejections under § 103 Applicant states that the previously cited art fails to teach selecting explanatory variables based on “the range.” Rem. 11. Based on the description in Figure 7, selection based on “the range” includes all data points. It is not clear how this should limit the claim scope in a way which would overcome the selection of data points in the prior art. See rejection above. Conclusion THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. 
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PAUL M KNIGHT whose telephone number is (571) 272-8646. The examiner can normally be reached Monday - Friday 9-5 ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michelle Bechtold, can be reached on (571. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. PAUL M. KNIGHT /PAUL M KNIGHT/Examiner, Art Unit 2148 1 This distinction between claims which read on math and claims which recite an abstract idea is based on official USPTO Guidance. 
The 2019 Subject Matter Eligibility (SME) Examples instruct examiners that a claim reciting “training the neural network” where the background describes training as “using stochastic learning with backpropagation which is a type of machine learning algorithm that uses the gradient of a mathematical loss function to adjust the weights of the network” “does not recite any mathematical relationships, formulas, or calculations.” See 2019 SME Example 39, PP. 8-9 (emphasis added). In this example, the plain meaning of “training the neural network” read in light of the disclosure reads on backpropagation using the gradient of a mathematical loss function. See MPEP § 2111.01. In contrast, the 2024 SME Examples instruct examiners that a claim reciting “training, by the computer, the ANN . . . wherein the selected training algorithm includes a backpropagation algorithm and a gradient descent algorithm” does recite an abstract idea because “[t]he plain meaning of [backpropagation algorithm and gradient descent algorithm] are optimization algorithms, which compute neural network parameters using a series of mathematical calculations.” 2024 PEG Example 47, PP. 4-6. The Memorandum of August 4, 2025, “Reminders on evaluating subject matter eligibility of claims under 35 U.S.C. 101,” P. 3 also directs examiners that “training the neural network” recited in Example 39 merely “involve[s] . . . mathematical concepts” and contrasts claim 2 of example 47 as “referring to [specific] mathematical calculations by name[.]” (Emphasis added.) 2 “For instance, the claims in Diehr . . . clearly stated a mathematical equation . . . and the claims in Mayo . . . clearly stated laws of nature . . . such that the claims ‘set forth’ an identifiable judicial exception. Alternatively, the claims in Alice Corp. . . . described the concept of intermediated settlement without ever explicitly using the words ‘intermediated’ or ‘settlement.’” MPEP § 2106.04(II)(A). 
3 “By grouping the abstract ideas, the examiners’ focus has been shifted from relying on individual cases to generally applying the wide body of case law spanning all technologies and claim types. . . . If the identified limitation(s) falls within at least one of the groupings of abstract ideas, it is reasonable to conclude that the claim recites an abstract idea in Step 2A Prong One.” MPEP § 2106.04(a). See also MPEP 2104(a)(2). 4 Step 2A prongs one and two are evaluated individually, consistent with the framework in the MPEP. Evaluation of relationships between abstract ideas and additional elements in one location promotes clarity of the record. 5 “In short, first the specification should be evaluated to determine if the disclosure provides sufficient details such that one of ordinary skill in the art would recognize the claimed invention as providing an improvement. The specification need not explicitly set forth the improvement, but it must describe the invention such that the improvement would be apparent to one of ordinary skill in the art. Conversely, if the specification explicitly sets forth an improvement but in a conclusory manner (i.e., a bare assertion of an improvement without the detail necessary to be apparent to a person of ordinary skill in the art), the examiner should not determine the claim improves technology. Second, if the specification sets forth an improvement in technology, the claim must be evaluated to ensure that the claim itself reflects the disclosed improvement. That is, the claim includes the components or steps of the invention that provide the improvement described in the specification. . . . 
It should be noted that while this consideration is often referred to in an abbreviated manner as the ‘improvements consideration,’ the word ‘improvements’ in the context of this consideration is limited to improvements to the functioning of a computer or any other technology/technical field, whether in Step 2A Prong Two or in Step 2B.” MPEP 2106.04(d)(1). See also Koninklijke KPN N.V. v. Gemalto M2M GmbH, 942 F.3d 1143, 1150-1152 (Fed. Cir. 2019). 6 See MPEP § 2106.05(d)(II) listing operations including “receiving or transmitting data,” “storing and retrieving data in memory,” and “performing repetitive calculations” as WURC. 7 “But ‘[f]or the role of a computer in a computer-implemented invention to be deemed meaningful in the context of this analysis, it must involve more than performance of 'well-understood, routine, [and] conventional activities previously known to the industry.’ Content Extraction, 776 F.3d at 1347-48 (quoting Alice, 134 S. Ct at 2359). Here, the server simply receives data, ‘extract[s] classification information . . . from the received data,’ and ‘stor[es] the digital images . . . taking into consideration the classification information.’ See ‘295 patent, col. 10 ll. 1-17 (Claim 17). . . . These steps fall squarely within our precedent finding generic computer components insufficient to add an inventive concept to an otherwise abstract idea. Alice, 134 S. Ct. at 2360 (‘Nearly every computer will include a 'communications controller' and a 'data storage unit' capable of performing the basic calculation, storage, and transmission functions required by the method claims.’); Content Extraction, 776 F.3d at 1345, 1348 (‘storing information’ into memory, and using a computer to ‘translate the shapes on a physical page into typeface characters,’ insufficient [to] confer patent eligibility); Mortg. 
Grader, 811 F.3d at 1324-25 (generic computer components such as an ‘interface,’ ‘network,’ and ‘database,’ fail to satisfy the inventive concept requirement); Intellectual Ventures I, 792 F.3d at 1368 (a ‘database’ and ‘a communication medium’ ‘are all generic computer elements’); BuySAFE v. Google, Inc., 765 F.3d 1350, 1355 (Fed. Cir. 2014) (‘That a computer receives and sends the information over a network—with no further specification—is not even arguably inventive.’).” TLI Commc'ns LLC v. AV Auto., LLC, 823 F.3d 607, 614 (Fed. Cir. 2016), Emphasis Added. 8 “The analysis as to whether an element (or combination of elements) is widely prevalent or in common use is the same as the analysis under 35 U.S.C. 112(a) as to whether an element is so well-known that it need not be described in detail in the patent specification. See Genetic Techs. Ltd. v. Merial LLC, 818 F.3d 1369, 1377, 118 USPQ2d 1541, 1546 (Fed. Cir. 2016) (supporting the position that amplification was well-understood, routine, conventional for purposes of subject matter eligibility by observing that the patentee expressly argued during prosecution of the application that amplification was a technique readily practiced by those skilled in the art to overcome the rejection of the claim under 35 U.S.C. 112, first paragraph)[.]” MPEP § 2106.05(d)(I). 9 “Similarly, claim elements or combinations of claim elements that are routine, conventional or well-understood cannot transform the claims. (Citing BSG Tech LLC v. BuySeasons, Inc., 899 F.3d 1281, 1290-1291 (Fed. Cir. 2018)). When the patent's specification ‘describes the components and features listed in the claims generically,’ it ‘support[s] the conclusion that these components and features are conventional.’ Weisner v. Google LLC, 51 F.4th 1073, 1083-84 (Fed. Cir. 2022); see also Beteiro, LLC v. DraftKings Inc., 104 F.4th 1350, 1357-58 (Fed. Cir. 2024).” Broadband iTV, Inc. v. Amazon.com, Inc., 113 F.4th 1359 (Fed. Cir. 
2024) 10 “If it is asserted that the invention improves upon conventional functioning of a computer, or upon conventional technology or technological processes, a technical explanation as to how to implement the invention should be present in the specification. That is, the disclosure must provide sufficient details such that one of ordinary skill in the art would recognize the claimed invention as providing an improvement. The specification need not explicitly set forth the improvement, but it must describe the invention such that the improvement would be apparent to one of ordinary skill in the art. Conversely, if the specification explicitly sets forth an improvement but in a conclusory manner (i.e., a bare assertion of an improvement without the detail necessary to be apparent to a person of ordinary skill in the art), the examiner should not determine the claim improves technology.” MPEP § 2106.05(a).
Prosecution Timeline

Sep 09, 2022
Application Filed
Aug 15, 2025
Non-Final Rejection — §101, §103, §112
Dec 19, 2025
Response Filed
Mar 04, 2026
Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12530592
NON-LINEAR LATENT FILTER TECHNIQUES FOR IMAGE EDITING
2y 5m to grant Granted Jan 20, 2026
Patent 12530612
METHODS FOR ALLOCATING LOGICAL QUBITS OF A QUANTUM ALGORITHM IN A QUANTUM PROCESSOR
2y 5m to grant Granted Jan 20, 2026
Patent 12499348
READ THRESHOLD PREDICTION IN MEMORY DEVICES USING DEEP NEURAL NETWORKS
2y 5m to grant Granted Dec 16, 2025
Patent 12462201
DYNAMICALLY OPTIMIZING DECISION TREE INFERENCES
2y 5m to grant Granted Nov 04, 2025
Patent 12456057
METHODS FOR BUILDING A DEEP LATENT FEATURE EXTRACTOR FOR INDUSTRIAL SENSOR DATA
2y 5m to grant Granted Oct 28, 2025
Based on this examiner's 5 most recent grants in similar technology.

Prosecution Projections

3-4
Expected OA Rounds
62%
Grant Probability
79%
With Interview (+17.0%)
3y 1m
Median Time to Grant
Moderate
PTA Risk
Based on 272 resolved cases by this examiner. Grant probability derived from career allow rate.
