DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-4, 6-11, 13-18, and 20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
At step 1, no statutory category rejection having been given above, the claims have been determined to fall within a statutory category.
At step 2a, prong one, referring to claim 1, as emphasized, there is disclosed a computer-implemented method for: identifying log templates from a log file, such identifying “based on” executing a first machine learning model; determining a first set of log templates not matching golden signals of a golden signal dictionary (this dictionary corresponding to metrics); filtering a plurality of log templates based on the golden signal dictionary; generating a second set of log templates corresponding to golden signals; determining, “using” the first ML model, a count of instances of log templates; training a second ML model based on baseline counts; detecting an anomaly by the second ML model by comparing a count to a baseline count; and retraining the second ML model from updated baseline counts. Claim 1 recites, “A computer-implemented method, comprising: identifying one or more instances of each log template of a plurality of log templates included in a log file based on execution of a first machine learning (ML) model on the log file; determining, using a golden signal dictionary, a first set of log templates, from the plurality of log templates, that does not match golden signals, wherein the golden signals correspond to a set of key metrics indicating at least one of a performance of a computing device associated with the log file, a reliability of the computing device, or a storage capacity of the computing device; filtering the plurality of log templates based on the golden signal dictionary; generating, based on the filtering of the plurality of log templates, a second set of log templates that correspond to the golden signals, respectively; determining, using the first ML model, a count of the one or more instances of each log template of the second set of log templates within the log file; training a second ML model based on baseline counts of each log template of the second set of log templates that corresponds to the golden signals, to detect an
anomaly; detecting, by the trained second ML model, the anomaly within the log file, associated with the computing device, based on a comparison of the count of the one or more instances of each log template of the second set of log templates to a baseline count, from the baseline counts, of a respective log template of the second set of log templates; and retraining the second ML model based on an update of the baseline counts, wherein the update of the baseline counts is based on the count of the one or more instances.” Claims 8 and 15 are similar.
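For illustration only, the count-and-compare core of these limitations can be sketched with hypothetical template identifiers, counts, and baselines (the dictionary entries and threshold below are the examiner's assumptions, not Applicant's disclosure), underscoring that the steps amount to filtering, counting, and comparing:

```python
# Illustrative sketch of the count-and-compare core of claim 1,
# using hypothetical data; "signal" labels stand in for golden-signal matches.

from collections import Counter

GOLDEN_SIGNAL_DICTIONARY = {"latency", "errors", "saturation"}  # hypothetical key metrics

def filter_templates(templates, dictionary):
    """Split templates into those matching and not matching golden signals."""
    matching = [t for t in templates if t["signal"] in dictionary]
    non_matching = [t for t in templates if t["signal"] not in dictionary]
    return matching, non_matching

def detect_anomalies(counts, baselines, tolerance=2.0):
    """Flag a template when its observed count exceeds the baseline by a factor."""
    return [tid for tid, c in counts.items() if c > tolerance * baselines.get(tid, 1)]

templates = [
    {"id": "T1", "signal": "latency"},
    {"id": "T2", "signal": "startup"},   # no golden-signal match, so filtered out
    {"id": "T3", "signal": "errors"},
]
matching, non_matching = filter_templates(templates, GOLDEN_SIGNAL_DICTIONARY)
counts = Counter({"T1": 12, "T3": 3})    # instance counts within the log file
baselines = {"T1": 4, "T3": 3}           # baseline counts from prior training
anomalies = detect_anomalies(counts, baselines)  # ["T1"]
```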
The limitations of identifying, determining, filtering, generating, determining, training, detecting, and retraining, as drafted, are processes that, under their broadest reasonable interpretation, cover performance of the limitations in the mind but for the recitation of additional elements that do not integrate the judicial exception into a practical application. That is, nothing in these claim elements as emphasized precludes the steps from practically being performed in the mind, possibly with the aid of pen and paper. For example, these limitations encompass steps of observation, evaluation, judgment, or opinion.
At step 2a, prong two, this judicial exception is not integrated into a practical application. In particular, the claim additionally recites a generic computer and the use of a machine learning model.
The computer is recited at a high level of generality. The computer is used to perform an abstract idea, as discussed above in Step 2A, Prong One, such that it amounts to no more than mere instructions to apply the exception using a generic computer. See MPEP 2106.05(f).
The limitations of applying an ML model (execution of a first ML model, using the first ML model, detecting by the trained second ML model) provide nothing more than mere instructions to implement an abstract idea on a generic computer. See MPEP 2106.05(f). MPEP 2106.05(f) provides the following considerations for determining whether a claim simply recites a judicial exception with the words “apply it” (or an equivalent), such as mere instructions to implement an abstract idea on a computer: (1) whether the claim recites only the idea of a solution or outcome, i.e., the claim fails to recite details of how a solution to a problem is accomplished; (2) whether the claim invokes computers or other machinery merely as a tool to perform an existing process; and (3) the particularity or generality of the application of the judicial exception. Here, the models are used to generally apply the abstract idea without placing any limits on how the models function, and the claim does not include details about how the execution is accomplished.
Even when viewed in combination, these additional elements do not integrate the recited judicial exception into a practical application, and the claim is directed to the judicial exception.
At step 2b, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, there are the additional elements of a generic computer and the use of a machine learning model.
The limitations regarding use of a computer amount to no more than mere instructions to apply the exception using a generic computer component. See MPEP 2106.05(d), for example TLI Communications, Flook, Alice Corp., and Versata.
The recitation of applying a model is at best mere instructions to “apply” the abstract ideas, which cannot provide an inventive concept. See MPEP 2106.05(f); see also MPEP 2106.05(g) (In re Brown; Ameranth) and MPEP 2106.05(d) (Flook). Further, machine learning, including both its training and application, is well understood, routine, and conventional. See, for example, US 20200389478 A1 paragraph 66, US 20150055858 A1 paragraph 5, US 20220319658 A1 paragraph 122, US 20250209347 A1 paragraph 23, and US 20230205740 A1 paragraph 42.
Even when considered in combination, these additional elements represent mere instructions to implement an abstract idea or other exception on a computer and insignificant extra-solution activity, which do not provide an inventive concept.
Further referring to claim 2, this merely further claims conventional machine learning.
Further referring to claim 3, this performs steps of observation, evaluation, judgment, or opinion.
Further referring to claim 4, this performs steps of observation, evaluation, judgment, or opinion and further claims conventional machine learning.
Further referring to claim 6, this performs steps of observation, evaluation, judgment, or opinion and further claims conventional machine learning.
Further referring to claim 7, the claim additionally claims a generic UI to present information. All uses of the recited judicial exceptions require such data gathering and output, and, as such, these limitations do not impose any meaningful limits on the claim. These limitations amount to necessary data gathering and outputting. See MPEP 2106.05. This does not integrate the abstract idea into a practical application. Such interfaces are well understood, routine, and conventional. With respect to the generic interface, see for example US 20050010416 to Anderson (paragraph 125), US 20090150812 to Baker (paragraph 3), and US 20130007655 to Bridgen (paragraph 4). Even when considered in combination, these additional elements represent mere instructions to implement an abstract idea or other exception on a computer and insignificant extra-solution activity, which do not provide an inventive concept.
Referring to claims 9-11, 13, 14, 16-18, 20, see rejections above.
Response to Arguments
Applicant's arguments filed 23 January 2026 have been fully considered but they are not persuasive.
Regarding Applicant’s argument (page 13) asserting similarity with example 39, notably, these claims lack, for example, a similar image transform step. Examiner suggests that these claims are more similar to example 47 claim 2, which was not eligible.
Regarding Applicant’s argument (page 14) that there is a “specific and ordered” processing pipeline or that the judicial exception is applied in a “specific and structured” manner, firstly, this appears to concern the judicial exception itself, which encompasses steps of observation, evaluation, judgment, or opinion. To the extent this concerns the ordering of additional limitations, the invention as a whole is practiced on a computer and, as such, has no specific order. Regarding the application and training of the machine learning models, these appear to be ordered arbitrarily, save that retraining must occur following training.
Regarding Applicant’s argument (page 14) that anomaly detection is performed on the second set rather than all identified log templates (that templates that do not match are excluded), this is regarding steps of observation, evaluation, judgment, or opinion.
Regarding Applicant’s argument (page 16) that this is a solution to a technical problem, this is an abstract idea involving observation, evaluation, judgment, or opinion applied in a technological environment. Any process improvements are to the abstract idea itself and not to the technological environment.
Regarding Applicant’s argument (page 16) that this addresses problems analyzing large volumes of data, this is not evident in the claims.
Regarding Applicant’s argument (pages 16-17) again asserting similarity with example 39, one must acknowledge that some similarities will exist; however, the question turns on the overall teaching of the example. Again, the instant claims lack anything approaching the digital image transforms of example 39. Again, example 47, claim 2, would appear to be the more appropriate comparison.
Regarding Applicant’s argument (page 17) that the specification shows practical implementation, technological improvement, and a nexus between the claims and implementation, firstly, the technology itself (the technological area or field) is not improved but rather, at best, the abstract idea regarding that technology. Secondly, while the invention may be practically applied, it is not integrated into a practical application (see rejection above).
Regarding Applicant’s argument (page 17) alleging integration into a practical application and citing Desjardins, Applicant asserts that there is an improvement in computers or technology. Again, the technology is not itself improved but rather, at best, an abstract idea regarding the technology.
Regarding Applicant’s argument (page 18) that retraining enables anomaly detection that cannot be performed mentally and improves the functioning of the computing system itself, first, regarding adaptive anomaly detection, it is not clear how retraining, itself an abstract idea, enables detection that cannot be performed mentally. As presented, Applicant has claimed a series of limitations performing steps of observation, evaluation, judgment, or opinion. The training is not claimed in any particular form and is understood to encompass any such algorithmic improvement (adaptation).
Secondly, regarding the alleged improvement in the functioning of “the computing system itself,” the examiner asks: which computing system? Claim 1 is to a “computer-implemented method.” Is it that computer, the one that implements Applicant’s invention (the one that is supposed to show what in the art Applicant is innovating)? Claim 1 also claims monitoring of a computing device for which an anomaly can be detected. This latter computing device is merely a source for the data the claimed method actually uses. Notably, nothing even happens to this latter computing device, so it is hard to understand how it is supposedly improved. Either way, the original point stands: the technology is not itself improved but rather, at best, an abstract idea having technology as its subject matter.
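To illustrate the breadth at issue, the claimed update of the baseline counts “based on the count of the one or more instances,” read at its broadest, reduces to a simple running average over stored counts. The following is a hypothetical sketch by the examiner, not Applicant’s disclosed implementation; the blending weight is an assumption:

```python
# Hypothetical sketch: updating stored baseline counts from newly observed
# counts by blending each pair with a fixed weight (a running average).

def update_baselines(baselines, observed, weight=0.5):
    """Blend each stored baseline count with the newly observed count."""
    updated = dict(baselines)
    for template_id, count in observed.items():
        prior = updated.get(template_id, count)
        updated[template_id] = (1 - weight) * prior + weight * count
    return updated

baselines = {"T1": 4.0, "T3": 3.0}   # counts the second ML model was trained on
observed = {"T1": 12.0, "T3": 3.0}   # counts from the current log file
print(update_baselines(baselines, observed))  # {'T1': 8.0, 'T3': 3.0}
```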
Regarding Applicant’s argument (page 18) that limitations were added that are not well-understood, routine, conventional activity, no such additional limitations were identified.
Further, in Recentive Analytics, Inc. v. Fox Corp., the Federal Circuit held that generic machine learning technology carrying out claimed methods is conventional:
Considering the focus of the disputed claims, Alice, 573 U.S. at 217, it is clear that they are directed to ineligible, abstract subject matter. Recentive has repeatedly conceded that it is not claiming machine learning itself. See Appellant’s Br. 45; Transcript at 26:14–15. Both sets of patents rely on the use of generic machine learning technology in carrying out the claimed methods for generating event schedules and network maps. See, e.g., ’367 patent, col. 6 ll. 1–5, col. 11–12; ’811 patent, col. 3, l. 23, col. 5 l. 4. The machine learning technology described in the patents is conventional, as the patents’ specifications demonstrate. See, e.g., ’367 patent, col. 6 ll. 1–5 (requiring “any suitable machine learning technology . . . such as, for example: a gradient boosted random forest, a regression, a neural network, a decision tree, a support vector machine, a Bayesian network, [or] other type of technique”); ’811 patent, col. 3 l. 23 (requiring the application of “any suitable machine learning technique.”).
Further still, the court explained that iterative training and dynamic adjustments likewise do not represent technological improvements:
The requirements that the machine learning model be “iteratively trained” or dynamically adjusted in the Machine Learning Training patents do not represent a technological improvement. Recentive’s own representations about the nature of machine learning vitiate this argument: Iterative training using selected training material and dynamic adjustments based on real-time changes are incident to the very nature of machine learning. See, e.g., Opposition Br. 9 (“[U]sing a machine learning technique[] . . . necessarily includes [an] iterative[] training step . . . .” (internal quotation marks and citation omitted)); Transcript at 26:21–24 (“[T]he way machine learning works is the inputs are defined, the model is trained, and then the algorithm is actually updated and improved over time based on the input”).
Nothing here suggests that Applicant is improving machine learning per se, but rather that generic machine learning is being employed to perform the claimed function.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GABRIEL L CHU whose telephone number is (571)272-3656. The examiner can normally be reached weekdays 8 am to 5 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ashish Thomas can be reached at (571)272-0631. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/GABRIEL CHU/Primary Examiner, Art Unit 2114