Prosecution Insights
Last updated: April 19, 2026
Application No. 17/073,525

Cognitive Error Recommendation Based on Log Data

Non-Final OA (§101, §103)
Filed: Oct 19, 2020
Examiner: TRIEU, EM N
Art Unit: 2128
Tech Center: 2100 — Computer Architecture & Software
Assignee: Oracle International Corporation
OA Round: 5 (Non-Final)
Grant Probability: 48% (Moderate)
OA Rounds: 5-6
To Grant: 3y 10m
With Interview: 53%

Examiner Intelligence

Career Allow Rate: 48% (grants 48% of resolved cases; 30 granted / 63 resolved; -7.4% vs TC avg)
Interview Lift: +5.0% (minimal lift), based on resolved cases with interview
Avg Prosecution: 3y 10m (typical timeline); 29 applications currently pending
Career History: 92 total applications across all art units

Statute-Specific Performance

§101: 29.1% (-10.9% vs TC avg)
§103: 48.5% (+8.5% vs TC avg)
§102: 6.0% (-34.0% vs TC avg)
§112: 13.5% (-26.5% vs TC avg)
Tech Center averages are estimates • Based on career data from 63 resolved cases

Office Action

§101 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This Office action is in response to the claims filed on 08/25/2025. Claims 1-22 are presented for examination.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 08/20/2025 has been entered.

Response to Arguments

In reference to applicant's arguments regarding the rejections under 35 U.S.C. § 101:

Applicant's Argument: The Claims Recite an Improvement in Machine Learning Models for Issue Recommendations with Error Prioritization Based on Log Data. The present claims recite a method and system generating issue recommendations with error prioritization based on log data. Embodiments include a "first ML model" and a "second ML model". The amended claims include limitations that recite the improvements in each of the models. Specifically, for the first ML model, the claims recite "the first ML model is generated by forming a first issues matrix comprising features, logged errors and corresponding error frequency data and transforming the first issues matrix into two or more sparse matrices comprising weights of how each error relates to each feature." Further, for the second ML model, the claims recite "the second ML model comprises a similarity metric and/or a kernel." Further, the amended claims integrate the present invention into a practical application based on the recitation of how the various elements (i.e., first and second ML models, collaborative pipeline, content pipeline, etc.) interact. In conjunction with the August 4, 2025 USPTO-issued Memorandum entitled "Reminders on evaluating subject matter eligibility of claims under 35 U.S.C. 101" ("August 2025 Memo"), Examiners are to consider the claim as a whole when evaluating integration into a practical application pursuant to Step 2A Prong 2: Instead, the analysis should take into consideration all the claim limitations and how these limitations interact and impact each other when evaluating whether the exception is integrated into a practical application. While an additional limitation (or combination) that merely applies the judicial exception on a generic computer may not render a claim eligible on its own, an additional limitation (or combination) that meaningfully limits the judicial exception can render it eligible.

Examiner's Response: Examiner respectfully disagrees with applicant's argument because the claim amendment is not integrated into a practical application: the claims recite the mental process "transforming the first issues matrix into two or more sparse matrices comprising weights of how each error relates to each feature," and the human mind can transform one matrix into another matrix to determine how each error relates to each feature, for example, to determine that a particular IP address is more anomalous than other IP addresses.
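As background for the limitation quoted in the preceding paragraphs, the following is a minimal illustrative sketch, not taken from the application or this Office action, of factoring an "issues matrix" of error frequencies into two lower-rank weight matrices of the kind the claim language describes. All matrix values, dimensions, and names are hypothetical.

```python
# Illustrative sketch only (not the applicant's implementation): factor a small
# "issues matrix" (rows = system features, columns = logged errors, values =
# error frequencies) into two low-rank weight matrices standing in for the
# claimed "sparse matrices comprising weights".
import numpy as np

rng = np.random.default_rng(0)

issues_matrix = np.array([
    [5, 0, 2, 0],
    [0, 3, 0, 1],
    [4, 0, 0, 2],
], dtype=float)

n_features, n_errors = issues_matrix.shape
k = 2  # latent dimension (hypothetical)

W = rng.random((n_features, k))   # weights relating each feature to latent factors
H = rng.random((k, n_errors))     # weights relating latent factors to each error

# Standard multiplicative-update NMF iterations: W @ H converges toward an
# approximation of the original issues matrix.
for _ in range(200):
    H *= (W.T @ issues_matrix) / (W.T @ W @ H + 1e-9)
    W *= (issues_matrix @ H.T) / (W @ H @ H.T + 1e-9)

print(np.round(W @ H, 2))  # low-rank approximation of the issues matrix
```

In this sketch the two factors play the role of the claimed weight matrices: one relates features to latent factors, the other relates latent factors to errors, and their product approximates the original error-frequency matrix.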
The additional claim limitations, "the first ML model is generated by forming a first issues matrix comprising features, logged errors and corresponding error frequency data" and "the second ML model comprises a similarity metric and/or a kernel," amount to no more than generally linking the use of a judicial exception to a particular technological environment or field of use. As explained by the Supreme Court, a claim directed to a judicial exception cannot be made eligible "simply by having the applicant acquiesce to limiting the reach of the patent for the formula to a particular technological use." Diamond v. Diehr, 450 U.S. 175, 192 n.14, 209 USPQ 1, 10 n.14 (1981). Thus, limitations that amount to merely indicating a field of use or technological environment in which to apply a judicial exception do not integrate the judicial exception into a practical application. Therefore, the claim limitations are not integrated into a practical application, and the claim does not recite an improvement to the machine learning model or an improvement to the functioning of the computer in the technological field.

Applicant's Argument: The Specification Explains the Improvements Recited in the Claims. The Federal Circuit has assessed whether an improvement is described in a patentee's Specification when determining the subject matter eligibility of a claim. See, e.g., Enfish LLC v. Microsoft Corp., 822 F.3d 1327 (Fed. Cir. 2016); Bascom Global Internet Services, Inc. v. AT&T Mobility LLC, 827 F.3d 1341 (Fed. Cir. 2016). As disclosed at [0014]-[0015] of Applicant's specification, cloud systems experience complex issues: A cloud service provider can implement a number of different system architectures, such as for different end users or customers. For example, system implementations can include combinations of cloud services, combinations of platform configurations, or other suitable combinations of components that present a system architecture. In some embodiments, the systems include different combinations of components that present unique heterogeneous system architectures. Accordingly, a given cloud service provider with a number of customers/systems can result in a large number of heterogenous environments with complex layers of components, where a variety of errors, problems, inefficiencies, or general issues can be encountered. Developing an understanding of which issues are impactful across different customers/systems (or for specific customers/systems) can enable resources to be focused and prioritized (at times even before the errors are actually reported). The specification, at [0015]-[0016], explains the technological improvements achieved by the functionality of amended claim 1 and the generated issue recommendations: In some embodiments, the software and hardware that implement the plurality of systems generate log data. For example, each system (with a given system architecture) can generate log data over time. In addition, while the systems are executing software, errors, inefficiencies, or general issues can be encountered that are reflected in the log data. In some embodiments, this log data is processed to generate a data set for machine learning. The generated data set can include issue labels, or labels for a sequence of log data/entries that represent an issue encountered when implementing the systems. In addition, features can be extracted from the data set that reflect characteristics of the issue labels.
In some embodiments, issue recommendations can be generated using machine learning algorithms based on the extracted features and the generated data set. For example, collaborative based machine learning filtering and content based machine learning filtering can be used to generate a hybrid recommendation of issues. In some embodiments, the hybrid recommendation of issues can represent errors that impact across different ones of the system architectures and/or errors that are impactful to the systems. Id. at [0034] further explains how cloud systems can benefit from the issue recommendations: Embodiments of the issue recommendation service include a number of benefits:

• Customer Satisfaction: Issues which have not been directly reported by end users/customers can be assessed by a product development/support team ahead of time. Based on a priority provided by the recommendation service, these unreported issues can be mitigated. Accordingly, going forward, end users/customers can report fewer issues which could impact operations.

• Product Improvement: Issues that appear frequently and which are similar in nature, as prioritized by the recommendation service, can aid preventive solutions. For example, issues which are possible bugs (being un-noticed during testing & development phase) can be mitigated, which helps in improving the relevant product over a period of time.

• Personalization: Recommendations are often received from end users/customers as part of feedback because they are a ripe source of issues/errors. For this reason, end users/customers are good at recommending issues, and recommendation systems often try to model this behavior. Embodiments of the recommendation service use the data accumulated indirectly to improve the product's overall services and ensure that they are suitable according to an end user/customer preference.

Amended claim 1 recites that "a system update based on one or more of the issue recommendations is used to mitigate at least one system bug or system error at a first cloud system, the first cloud system comprising one of the plurality of cloud systems or a new cloud system." Accordingly, amended claim 1 explicitly improves the claimed first cloud system via the recited system update, and thus a) improves the functioning of the cloud system and b) improves cloud system technology.

Examiner's Response: Examiner respectfully disagrees with applicant's argument regarding the § 101 rejection based on paragraphs [0016], [0017], [0034], and [0035] of the specification, because the claim does not recite limitations and/or an improvement that reflect those paragraphs. Therefore, applicant's argument is not persuasive and the rejection is maintained.

In reference to applicant's arguments regarding the rejections under 35 U.S.C. § 103:

Applicant's Argument: Reconsideration of these rejections is respectfully requested because the prior art fails to disclose generating prioritized issue recommendations from log data using a first ML model generated using sparse matrices and a second ML model using a similarity metric and/or kernel. Embodiments generate issue recommendations with error prioritization based on log data. See specification at [0016]. Embodiments ingest the log data to generate an event stream for a plurality of cloud systems, where each of the plurality of cloud systems comprises a combination of components, and the plurality of cloud systems. Id. at [0043].
Embodiments process the generated log data event streams to generate a data set, where the data set comprises issue labels for issues experienced by the plurality of cloud systems. Id. at [0052]. Examiner’s Response: Examiner respectfully disagrees to applicant’s argument since Muddu further teach amended claim limitations, as Muddu teaches “A method for generating machine learning recommendations issue recommendations with error prioritization using based on log data,” (Muddu, [Par.0348-0349], “As mentioned above, the security platform 300 detects anomalies in event data, and further detects threats based on detected anomalies. In some embodiments, the security platform also defines and detects an additional type of indicator of potential security breach, called threat indicators. Threat indicators are an intermediary level of potential security breach indicator defined within a hierarchy of security breach indicators that includes anomalies at the bottom level, threat indicators as an intermediate level, and threats at the top level.[0349] FIG. 23 is flow diagram illustrating at a high level, a processing hierarchy 2300 of detecting anomalies, identifying threat indicators, and identifying threats with the security platform 300. Reducing false positives in identifying security threats to the network is one goal of the security platform. To this end, flow diagram describes an overall process 2300 by which large amounts of incoming event data 2302 are processed to detect anomalies…” Examiner’s note, Muddu teaches the anomaly detection is based on event data, wherein, the anomaly detection is defined in difference levels such as anomalies at the bottom level, threat indicators as an intermediate level, and threats at the top level, that are corresponding to the error prioritization), wherein the first ML model is generated by forming a first issues matrix comprising features, logged errors and corresponding error frequency data (Muddu, [Par.0316-0317], “FIG. 21 is a flow diagram illustrating a method 2100 to execute a model deliberation process thread, in accordance with various embodiments. A computation worker executes the model deliberation process thread. In some embodiments, the computation worker execute multiple model training process threads associated with a single model type. In some embodiments, the computation worker execute multiple model-specific process threads associated with a single model type. In some embodiments, the computation worker execute multiple model-specific process threads associated with different model types. At step 2102, the model deliberation process thread processes the most recent time slice from the group-specific data stream to compute a score associated with the most recent time slice. The most recent time slice can correspond to an event or a sequence of event observed at the target computer network. In some embodiments, the group-specific data stream used by the model deliberation process thread is also used by a corresponding model training process thread for the same entity. That is, the model training process thread can train a model state of an entity-specific machine learning model by processing a previous time slice of the group-specific data stream. The model execution engine 1808 can initiate the model deliberation process thread based on the model state while the model training process thread continues to create new versions (e.g., new model states). 
In some embodiments, the model deliberation process thread can reconfigure to an updated model state without pausing or restarting.[0317] At step 2104, the model deliberation process thread generates a security-related conclusion based on the score. The security-related conclusion can identify the event or the sequence of events corresponding to the time slice as a security-related anomaly, threat indicator or threat. In one example, the model deliberation process compares the score against a constant threshold and makes the security-related conclusion based on the comparison. In another example, the model deliberation process compares the score against a dynamically updated baseline (e.g., statistical baseline) and makes the security-related conclusion based on the comparison.” Examiner’s note, the machine learning model generates the score based on the event data to determine whether the entity is associated with thread indicator or thread. The anomaly is classified based on the login error and unusual activity time that corresponding to the logged errors and error frequency data, as it can be seen at [Par.0447], “In one aspect of the techniques introduced here, the event data is analyzed, via various machine learning techniques as disclosed herein, to identify anomalies from expected or authorized network activity or behavior. An “anomaly” in the context of this description is a detected fact, i.e., it is objective information, whereas a “threat” (discussed further below) is an interpretation or conclusion that is based on one or more detected anomalies. Anomalies can be classified into various types. As examples, anomalies can be alarms, blacklisted applications/domains/IP addresses, domain name anomalies, excessive uploads or downloads, website attacks, land speed violations, machine generated beacons, login errors, multiple outgoing connections, unusual activity time/sequence/file access/network activity, etc. Anomalies typically occur at a particular date and time and involve one or more participants, which can include both users and devices.” and transforming the first issues matrix into two or more sparse matrices comprising weights of how each error relates to each feature (Muddu, [Par.0360], “Process 2500 continues at step 2506 with assigning an anomaly score based on the processing of the event data 2302 through the anomaly model. Calculation of the anomaly score is done by the processing logic contained within the anomaly model and represents a quantification of a degree to which the processed event data is associated with anomalous activity on the network. In some embodiments, the anomaly score is a value in a specified range. For example, the resulting anomaly score may be a value between 0 and 10, with 0 being the least anomalous and 10 being the most anomalous.” Examiner’s note, the machine learning generates the anomaly score to determine anomaly level of the entity based on comparison of the score, therefore, the anomaly score levels are considered as the spare matrices that the comprise the weight range from 0-10 ). Therefore, the applicant’s argument is not persuasive, the rejection is still maintained. Claim Rejections - 35 USC § 101 35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title. Claims 1, 3-11 and 13-22 are rejected under 35 U.S.C. 
101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1 analysis: In the instant case, the claims are directed to a method (claims 1, 3-11, 21, 22), a system (claims 13-19), and a non-transitory computer readable medium (claim 20). Thus, each of the claims falls within one of the four statutory categories (i.e., process, machine, manufacture, or composition of matter).

Step 2A analysis: Based on the claims being determined to be within one of the four categories (Step 1), it must be determined whether the claims are directed to a judicial exception (i.e., a law of nature, natural phenomenon, or abstract idea); in this case the claims fall within the judicial exception of an abstract idea. Specifically, the abstract idea of "Mental Processes/Concepts performed in the human mind (including an observation, evaluation, judgment, opinion)".

Claim 1 recites:

Step 2A, Prong 1 analysis:
- "ingesting log data to generate an event stream for": this is a mental process; the human mind can use the log data to generate the event stream of the particular system (Observation/Evaluation).
- "processing the generated event streams to generate a data set": this is a mental process; the human mind can use the generated event stream data to generate the data set (Observation/Evaluation).
- "extracting features from the generated data set": this is a mental process; the human mind can extract the features from the particular data set (Observation/Evaluation).
- "generating, based on the extracted features and the generated data set, issue recommendations": this is a mental process; the human mind can provide issue recommendations based on the extracted features and the generated data set (Observation/Evaluation).
- "transforming the first issues matrix into two or more sparse matrices comprising weights of how each error relates to each feature": this is a mental process; the human mind can transform one matrix into another matrix to determine how each error relates to each feature, for example, determining that a particular IP address is more anomalous than other IP addresses.
- "that analyzes features of the heterogenous system architectures to generate collaborative recommendations": this is a mental process; the human mind can analyze the features of the system architecture to generate the collaborative recommendations (Observation/Evaluation).
- "to analyze features of the issues experienced by the plurality of cloud systems to generate content recommendations": this is a mental process; the human mind can analyze the issues experienced by the plurality of cloud systems to generate the recommendations (Observation/Evaluation).
- "the collaborative recommendations and the content recommendations are combined to generate the issue recommendations": this is a mental process; the human mind can generate the issue recommendations based on the collaborative recommendations and the content recommendations (Observation/Evaluation).

a) Step 2A, Prong 2 analysis:
- "wherein each of the plurality of cloud systems comprises a combination of components, and the plurality of cloud systems present heterogenous system architectures that comprise different mixes of the components" and "wherein the first ML model is generated by forming a first issues matrix comprising features, logged errors and corresponding error frequency data": these limitations amount to no more than generally linking the use of a judicial exception to a particular technological environment or field of use.
As explained by the Supreme Court, a claim directed to a judicial exception cannot be made eligible "simply by having the applicant acquiesce to limiting the reach of the patent for the formula to a particular technological use." Diamond v. Diehr, 450 U.S. 175, 192 n.14, 209 USPQ 1, 10 n. 14 (1981). Thus, limitations that amount to merely indicating a field of use or technological environment in which to apply a judicial exception and that it does not integrate the judicial exception into a practical application. -“wherein the data set comprises issue labels for issues experienced by the plurality of cloud systems”, , features of the heterogenous system architectures”, “the first model comprises a first data structure and a second data structure that are trained using a)_iterative factorization and b) an issue data structure, the iterative factorization training weights of the first and second data structures to achieve an improved approximation of the issue data structure,”, “the first cloud system comprising one of the plurality of cloud systems or a new cloud system.”, “wherein the second ML model comprises a similarity metric and/or a kernel;”, “wherein the generated issue recommendations comprise issue rankings based on a weighted score” This/these limitation(s) is/are amount to no more than generally linking the use of a judicial exception to a particular technological environment or field of use. As explained by the Supreme Court, a claim directed to a judicial exception cannot be made eligible "simply by having the applicant acquiesce to limiting the reach of the patent for the formula to a particular technological use." Diamond v. Diehr, 450 U.S. 175, 192 n.14, 209 USPQ 1, 10 n. 14 (1981). Thus, limitations that amount to merely indicating a field of use or technological environment in which to apply a judicial exception and that it does not integrate the judicial exception into a practical application. -“ generating issue recommendations with error prioritization using based on log data”, “a plurality of cloud systems”, “cloud system”, “using a hybrid plurality of machine learning models that comprise a collaborative pipeline and a content pipeline”, “the collaborative pipeline comprises a first machine learning model”, “the content pipeline comprises a second ML model of the machine learning models” These limitations are recited at high level of generality and amounts to no more than mere instructions to apply the judicial exception using a generic computer component (See MPEP 2106.05(f)). -“and a system update based on one or more of the issue recommendations is used to mitigate at least one system bug or system error at a first cloud system” this claim limitation recite This additional element is recited at a high level of generality such that the claim recites only the idea of a solution or outcome (system updated based on the issues recommendation). The claim fails to recite details of how the solution or outcome is accomplished and covers any solution to an identified problem with no restriction on how the result is accomplished and no description of the mechanism for accomplishing the result, does not integrate a judicial exception into a practical application or provide significantly more because this type of recitation is equivalent to the words "apply it". See Electric Power Group, LLC v. Alstom, S.A., 830 F.3d 1350, 1356, 119 USPQ2d 1739, 1743-44 (Fed. Cir. 2016); Intellectual Ventures I v. Symantec, 838 F.3d 1307, 1327, 120 USPQ2d 1353, 1366 (Fed. Cir. 
2016); Internet Patents Corp. v. Active Network, Inc., 790 F.3d 1343, 1348, 115 USPQ2d 1414, 1417 (Fed. Cir. 2015). b) Step 2B analysis: --“ wherein each of the plurality of cloud systems comprises a combination of components, and the plurality of cloud systems present heterogenous system architectures that comprise different mixes of the components”, “wherein the first ML model is generated by forming a first issues matrixcomprising features, logged errors and corresponding error frequency data” This/these limitation(s) is/are amount to no more than generally linking the use of a judicial exception to a particular technological environment or field of use. As explained by the Supreme Court, a claim directed to a judicial exception cannot be made eligible "simply by having the applicant acquiesce to limiting the reach of the patent for the formula to a particular technological use." Diamond v. Diehr, 450 U.S. 175, 192 n.14, 209 USPQ 1, 10 n. 14 (1981). Thus, limitations that amount to merely indicating a field of use or technological environment in which to apply a judicial exception do not amount to significantly more than the exception itself. -“wherein the data set comprises issue labels for issues experienced by the plurality of cloud systems”, , features of the heterogenous system architectures”, “the first model comprises a first data structure and a second data structure that are trained using a)_iterative factorization and b) an issue data structure, the iterative factorization training weights of the first and second data structures to achieve an improved approximation of the issue data structure,”, “the first cloud system comprising one of the plurality of cloud systems or a new cloud system.”, “wherein the second ML model comprises a similarity metric and/or a kernel;”, “wherein the generated issue recommendations comprise issue rankings based on a weighted score” This/these limitation(s) is/are amount to no more than generally linking the use of a judicial exception to a particular technological environment or field of use. As explained by the Supreme Court, a claim directed to a judicial exception cannot be made eligible "simply by having the applicant acquiesce to limiting the reach of the patent for the formula to a particular technological use." Diamond v. Diehr, 450 U.S. 175, 192 n.14, 209 USPQ 1, 10 n. 14 (1981). Thus, limitations that amount to merely indicating a field of use or technological environment in which to apply a judicial exception do not amount to significantly more than the exception itself. --“ generating issue recommendations with error prioritization using based on log data”, “a plurality of cloud systems”, “cloud system”, “using a hybrid plurality of machine learning models that comprise a collaborative pipeline and a content pipeline”, “the collaborative pipeline comprises a first machine learning model”, “the content pipeline comprises a second ML model of the machine learning models” These limitations are recited at high level of generality and amounts to no more than mere instructions to apply the judicial exception using a generic computer component (See MPEP 2106.05(f)). -“and a system update based on one or more of the issue recommendations is used to mitigate at least one system bug or system error at a first cloud system” this claim limitation recite This additional element is recited at a high level of generality such that the claim recites only the idea of a solution or outcome (system updated based on the issues recommendation). 
The claim fails to recite details of how the solution or outcome is accomplished and covers any solution to an identified problem with no restriction on how the result is accomplished and no description of the mechanism for accomplishing the result. Thus, limitations that amount to merely indicating a field of use or technological environment in which to apply a judicial exception do not amount to significantly more than the exception itself or provide significantly more because this type of recitation is equivalent to the words "apply it". See Electric Power Group, LLC v. Alstom, S.A., 830 F.3d 1350, 1356, 119 USPQ2d 1739, 1743-44 (Fed. Cir. 2016); Intellectual Ventures I v. Symantec, 838 F.3d 1307, 1327, 120 USPQ2d 1353, 1366 (Fed. Cir. 2016); Internet Patents Corp. v. Active Network, Inc., 790 F.3d 1343, 1348, 115 USPQ2d 1414, 1417 (Fed. Cir. 2015). The claim 3 recites: a) Step 2A: Prong 2 analysis: -“ wherein at least a portion of the heterogenous system architectures comprise independent cloud systems that are hosted in different cloud environments for different cloud customers.” This/these limitation(s) is/are amount to no more than generally linking the use of a judicial exception to a particular technological environment or field of use. As explained by the Supreme Court, a claim directed to a judicial exception cannot be made eligible "simply by having the applicant acquiesce to limiting the reach of the patent for the formula to a particular technological use." Diamond v. Diehr, 450 U.S. 175, 192 n.14, 209 USPQ 1, 10 n. 14 (1981). Thus, limitations that amount to merely indicating a field of use or technological environment in which to apply a judicial exception and that it does not integrate the judicial exception into a practical application. b) Step 2B analysis: -“ wherein at least a portion of the heterogenous system architectures comprise independent cloud systems that are hosted in different cloud environments for different cloud customers.” This/these limitation(s) is/are amount to no more than generally linking the use of a judicial exception to a particular technological environment or field of use. As explained by the Supreme Court, a claim directed to a judicial exception cannot be made eligible "simply by having the applicant acquiesce to limiting the reach of the patent for the formula to a particular technological use." Diamond v. Diehr, 450 U.S. 175, 192 n.14, 209 USPQ 1, 10 n. 14 (1981). Thus, limitations that amount to merely indicating a field of use or technological environment in which to apply a judicial exception do not amount to significantly more than the exception itself. The claim 4 recites: Step 2A: prong 1 analysis: -“ wherein each issue label is defined based on a distinct sequence of log data from the event streams, the distinct sequences being representative of the issue labels.” This is a mental process, the human mind can define the issue label based on the distinct sequence of the log data from the event stream, (Observation/Evaluation). Step 2A: Prong 2 analysis and Step 2B analysis: No additional element that Integrates the judicial exception into a practical application or amount to significantly more than the abstract idea. 
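As context for the issue-label limitations discussed above (claim 4's labels defined by distinct sequences of log data, and the module identifiers of claims 5-6 addressed next), here is a minimal, hypothetical sketch, not drawn from the application or this Office action, of assigning one issue label per distinct sequence of (module identifier, error) pairs in an event stream. All module names and error codes are invented for illustration.

```python
# Hypothetical sketch: derive issue labels from distinct sequences of
# (module identifier, error) pairs observed per system in an event stream.
from collections import defaultdict

event_stream = [
    ("sys-A", "auth", "ERR_TIMEOUT"),
    ("sys-A", "db", "ERR_CONN"),
    ("sys-B", "auth", "ERR_TIMEOUT"),
    ("sys-B", "db", "ERR_CONN"),
    ("sys-C", "cache", "ERR_MISS"),
]

# Group log entries per system, preserving the ordered sequence of module IDs.
sequences = defaultdict(list)
for system, module, error in event_stream:
    sequences[system].append((module, error))

# Each distinct sequence gets its own issue label; systems that produce the
# same sequence share a label.
labels, data_set = {}, {}
for system, seq in sequences.items():
    key = tuple(seq)
    labels.setdefault(key, f"ISSUE-{len(labels) + 1}")
    data_set[system] = labels[key]

print(data_set)  # {'sys-A': 'ISSUE-1', 'sys-B': 'ISSUE-1', 'sys-C': 'ISSUE-2'}
```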
The claim 5 recites:

Step 2A, Prong 1 analysis:
- "processing the generated event streams to generate the data set comprises encoding the log data from the event streams with the module identifiers": this is a mental process; the human mind can process the generated event stream to generate the data set comprising the encoded log data with the module IDs (Observation/Evaluation).
- "wherein module identifiers associated with the components that comprise the plurality of cloud systems are determined from the log data": this is a mental process; the human mind can determine the module IDs associated with the components that comprise the plurality of cloud systems (Observation/Evaluation).

Step 2A, Prong 2 analysis and Step 2B analysis: No additional element integrates the judicial exception into a practical application or amounts to significantly more than the abstract idea.

The claim 6 recites:

Step 2A, Prong 1 analysis:
- "wherein the distinct sequences of log data are defined using distinct sequences of module identifiers": this is a mental process; the human mind can define the distinct sequences of log data based on the distinct sequences of module IDs (Observation/Evaluation).

Step 2A, Prong 2 analysis and Step 2B analysis: No additional element integrates the judicial exception into a practical application or amounts to significantly more than the abstract idea.

The claim 7 recites:

a) Step 2A, Prong 2 analysis:
- "The collaborative recommendations comprise a first recommendation score for the issue labels in the data set, the first recommendation score being based on issue embeddings defined in a shared latent space." [image omitted] This limitation amounts to no more than generally linking the use of a judicial exception to a particular technological environment or field of use. As explained by the Supreme Court, a claim directed to a judicial exception cannot be made eligible "simply by having the applicant acquiesce to limiting the reach of the patent for the formula to a particular technological use." Diamond v. Diehr, 450 U.S. 175, 192 n.14, 209 USPQ 1, 10 n.14 (1981). Thus, limitations that amount to merely indicating a field of use or technological environment in which to apply a judicial exception do not integrate the judicial exception into a practical application.

b) Step 2B analysis:
- "The collaborative recommendations comprise a first recommendation score for the issue labels in the data set, the first recommendation score being based on issue embeddings defined in a shared latent space." [image omitted] This limitation amounts to no more than generally linking the use of a judicial exception to a particular technological environment or field of use. As explained by the Supreme Court, a claim directed to a judicial exception cannot be made eligible "simply by having the applicant acquiesce to limiting the reach of the patent for the formula to a particular technological use." Diamond v. Diehr, 450 U.S. 175, 192 n.14, 209 USPQ 1, 10 n.14 (1981). Thus, limitations that amount to merely indicating a field of use or technological environment in which to apply a judicial exception do not amount to significantly more than the exception itself.
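To make the "issue embeddings defined in a shared latent space" language of claim 7 concrete, the following is an illustrative sketch only, not the applicant's implementation: a system architecture and the issue labels are mapped to weight vectors in one latent space, and the first (collaborative) recommendation score is taken as their dot product. All embedding values are invented.

```python
# Illustrative sketch: score issue labels for a system architecture using
# embeddings that share one latent space (3 hypothetical dimensions).
import numpy as np

architecture_embeddings = {"arch-1": np.array([0.9, 0.1, 0.3])}
issue_embeddings = {
    "ISSUE-1": np.array([0.8, 0.2, 0.1]),
    "ISSUE-2": np.array([0.1, 0.9, 0.4]),
}

# First recommendation score: dot product between architecture and issue embeddings.
arch = architecture_embeddings["arch-1"]
scores = {issue: float(arch @ emb) for issue, emb in issue_embeddings.items()}

for issue, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(issue, round(score, 3))  # highest-scoring issues first
```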
The claim 8 recites: Step 2A: prong 1 analysis: -“ maps at least portion of the heterogeneous system architectures to a weight set and at least portion of the issue labels to the weight set defined by the first and second data structures.” This is a mental process, the human can map the portion of the particular system to weight set and portion of the issue labels to the weight set, (Observation/evaluation). -“ wherein the shared latent space comprises a latent space that maps at least portion of the heterogeneous system architectures to a weight set and at least portion of the issue labels to the weight set.” This is a mental process; the human mind can map the portion of the particular system architecture to a weight set and at least portion of the issue labels to the weight set (Observation/Evaluation). Step 2A: Prong 2 analysis and Step 2B analysis: No additional element that Integrates the judicial exception into a practical application or amount to significantly more than the abstract idea. The claim 9 recites: a) Step 2A: Prong 2 analysis: -“ wherein, the content recommendation comprise a second recommendation score for the issue labels in the data set, the second recommendation score being based on a similarity between issue parameters for at least two issue labels” This/these limitation(s) is/are amount to no more than generally linking the use of a judicial exception to a particular technological environment or field of use. As explained by the Supreme Court, a claim directed to a judicial exception cannot be made eligible "simply by having the applicant acquiesce to limiting the reach of the patent for the formula to a particular technological use." Diamond v. Diehr, 450 U.S. 175, 192 n.14, 209 USPQ 1, 10 n. 14 (1981). Thus, limitations that amount to merely indicating a field of use or technological environment in which to apply a judicial exception and that it does not integrate the judicial exception into a practical application. b) Step 2B analysis: -“ wherein, the content recommendation comprise a second recommendation score for the issue labels in the data set, the second recommendation score being based on a similarity between issue parameters for at least two issue labels” This/these limitation(s) is/are amount to no more than generally linking the use of a judicial exception to a particular technological environment or field of use. As explained by the Supreme Court, a claim directed to a judicial exception cannot be made eligible "simply by having the applicant acquiesce to limiting the reach of the patent for the formula to a particular technological use." Diamond v. Diehr, 450 U.S. 175, 192 n.14, 209 USPQ 1, 10 n. 14 (1981). Thus, limitations that amount to merely indicating a field of use or technological environment in which to apply a judicial exception do not amount to significantly more than the exception itself. The claim 10 recites: a) Step 2A: Prong 2 analysis: -“wherein the issue parameters comprise a stack trace for one or more errors related to the at least two issue labels” This/these limitation(s) is/are amount to no more than generally linking the use of a judicial exception to a particular technological environment or field of use. As explained by the Supreme Court, a claim directed to a judicial exception cannot be made eligible "simply by having the applicant acquiesce to limiting the reach of the patent for the formula to a particular technological use." Diamond v. Diehr, 450 U.S. 175, 192 n.14, 209 USPQ 1, 10 n. 14 (1981). 
Thus, limitations that amount to merely indicating a field of use or technological environment in which to apply a judicial exception and that it does not integrate the judicial exception into a practical application. b) Step 2B analysis: -“wherein the issue parameters comprise a stack trace for one or more errors related to the at least two issue labels” This/these limitation(s) is/are amount to no more than generally linking the use of a judicial exception to a particular technological environment or field of use. As explained by the Supreme Court, a claim directed to a judicial exception cannot be made eligible "simply by having the applicant acquiesce to limiting the reach of the patent for the formula to a particular technological use." Diamond v. Diehr, 450 U.S. 175, 192 n.14, 209 USPQ 1, 10 n. 14 (1981). Thus, limitations that amount to merely indicating a field of use or technological environment in which to apply a judicial exception do not amount to significantly more than the exception itself. The claim 11 recites: Step 2A: prong 1 analysis: -“ the issue recommendations are generated using a combination of the first recommendation score and the second recommendation score and the combination comprises a weighted average of the first recommendation score and the second recommendation score.” this is a mental process, the human mind can generate the issue recommendation by using the combination of first and second recommendation score, (Observation/Evaluation). Step 2A: Prong 2 analysis and Step 2B analysis: No additional element that Integrates the judicial exception into a practical application or amount to significantly more than the abstract idea. The claim 13 is rejected for the same reason as the claim 1, since these claims recite the same limitation. The claim 14 is rejected for the same reason as the claims 2 and 3 since these claims recite the same limitation. The claim 15 is rejected for the same reason as the claim 4, since these claims recite the same limitation. The claim 16 is rejected for the same reason as the claims 5 and 6, since these claims recite the same limitation. The claim 17 is rejected for the same reason as the claims 7 and 8, since these claims recite the same limitation. The claim 18 is rejected for the same reason as the claims 9 and 10, since these claims recites the same limitation. The claim 19 is rejected for the same reason as the claim 11, since these claims recites the same limitation. The claim 20 is rejected for the same reason as the claim 1, since these claims recite the same limitation. The claim 21 recites: a) Step 2A: Prong 1 analysis: -“ analyzing at least a stack trace for one or more errors related to the issues experienced” this is a mental process, the human mind can analyze the stack trace for one or more error related to the issues experienced, (Observation/Evaluation). a) Step 2A: Prong 2 analysis: -“ the second model, generates the content recommendations” These limitations are recited at high level of generality and amounts to no more than mere instructions to apply the judicial exception using a generic computer component (See MPEP 2106.05(f)). b) Step 2B analysis: -“ the second model, generates the content recommendations” These limitations are recited at high level of generality and amounts to no more than mere instructions to apply the judicial exception using a generic computer component (See MPEP 2106.05(f)). 
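For claims 9-11 as characterized above (a second, content-based recommendation score derived from a similarity between issue parameters, combined with the collaborative score as a weighted average), the following minimal sketch is hypothetical: it assumes cosine similarity as the similarity metric and uses made-up scores, parameter vectors, and weights.

```python
# Hypothetical sketch: content-based score via cosine similarity over issue
# parameter vectors, then a weighted average with a collaborative score.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Issue parameters (e.g., features derived from stack traces), vectorized.
issue_params = {
    "ISSUE-1": np.array([1.0, 0.0, 2.0]),
    "ISSUE-2": np.array([0.9, 0.1, 1.8]),
}
reference = np.array([1.0, 0.0, 1.9])  # parameters of the issue being compared against

collaborative_score = {"ISSUE-1": 0.74, "ISSUE-2": 0.31}  # e.g., from the first ML model
content_score = {k: cosine(v, reference) for k, v in issue_params.items()}

alpha = 0.6  # hypothetical weighting between the two pipelines
hybrid = {k: alpha * collaborative_score[k] + (1 - alpha) * content_score[k]
          for k in collaborative_score}

ranking = sorted(hybrid, key=hybrid.get, reverse=True)
print(ranking, {k: round(v, 3) for k, v in hybrid.items()})  # issue rankings by weighted score
```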
The claim 22 recites: a) Step 2A: Prong 2 analysis: “wherein the system update based on the one or more issue recommendations maintains a product or service to anticipate and preempt the at least one system bug or system error” this claim limitation recite This additional element is recited at a high level of generality such that the claim recites only the idea of a solution or outcome (system updated based on the issues recommendation). The claim fails to recite details of how the solution or outcome is accomplished and covers any solution to an identified problem with no restriction on how the result is accomplished and no description of the mechanism for accomplishing the result, does not integrate a judicial exception into a practical application or provide significantly more because this type of recitation is equivalent to the words "apply it". See Electric Power Group, LLC v. Alstom, S.A., 830 F.3d 1350, 1356, 119 USPQ2d 1739, 1743-44 (Fed. Cir. 2016); Intellectual Ventures I v. Symantec, 838 F.3d 1307, 1327, 120 USPQ2d 1353, 1366 (Fed. Cir. 2016); Internet Patents Corp. v. Active Network, Inc., 790 F.3d 1343, 1348, 115 USPQ2d 1414, 1417 (Fed. Cir. 2015). b) Step 2B analysis: “wherein the system update based on the one or more issue recommendations maintains a product or service to anticipate and preempt the at least one system bug or system error” this claim limitation recite This additional element is recited at a high level of generality such that the claim recites only the idea of a solution or outcome (system updated based on the issues recommendation). The claim fails to recite details of how the solution or outcome is accomplished and covers any solution to an identified problem with no restriction on how the result is accomplished and no description of the mechanism for accomplishing the result. Thus, limitations that amount to merely indicating a field of use or technological environment in which to apply a judicial exception do not amount to significantly more than the exception itself or provide significantly more because this type of recitation is equivalent to the words "apply it". See Electric Power Group, LLC v. Alstom, S.A., 830 F.3d 1350, 1356, 119 USPQ2d 1739, 1743-44 (Fed. Cir. 2016); Intellectual Ventures I v. Symantec, 838 F.3d 1307, 1327, 120 USPQ2d 1353, 1366 (Fed. Cir. 2016); Internet Patents Corp. v. Active Network, Inc., 790 F.3d 1343, 1348, 115 USPQ2d 1414, 1417 (Fed. Cir. 2015). Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. 
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. Claims 1, 3, 4, 13, 14, 15, 20, 21, 22 are rejected under 35 U.S.C. 103 as being unpatentable over Muddu et al. (Pub. No US20190387007– hereinafter, Muddu) in view of Cai et al (Pub. No. 20200005196 -hereinafter, Cai) and further in view of Song et al. (Pub. No. 20070118498 -hereinafter, song) and further in view of Andoni et al. (Pub. No US 20190228312– hereinafter, Andoni). Regarding claim 1, Muddu teaches a method for generating machine learning recommendations issue recommendations with error prioritization using based on log data, the method comprising (Muddu, [Abstract], “A security platform employs a variety techniques and mechanisms to detect security related anomalies and threats in a computer network environment. The security platform is “big data” driven and employs machine learning to perform security analytics. The security platform performs user/entity behavioral analytics (UEBA) to detect the security related anomalies and threats, regardless of whether such anomalies/threats were previously known. The security platform can include both real-time and batch paths/modes for detecting anomalies and threats.” And [Par.0135], “In the following description, the example of a security platform is used, for illustrative purposes only, to explain various techniques that can be implemented by the data processing system. Note, however, that the techniques introduced here are not limited in applicability to security applications, security information and event management (SIEM) applications, or to any other particular kind of application. For example, at least some of the techniques introduced here can be used for automated fraud detection and other purposes, based on machine data. Additionally, the techniques introduced here are not limited to use with security-related anomaly and threat detection; rather, the techniques can be employed with essentially any suitable behavioral analysis (e.g., fraud detection or environmental monitoring) based on machine data. In general, “machine data” can include performance data, diagnostic information and/or any of various other types of data indicative of performance or operation of equipment (e.g., an action such as upload, delete, or log-in) in a computing system, as described further below.” And Par.0348-0349], “As mentioned above, the security platform 300 detects anomalies in event data, and further detects threats based on detected anomalies. In some embodiments, the security platform also defines and detects an additional type of indicator of potential security breach, called threat indicators. Threat indicators are an intermediary level of potential security breach indicator defined within a hierarchy of security breach indicators that includes anomalies at the bottom level, threat indicators as an intermediate level, and threats at the top level.[0349] FIG. 23 is flow diagram illustrating at a high level, a processing hierarchy 2300 of detecting anomalies, identifying threat indicators, and identifying threats with the security platform 300. 
Reducing false positives in identifying security threats to the network is one goal of the security platform. To this end, flow diagram describes an overall process 2300 by which large amounts of incoming event data 2302 are processed to detect anomalies…”) Examiner’s note, anomaly detection is based on machine data (event data or login data), wherein, the anomaly detection is defined in difference levels such as anomalies at the bottom level, threat indicators as an intermediate level, and threats at the top level, that are corresponding to the error prioritization. the machine data is considered as the log data, because the machine data includes performance data, diagnostic information and/or any of various other types of data indicative of performance or operation of equipment (e.g., an action such as upload, delete, or log-in) in a computing system.): Ingesting the log data to generate an event stream for a plurality of cloud systems (Muddu, [Par.0141, 0146-0147], “[Par.0141], The security platform can be deployed at any of various locations in a network environment. In the case of a private network (e.g., a corporate intranet), at least part of the security platform can be implemented at a strategic location (e.g., a router or a gateway coupled to an administrator's computer console) that can monitor and/or control the network traffic within the private intranet. In the case of cloud-based application where an organization may rely on Internet-based computer servers for data storage and data processing, at least part of the security platform can be implemented at, for example, the cloud-based servers. Additionally, or alternatively, the security platform can be implemented in a private network but nonetheless receive/monitor events that occur on the cloud-based servers” and “[0146 In various embodiments discussed herein, security threats are examples of a type of activity to be detected. It should be understood, however, that the security platform and techniques introduced here can be applied to detect any type of unusual or anomalous activity involving data access, data transfer, network access, and network use regardless of whether security is implicated or not. [0147], In this description the term “event data” refers to machine data related to activity on a network with respect to an entity of focus, such as one or more users, one or more network nodes, one or more network segments, one or more applications, etc.). In certain embodiments, incoming event data from various data sources is evaluated in two separate data paths: (i) a real-time processing path and (ii) a batch processing path. Preferably, the evaluation of event data in these two data paths occurs concurrently. The real-time processing path is configured to continuously monitor and analyze the incoming event data (e.g., in the form of an unbounded data stream) to uncover anomalies and threats. To operate in real-time, the evaluation is performed primarily or exclusively on event data pertaining to current events contemporaneously with the data being generated by and/or received from the data source(s). 
In certain embodiments, the real-time processing path excludes historical data (i.e., stored data pertaining to past events) from its evaluation.” Examiner’s note, using the real time processing path to monitor or analyze the real time event data (log data) relates to network activity of user or entity detect the anomaly and thread of the computer device in the network environment, wherein, network environment can be the cloud system. The event data/log data relates to the network activity of user or entity, the network activity can be in the form of an unbounded data stream, therefore, the network activity of the user or entity is considered as the event stream.), wherein each of the plurality of cloud systems comprises a combination of components (Muddu, [Par.0159] FIG. 3 shows a high-level conceptual view of the processing within security platform 102 in FIG. 2. A receive data block 202 represents a logical component in which event data and other data are received from one or more data sources. In an example, receive data block 202 includes application programming interfaces (APIs) for communicating with various data sources. An ETL block 204 is the data preparation component in which data received from the receive data block 202 is pre-processed, for example, by adding data and/or metadata to the event data (a process interchangeably called decoration, enrichment or annotation herein), or otherwise prepared, to allow more effective consumption by downstream data consumers (e.g., machine learning models)..” Examiner’s note, the security platform (cloud computing system) comprises the different components (logical component and the preparation component) to process the received data.) and the plurality of cloud systems present heterogenous system architectures that comprise different mixes of the components (Muddu, [Abstract], “A security platform employs a variety techniques and mechanisms to detect security related anomalies and threats in a computer network environment.” And [Par.0141-0144], “[0141], The security platform can be deployed at any of various locations in a network environment. In the case of a private network (e.g., a corporate intranet), at least part of the security platform can be implemented at a strategic location (e.g., a router or a gateway coupled to an administrator's computer console) that can monitor and/or control the network traffic within the private intranet. In the case of cloud-based application where an organization may rely on Internet-based computer servers for data storage and data processing, at least part of the security platform can be implemented at, for example, the cloud-based servers. Additionally or alternatively, the security platform can be implemented in a private network but nonetheless receive/monitor events that occur on the cloud-based servers. In some embodiments, the security platform can monitor a hybrid of both intranet and cloud-based network traffic. More details on ways to deploy the security platform and its detailed functionality are discussed below...[0144] The security platform may be cloud-based and may employ big data techniques to process a vast quantity of high data rate information in a highly scalable manner. In certain embodiments, the security platform may be hosted in the cloud and provided as a service. In certain embodiments, the security platform is provided as a platform-as-a-service (PaaS). 
PaaS is a category of cloud computing services enabling customers to develop, run and manage Web applications without the complexity of building and maintaining the infrastructure typically associated with developing and launching such applications. PaaS can be delivered in at least two ways, namely: (i) as a public cloud service from a provider, wherein the consumer controls software deployment and configuration settings and the provider provides the networks, servers, storage devices and other services to host the consumer's application, or (ii) as software installed in private data centers or public infrastructure and managed by internal information technology (IT) departments.” Examiner’s note, the cloud system (security platform) represents the networking system (heterogenous system architectures), wherein, the networking system comprises the different mixes of the components, as it can be seen at [Par.0159] FIG. 3 shows a high-level conceptual view of the processing within security platform 102 in FIG. 2. A receive data block 202 represents a logical component in which event data and other data are received from one or more data sources. In an example, receive data block 202 includes application programming interfaces (APIs) for communicating with various data sources. An ETL block 204 is the data preparation component in which data received from the receive data block 202 is pre-processed, for example, by adding data and/or metadata to the event data (a process interchangeably called decoration, enrichment or annotation herein), or otherwise prepared, to allow more effective consumption by downstream data consumers (e.g., machine learning models). ), processing the generated log data event streams to generate a data set (Muddu, [0146-0147] “[0146 In various embodiments discussed herein, security threats are examples of a type of activity to be detected. It should be understood, however, that the security platform and techniques introduced here can be applied to detect any type of unusual or anomalous activity involving data access, data transfer, network access, and network use regardless of whether security is implicated or not. [0147], In this description the term “event data” refers to machine data related to activity on a network with respect to an entity of focus, such as one or more users, one or more network nodes, one or more network segments, one or more applications, etc.). In certain embodiments, incoming event data from various data sources is evaluated in two separate data paths: (i) a real-time processing path and (ii) a batch processing path. Preferably, the evaluation of event data in these two data paths occurs concurrently. The real-time processing path is configured to continuously monitor and analyze the incoming event data (e.g., in the form of an unbounded data stream) to uncover anomalies and threats. To operate in real-time, the evaluation is performed primarily or exclusively on event data pertaining to current events contemporaneously with the data being generated by and/or received from the data source(s). In certain embodiments, the real-time processing path excludes historical data (i.e., stored data pertaining to past events) from its evaluation.” And [Par.0165-0166], “The received data is then provided via a channel 314 to a semantic processor (or data preparation stage) 316, which in certain embodiments performs, among other functions, ETL functions. 
In particular, the semantic processor 316 may perform parsing of the incoming event data, enrichment (also called decoration or annotation) of the event data with certain information, and optionally, filtering the event data. The semantic processor 316 introduced here is particularly useful when data received from the various data sources through data receiver 310 is in different formats, in which case the semantic processor 316 can prepare the data for more efficient downstream utilization (including, for example, by an event processing engine) while avoiding binding the unstructured data into any particular type of data structure.[0166] A parser in the semantic processor 316 may parse the various fields of received event data representing an event (e.g., a record related to a log-in event). An identity resolution (IR) component (not shown in FIG. 4) may be optionally provided within the semantic processor 316 to correlate IP addresses with users, for example. This correlation permits the security platform to make certain assumptions about the relationship between an IP address and a user so that, if any event data arrives from that IP address in the future, an assumption regarding which user is associated with that IP address may be made. In some implementations, the event data pertaining to that IP address may be annotated with the identity of the user. Technology used to implement the data preparation functions of the semantic processor 316 may include Redis™.” Examiner’s note, the event data represents the network activity (e.g. login event) of the user is processed by the semantic processor to generate a particular data format (dataset), wherein, the generated network activity is considered as the generated event stream.), wherein the data set comprises issue labels (Muddu, [Par.0166-0169], “A parser in the semantic processor 316 may parse the various fields of received event data representing an event (e.g., a record related to a log-in event). An identity resolution (IR) component (not shown in FIG. 4) may be optionally provided within the semantic processor 316 to correlate IP addresses with users, for example...[0167] An optional filter attribution block 322 in the semantic processor 316 removes certain pre-defined events. The attribution filter 322 in the semantic processor 316 may further remove events that need not be processed by the security platform. An example of such an event is an internal data transfer that occurs between two IP addresses as part of a regular file backup. In some embodiments, the functions of semantic processor 316 are configurable by a configuration file to permit easy updating or adjusting. Examples of configurable properties of the semantic processor 316 include how to (i) parse events, (ii) correlate between users and IP address, and/or (iii) correlate between one attribute with another attribute in the event data or an external attribute. The configuration file can also adjust filter parameters and other parameters in the semantic processor 316...[0169] The real-time processing path includes an analysis module 330 that receives data from the distribution block 320. The analysis module 330 analyzes the data in real-time to detect anomalies, threat indicators, and threats. In certain embodiments, the aforementioned Storm™ platform may be employed to implement the analysis module 330. 
In other embodiments, the analysis module could be implemented by using Apache Spark Streaming.” Examiner’s note, the anomaly and thread of the network activity of the entity or user (user, device, application or group of the users) are detected based on the generated dataset (pre-processed event data represents the event login of the user) is generated by semantic processor, therefore, the anomalies, threat are considered as the issue labels.), issues experienced by the plurality of cloud systems (Muddu, [Par.0137], “Introduced here, therefore, is a data processing and analytics system (and, as a particular example, a security platform) that employs a variety of techniques and mechanisms for anomalous activity detection in a networked environment in ways that are more insightful and scalable than the conventional techniques. As is described in more detail below, the security platform is “big data” driven and employs a number of machine learning mechanisms to perform security analytics. More specifically, the security platform introduced here can perform user behavioral analytics (UBA), or more generally user/entity behavioral analytics (UEBA), to detect the security related anomalies and threats, regardless of whether such anomalies and threats are previously known or unknown. Additionally, by presenting analytical results scored with risk ratings and supporting evidence, the security platform can enable network security administrators or analysts to respond to a detected anomaly or threat, and to take action promptly. The behavioral analytics techniques introduced here enable the security platform to detect advanced, hidden and insider threats.” Examiner’s note, the security platform detects the issues (threat or abnormal activity of the network environment), and the security platform is employed in the could computing system, [0144] The security platform may be cloud-based and may employ big data techniques to process a vast quantity of high data rate information in a highly scalable manner. In certain embodiments, the security platform may be hosted in the cloud and provided as a service.” therefore, the issues are detected/experienced by the security platform (could computing system)), extracting features from the generated data set (Muddu, [Par.0166-0167], “A parser in the semantic processor 316 may parse the various fields of received event data representing an event (e.g., a record related to a log-in event). An identity resolution (IR) component (not shown in FIG. 4) may be optionally provided within the semantic processor 316 to correlate IP addresses with users, for example. This correlation permits the security platform to make certain assumptions about the relationship between an IP address and a user so that, if any event data arrives from that IP address in the future, an assumption regarding which user is associated with that IP address may be made. In some implementations, the event data pertaining to that IP address may be annotated with the identity of the user. Technology used to implement the data preparation functions of the semantic processor 316 may include Redis™.[0167] An optional filter attribution block 322 in the semantic processor 316 removes certain pre-defined events. The attribution filter 322 in the semantic processor 316 may further remove events that need not be processed by the security platform. An example of such an event is an internal data transfer that occurs between two IP addresses as part of a regular file backup. 
In some embodiments, the functions of semantic processor 316 are configurable by a configuration file to permit easy updating or adjusting. Examples of configurable properties of the semantic processor 316 include how to (i) parse events, (ii) correlate between users and IP address, and/or (iii) correlate between one attribute with another attribute in the event data or an external attribute. The configuration file can also adjust filter parameters and other parameters in the semantic processor 316.” Examiner’s note, the features (IP address, particular user, the correlate between the IP address and the user) is extracted from the pre-processed event data, pre-processed event data is generated/pre-processed by the semantic processor. ); and generating, based on the extracted features and the generated data set, issue recommendations using a hybrid plurality of machine learning models (Muddu, [Par.0231-0233], “Accordingly, the security platform introduced here can perform identity resolution based on the facts. The identity resolution module 812 can gain the knowledge by observing the system environment (e.g., based on authentication logs), thereby building the intelligence to make an educated identity resolution determination. That is, the identity resolution module 812 is able to develop user identity intelligence specific and relevant to the system's environment without any explicit user identity information., [Par.0276], “The ML-based CEP engine disclosed herein is advantageous in comparison to conventional CEP engines at least because of its ability to recognize unknown patterns and to incorporate historical data without overburdening the distributed computation system by use of machine learning models. Because the ML-based CEP engine can utilize unsupervised machine learning models, it can identify entity behaviors and event patterns that are not previously known to security experts. In some embodiments, the ML-based CEP engine can also utilize supervised, semi-supervised, and deep machine learning models.” [0232] To facilitate this fact-based identity resolution functionality in the security platform, the identity resolution module 812 can utilize a machine learning model to generate and track a probability of association between a user and a machine identifier. Specifically, after the entities in event data that represents an event are extracted (e.g., by the field mapper 808), the identity resolution module 812 can identify whether the event data includes a user identifier and/or a machine identifier, and can create or update the probability of association accordingly…” and [Par.0278], “The outputs of the machine learning models can be an anomaly, a threat indicator, or a threat. The ML-based CEP engine can present these outputs through one or more output devices, such as a display or a speaker.” Examiner’s note, the machine learning model can identify anomaly or threat based on the user activity or the IP address ( extracted feature) of the user’s devices in the networking system.), that comprise a collaborative pipeline and a content pipeline (Muddu [Par.0278-0279], “The machine learning models enable the ML-based CEP engine to perform many types of analysis, from various event data sources in various contextual settings, and with various resolutions and granularity levels. 
For example, a machine learning model in the ML-based CEP engine can perform entity-specific behavioral analysis, time series analysis of event sequences, graph correlation analysis of entity activities, peer group analysis of entities, or any combination thereof… Examples of entity-specific behavioral analysis include hierarchical temporal memory processes that employ modified probabilistic suffix trees (PST), collaborative filtering, content-based recommendation analysis, statistical matches in whitelists and blacklists using text models, entropy/randomness/n-gram analysis for uniform resource locators (e.g., URLs), other network resource locators and domains (AGDs), rare categorical feature/association analysis, identity resolution models for entities, land speed violation/geo location analysis, or any combination thereof.”). the collaborative pipeline comprise a first ML model of the machine learning models, that analyzes features of the heterogenous system architectures to generate collaborative recommendations (Muddu, [Par.0231-0233], “Accordingly, the security platform introduced here can perform identity resolution based on the facts. The identity resolution module 812 can gain the knowledge by observing the system environment (e.g., based on authentication logs), thereby building the intelligence to make an educated identity resolution determination. That is, the identity resolution module 812 is able to develop user identity intelligence specific and relevant to the system's environment without any explicit user identity information. [0232] To facilitate this fact-based identity resolution functionality in the security platform, the identity resolution module 812 can utilize a machine learning model to generate and track a probability of association between a user and a machine identifier. Specifically, after the entities in event data that represents an event are extracted (e.g., by the field mapper 808), the identity resolution module 812 can identify whether the event data includes a user identifier and/or a machine identifier, and can create or update the probability of association accordingly…” and [Par.0278], “The outputs of the machine learning models can be an anomaly, a threat indicator, or a threat. The ML-based CEP engine can present these outputs through one or more output devices, such as a display or a speaker.” And [Par.0639], “The analyst recommendation 7106 provides information guiding the user to take action based on the raised anomaly associated with entity 7102. For example, the domain “www.evil.com” has a communication feature score indicative of a high risk to network security due to ongoing unblocked communications. The recommendation 7106, accordingly lists this as a critical priority due to the ongoing and unblocked nature of the communications. In some embodiments, the analyst recommendation 7106 is provided by a human security analyst based on an assessment of the feature scores associated with the entity. In some embodiments, the analyst recommendation is automatically generated by the system based on the feature scores and or the anomaly score, for example through the use of established network security rules.” Examiner’s note, the machine learning model can identify anomaly or threat based on the user activity or the IP address (extracted feature) of the user’s devices in the networking system, such as the machine learning identifies/compares whether the event data includes a user identifier and/or a machine identifier (IP address). 
Muddu further teaches the recommendation action based on a classified anomaly of the machine system, as it can be seen at [Par.0639].”), the first ML model being trained using iterative factorization and the issue data structure (Muddu, [Par.0630-0632], “[0630], In some embodiments the anomaly model includes model processing logic defining a process for assigning a feature score based on the plurality of feature scores and a model state defining a set of parameters for applying the model processing logic. In some embodiments, the models used to generate the anomaly scores are machine-learning (both supervised and unsupervised) models. For example, a supervised machine learning model may use training examples developed by network security experts to more effectively generate an anomaly score based on the plurality of feature scores. In some embodiments, generating the anomaly score may include an ensemble learning process in which multiple different types of machine learning models are applied to processed the plurality of feature scores. In some embodiments, the anomaly score is a numerical value in a set range. For example, processing the plurality of feature scores according to an anomaly model may yield a value between 0 and 10 with 0 being the least anomalous (or risky) and 10 being the most anomalous (or risky)… In some embodiments, generating the anomaly score may simply involve a calculating a weighted linear combination of feature scores. Recall that an entity profile including a plurality of feature scores may be represented as a feature vector, f = {f_1, f_2, f_3, . . . , f_n}. In such an embodiment, the anomaly score may simply be represented as: anomaly score = Σ_i (w_i × f_i), wherein w_i is a weighting factor applied to each feature score f_i and wherein the anomaly score is simply the summation of each of the plurality of feature scores with the weighting factor.” Examiner’s note, the machine learning model generates the plurality of feature scores and combines them using weighting factors, such as by calculating a weighted linear combination of feature scores (a minimal illustrative sketch of this weighted scoring is provided below, following the treatment of claims 13 and 14), to determine the status of the machine system (anomaly, threat indicator, or threat), wherein the data represents the status of the network security issue of the computer system, as it can be seen at [Par.0185], “Baseline profiles can be continuously updated (whether in real-time as event data streams in, or in batch according to a predefined schedule) in response to received event data, i.e., they can be updated dynamically and/or adaptively based on event data. If the human user 604 begins to access source code server 610 more frequently in support of his work, for example, and his accessing of source code server 610 has been judged to be legitimate by the security platform 300 or a network security administrator (i.e., the anomalies/threats detected upon behavior change have been resolved and deemed to be legitimate activities), his baseline profile 614 is updated to reflect the updated “normal” behavior for the human user 604.”), wherein the first ML model is generated by forming a first issues matrix comprising features, logged errors and corresponding error frequency data (Muddu, [Par.0316-0317], “FIG. 21 is a flow diagram illustrating a method 2100 to execute a model deliberation process thread, in accordance with various embodiments. A computation worker executes the model deliberation process thread.
In some embodiments, the computation worker execute multiple model training process threads associated with a single model type. In some embodiments, the computation worker execute multiple model-specific process threads associated with a single model type. In some embodiments, the computation worker execute multiple model-specific process threads associated with different model types. At step 2102, the model deliberation process thread processes the most recent time slice from the group-specific data stream to compute a score associated with the most recent time slice. The most recent time slice can correspond to an event or a sequence of event observed at the target computer network. In some embodiments, the group-specific data stream used by the model deliberation process thread is also used by a corresponding model training process thread for the same entity. That is, the model training process thread can train a model state of an entity-specific machine learning model by processing a previous time slice of the group-specific data stream. The model execution engine 1808 can initiate the model deliberation process thread based on the model state while the model training process thread continues to create new versions (e.g., new model states). In some embodiments, the model deliberation process thread can reconfigure to an updated model state without pausing or restarting.[0317] At step 2104, the model deliberation process thread generates a security-related conclusion based on the score. The security-related conclusion can identify the event or the sequence of events corresponding to the time slice as a security-related anomaly, threat indicator or threat. In one example, the model deliberation process compares the score against a constant threshold and makes the security-related conclusion based on the comparison. In another example, the model deliberation process compares the score against a dynamically updated baseline (e.g., statistical baseline) and makes the security-related conclusion based on the comparison.” Examiner’s note, the machine learning model generates the score based on the event data to determine whether the entity is associated with thread indicator or thread. The anomaly is classified as the login error and unusual activity time, as it can be seen at [Par.0447], “In one aspect of the techniques introduced here, the event data is analyzed, via various machine learning techniques as disclosed herein, to identify anomalies from expected or authorized network activity or behavior. An “anomaly” in the context of this description is a detected fact, i.e., it is objective information, whereas a “threat” (discussed further below) is an interpretation or conclusion that is based on one or more detected anomalies. Anomalies can be classified into various types. As examples, anomalies can be alarms, blacklisted applications/domains/IP addresses, domain name anomalies, excessive uploads or downloads, website attacks, land speed violations, machine generated beacons, login errors, multiple outgoing connections, unusual activity time/sequence/file access/network activity, etc. 
Anomalies typically occur at a particular date and time and involve one or more participants, which can include both users and devices.” and transforming the first issues matrix into two or more sparse matrices comprising weights of how each error relates to each feature (Muddu, [Par.0360], “Process 2500 continues at step 2506 with assigning an anomaly score based on the processing of the event data 2302 through the anomaly model. Calculation of the anomaly score is done by the processing logic contained within the anomaly model and represents a quantification of a degree to which the processed event data is associated with anomalous activity on the network. In some embodiments, the anomaly score is a value in a specified range. For example, the resulting anomaly score may be a value between 0 and 10, with 0 being the least anomalous and 10 being the most anomalous.” Examiner’s note, the machine learning model generates the anomaly score to determine the anomaly level of the entity based on a comparison of the score; therefore, the anomaly score levels are considered as the sparse matrices that comprise the weights in the range from 0 to 10. Features of the issues experienced by the plurality of cloud systems to generate the content recommendations (Muddu, [Par.0137], “Introduced here, therefore, is a data processing and analytics system (and, as a particular example, a security platform) that employs a variety of techniques and mechanisms for anomalous activity detection in a networked environment in ways that are more insightful and scalable than the conventional techniques. As is described in more detail below, the security platform is “big data” driven and employs a number of machine learning mechanisms to perform security analytics. More specifically, the security platform introduced here can perform user behavioral analytics (UBA), or more generally user/entity behavioral analytics (UEBA), to detect the security related anomalies and threats, regardless of whether such anomalies and threats are previously known or unknown. Additionally, by presenting analytical results scored with risk ratings and supporting evidence, the security platform can enable network security administrators or analysts to respond to a detected anomaly or threat, and to take action promptly. The behavioral analytics techniques introduced here enable the security platform to detect advanced, hidden and insider threats.” Examiner’s note, the security platform detects the issues (threat or abnormal activity of the network environment), and the security platform is employed in the cloud computing system, [0144] The security platform may be cloud-based and may employ big data techniques to process a vast quantity of high data rate information in a highly scalable manner.
In certain embodiments, the security platform may be hosted in the cloud and provided as a service.” therefore, the issues are detected/experienced by the security platform (could computing system), the generating of the machine learning model comprise the content-based recommendation analysis, [Par.0279].), generating the issue recommendations, (Muddu, [Abstract], “A security platform employs a variety techniques and mechanisms to detect security related anomalies and threats in a computer network environment.” And the system update based on one or more of the issue recommendations is used to mitigate at least one system bug or system error at a first cloud system (Muddu, [Par.0150-0151], “This is a consequence of the need to process the voluminous incoming event data quickly to obtain actionable threat information to prevent imminent harm.[0151] The anomalies and threats detected by the real-time processing path may be employed to automatically trigger an action, such as stopping the intrusion, shutting down network access, locking out users, preventing information theft or information transfer, shutting down software and or hardware processes, and the like. In certain embodiments, the discovered anomalies and threats may be presented to a network operator (e.g., a network security administrator or analyst) for decision. As an alternative or in addition to automatically taking action based on the discovered anomalies and threats, the decisions by the user (e.g., that the anomalies and threats are correctly diagnosed, or that the discovered anomalies and threats are false positives) can then be provided as feedback data in order to update and improve the models.” Examiner’s note, the system will be shut down if the anomaly system is detected. the first cloud system comprising one of the plurality of cloud systems or a new cloud system (Muddu, [Par.0141], “The security platform can be deployed at any of various locations in a network environment. In the case of a private network (e.g., a corporate intranet), at least part of the security platform can be implemented at a strategic location (e.g., a router or a gateway coupled to an administrator's computer console) that can monitor and/or control the network traffic within the private intranet. In the case of cloud-based application where an organization may rely on Internet-based computer servers for data storage and data processing, at least part of the security platform can be implemented at, for example, the cloud-based servers. Additionally or alternatively, the security platform can be implemented in a private network but nonetheless receive/monitor events that occur on the cloud-based servers. In some embodiments, the security platform can monitor a hybrid of both intranet and cloud-based network traffic. More details on ways to deploy the security platform and its detailed functionality are discussed below.” ). wherein the generated issue recommendations comprise issue rankings based on a weighted score (Muddu, [Par.0360], “Process 2500 continues at step 2506 with assigning an anomaly score based on the processing of the event data 2302 through the anomaly model. Calculation of the anomaly score is done by the processing logic contained within the anomaly model and represents a quantification of a degree to which the processed event data is associated with anomalous activity on the network. In some embodiments, the anomaly score is a value in a specified range. 
For example, the resulting anomaly score may be a value between 0 and 10, with 0 being the least anomalous and 10 being the most anomalous.” Examiner’s note, the anomaly score is ranked between 0 and 10, which corresponds to the issue rankings based on the weighted score.) However, Muddu does not teach the first model comprises a first data structure and a second data structure, the iterative factorization training weights of the first and second data structures to achieve an improved approximation of the issue data structure, the content pipeline comprises a second model of the machine learning models that is trained to analyze features, wherein the second ML model comprises a similarity metric and/or a kernel, and the collaborative recommendations and the content recommendations are combined to generate the issue recommendations. On the other hand, Cai teaches the content pipeline comprises a second model of the machine learning models that is trained to analyze features, wherein the second ML model comprises a similarity metric and/or a kernel (Cai, [Par.0066-0067], "Referring back to FIG. 2, in step 212, a recommendation model is generated using a supervised machine-learning algorithm that receives the interaction training data and user vectors as inputs. … [0067] In implementations, model generator 316 may therefore implement a supervised machine-learning algorithm to analyze combined user vectors 342 to determine one or more content recommendations in generating recommendation model 318. Such content recommendations may be compared with actual interaction behavior (e.g., interaction training data 328) to train (or retrain) recommendation model 318”, Examiner’s note, content recommendations are compared with actual interaction behavior to retrain the machine learning model; therefore, the retrained machine learning model (second model) is configured to compare the actual interaction behavior (features). The comparison metric is considered as the similarity metric.). Muddu and Cai are analogous art because they have the same field of endeavor of generating the recommendation by using the machine learning model. Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the collaborative recommendations generated using the collaborative pipeline, wherein the collaborative pipeline comprises a first model of the machine learning models that analyzes features of the heterogenous system architectures, the first model being trained using iterative factorization, and the issues experienced by the plurality of cloud systems, as taught by Muddu, to include that the content pipeline comprises a second model of the machine learning models trained to analyze features, as taught by Cai. The modification would have been obvious because one of ordinary skill in the art would be motivated to reduce the computing resources (Cai, [Par.0026], “In addition to enhancing a graphical user interface, the two-phase model building techniques described herein further enable a reduction of the computing resources for the system responsible for determining or providing content recommendations.
For instance, because the system determining or providing the content recommendations may generate a two-phase recommendation model that is configured to generate content recommendations for a period of time (e.g., a week), the frequency at which the model is generated and trained may be decreased, thereby reducing the computing resources needed.”). However, neither Muddu nor Cai teaches the first model comprises a first data structure and a second data structure, the iterative factorization training weights of the first and second data structures to achieve an improved approximation of the issue data structure, the collaborative recommendations and the content recommendations are combined to generate the issue recommendations. On the other hand, song teaches the collaborative recommendations and the content recommendations are combined to generate the recommendations (Song, [Par.0043-0045], “[0043], Content-based filtering characterizes what a user likes, based on the past history of the user activity and the classification of the items…[0044], Collaborative filtering approaches may infer a user's interests/preferences from that of the other people with similar tastes by considering what the user's is accessing or selecting and concluding that based on that selection the user should like what other people who previously accessed or selected the same thing may have also accessed or selected…[0045], Several recommendation systems use a hybrid approach by combining collaborative and content-based methods, which helps to avoid the above mentioned limitations of content-based and collaborative systems. The content-based and collaborative systems can be combined by including two separated recommenders, adding content-based characteristics to Collaborative Models, adding collaborative-based characteristics to Content Models, or building a unified model.”) Muddu, Cai and song are analogous in arts because they have the same filed of endeavor of generating the recommendation by using the machine learning model. Accordingly, it would have been obvious to one of the ordinary skills in the art before the effective filing date of the claimed invention to modify the combined teaching of Muddu and Cai, of the collaborative recommendations are generated using the collaborative pipeline, the collaborative pipeline is configured to compare, via a first model of the machine learning models, features of the heterogenous system architectures, the first model being trained using iterative factorization, and the issues experienced by the plurality of cloud systems, and the content recommendations are generating using the content pipeline, the content pipeline is configured to compare, via a second model of the machine learning models, features, as set forth above, to include the collaborative recommendations and the content recommendations are combined to generate the recommendations, as taught by Song. The modification would have been obvious because one of the ordinary skills in art would be motivated to improve the recommendation performance , (Song, [Par.0095], “If the clustering method is perfect, each user will get the ideal recommendation by leveraging the users who have most similar interests with him/her. However, it may be difficult, if not impossible, to get the perfect clustering results. Also because of sparsity of the data, the clustering results may not be accurate and reliable. 
Thus, it may be better to leverage information of both formal and informal communities to improve the recommendation performance.”). However, Muddu, Cai and Song do not teach the first model comprises a first data structure and a second data structure, the iterative factorization training weights of the first and second data structures to achieve an improved approximation of the issue data structure, On the other hand, Andoni teaches the first model comprises a first data structure and a second data structure (Andoni, [Par.0014], “Turning now to FIG. 1A, the first neural network 110 may be trained, in an unsupervised fashion, to perform clustering. For example, the first neural network 110 may receive first input data 101. The first input data 101 may be part of a larger data set and may include first features 102, as shown in FIG. 1B. The first features 102 may include continuous features (e.g., real numbers), categorical features (e.g., enumerated values, true/false values, etc.), and/or time-series data. In a particular aspect, enumerated values with more than two possibilities are converted into binary one-hot encoded data. To illustrate, if the possible values for a variable are “cat,” “dog,” or “sheep,” the variable is converted into a 3-bit value where 100 represents “cat,” 010 represents “dog,” and 001 represents “sheep.” In the illustrated example, the first features include n features having values A, B, C, N, where n is an integer greater than zero.” Examiner’s note, the first neural network comprise the first input data includes the various type of features (first and second structure data), the iterative factorization training weights of the first and second data structures to achieve an improved approximation of the issue data structure (Andoni, [Par.0033-0034], “The calculator/detector 130 may initiate adjustment at one or more of the first neural network 110, the second neural network(s) 120, or the third neural network 170, based on the aggregate loss L. For example, link weights, bias functions, bias values, etc. may be modified via backpropagation to minimize the aggregate loss L using stochastic gradient descent. In some aspects, the amount of adjustment performed during each iteration of backpropagation is based on learning rate. In one example, the learning rate, lr, is initially based on the following heuristic:…” Examiner’s note, the weights of the neural network are modified via backpropagation to minimize the loss.) Muddu, Cai, song and Andoni are analogous in arts because they have the same filed of endeavor of generating the issue detection by using the machine learning model. Accordingly, it would have been obvious to one of the ordinary skills in the art before the effective filing date of the claimed invention to modify the training of the first machine learning model by using iterative factorization and the issue data structure, as taught by Muddu, to include the first model comprises a first data structure and a second data structure, the iterative factorization training weights of the first and second data structures to achieve an improved approximation of the issue data structure, as taught by Andoni. The modification would have been obvious because one of the ordinary skills in art would be motivated to minimize the loss, (Andoni, [Par.0033], “The calculator/detector 130 may initiate adjustment at one or more of the first neural network 110, the second neural network(s) 120, or the third neural network 170, based on the aggregate loss L. 
For example, link weights, bias functions, bias values, etc. may be modified via backpropagation to minimize the aggregate loss L using stochastic gradient descent.”). Regarding claim 3, Muddu teaches the method of claim 1, wherein at least a portion of the heterogenous system architectures comprise independent cloud systems that are hosted in different cloud environments for different cloud customers (Muddu, [Par.0155-0156], “FIG. 2 illustrates a high-level view of an example security platform 102. In FIG. 2, a cloud computing infrastructure is shown, represented in part by a virtualization layer 104. Various cloud computing operating systems or platforms, such as OpenStack™, VMware™, Amazon Web Services™, or Google Cloud™ may be employed in virtualization layer 104 to create public clouds or private clouds. Generally speaking, these cloud computing operating systems and others permit processing and storage to be implemented on top of a set of shared resources. Among its many advantages, cloud computing permits or facilitates redundancy, fault tolerance, easy scalability, low implementation cost and freedom from geographic restrictions. The concept of cloud computing and the various cloud computing operating systems or infrastructures are known.[0156] Above the virtualization layer 104, a software framework layer 106 implements the software services executing on the virtualization layer 104. Examples of such software services include open-source software such as Apache Hadoop™, Apache Spark™, and Apache Storm™ Apache Hadoop™ is an open-source software framework for distributed storage and distributed processing of very large data sets on computer clusters built from commodity hardware. Apache Storm™ is a distributed real-time computation engine that processes data stream record-by-record. Apache Spark™ is an large-scale data processing engine that collects events together for processing in batches. These are only examples of software that may be employed to implement the software framework layer 106.”). Regarding the claim 4, Muddu teaches the method of claim 1, wherein each issue label is defined based on a distinct sequence of log data from the event streams, the distinct sequences being representative of the issue labels (Muddu, [Par.0186-0188], “In certain embodiments, anomalies and threats are detected by comparing incoming event data (e.g., a series of events) against the baseline profile for an entity to which the event data relates (e.g., a user, an application, a network node or group of nodes, a software system, data files, etc.)...[0188], In general, machine data can include performance data, diagnostic information and/or any of various other types of data indicative of performance or operation of equipment (e.g., an action such as upload, delete, or log-in) in a computing system. Such data can be analyzed to diagnose equipment performance problems, monitor user actions and interactions, and to derive other insights like user behavior baseline, anomalies and threats.” Examiner’s note, the anomalies detected based on the comparing in coming event data/machine data (log data) based on the time series associates with the user or application or network node.). The claim 13 is rejected for the same reason as the claim 1, since these claims recite the same limitation. Regarding claim 14 is rejected for the same reason as the claim 3, since these claims recite the same limitation. 
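For illustration of the weighted linear combination of feature scores described in Muddu [Par.0630] and [Par.0360] and mapped above to the claimed weighted-score issue rankings, the following is a minimal sketch. It is not code from the application or from any cited reference; the feature names, weights, values, and IP addresses are hypothetical and chosen only to show how a weighted sum yields a ranking.

```python
# Minimal illustration of a weighted linear combination of feature scores
# (anomaly score = sum of w_i * f_i) and of ranking entities by that score.
# All feature names, weights, and values are hypothetical.

def anomaly_score(features, weights):
    """Weighted sum of per-feature scores."""
    return sum(weights.get(name, 0.0) * value for name, value in features.items())


def rank_by_score(profiles, weights):
    """Rank entities from most to least anomalous by weighted score."""
    scored = {entity: anomaly_score(f, weights) for entity, f in profiles.items()}
    return sorted(scored.items(), key=lambda item: item[1], reverse=True)


if __name__ == "__main__":
    weights = {"login_errors": 0.5, "unusual_activity_time": 0.3, "beaconing": 0.2}
    profiles = {
        "10.0.0.5": {"login_errors": 8.0, "unusual_activity_time": 2.0, "beaconing": 1.0},
        "10.0.0.9": {"login_errors": 1.0, "unusual_activity_time": 9.0, "beaconing": 0.0},
    }
    for entity, score in rank_by_score(profiles, weights):
        print(f"{entity}: {score:.2f}")   # 10.0.0.5: 4.80, then 10.0.0.9: 3.20
```

The sketch only demonstrates the scoring arithmetic quoted from Muddu; whether such a score range (for example, 0 to 10) constitutes the claimed sparse matrices of weights remains the point of dispute between the examiner and the applicant.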
Claim 15 is rejected for the same reason as claim 4, since these claims recite the same limitation. Claim 20 is rejected for the same reason as claim 1, since these claims recite the same limitation. Additionally, Muddu teaches a non-transitory computer readable medium having instructions stored thereon that, when executed by a processor, cause the processor to generate machine learning recommendations using log data (Muddu, [Par.0747], “Embodiments of the techniques introduced here may be implemented, at least in part, by a computer program product which may include a non-transitory machine-readable medium having stored thereon instructions that may be used to program/configure a computer or other electronic device to perform some or all of the operations described above.”). Regarding claim 21, Muddu teaches generating the content recommendations by analyzing at least a stack trace for one or more errors related to the issues experienced (Muddu, [Par.0278], “The machine learning models enable the ML-based CEP engine to perform many types of analysis, from various event data sources in various contextual settings, and with various resolutions and granularity levels. For example, a machine learning model in the ML-based CEP engine can perform entity-specific behavioral analysis, time series analysis of event sequences, graph correlation analysis of entity activities, peer group analysis of entities, or any combination thereof. For example, the data sources of the raw event data can include network equipment, application service servers, messaging servers, end-user devices, or other computing device capable of recording machine data. The contextual settings can involve scenarios such as specific networking scenarios, user login scenarios, file access scenarios, application execution scenarios, or any combination thereof. For example, an anomaly detected by the machine learning models in the ML-based CEP engine can correspond to an event, a sequence of events, an entity, a group of entities, or any combination thereof. The outputs of the machine learning models can be an anomaly, a threat indicator, or a threat. The ML-based CEP engine can present these outputs through one or more output devices, such as a display or a speaker.”). However, Muddu does not teach the second model. On the other hand, Cai teaches the second model (Cai, [Par.0066-0067], "Referring back to FIG. 2, in step 212, a recommendation model is generated using a supervised machine-learning algorithm that receives the interaction training data and user vectors as inputs. … [0067] In implementations, model generator 316 may therefore implement a supervised machine-learning algorithm to analyze combined user vectors 342 to determine one or more content recommendations in generating recommendation model 318. Such content recommendations may be compared with actual interaction behavior (e.g., interaction training data 328) to train (or retrain) recommendation model 318”, Examiner’s note, content recommendations are compared with actual interaction behavior to retrain the machine learning model; therefore, the retrained machine learning model (second model) is configured to compare the actual interaction behavior (features). Muddu and Cai are analogous art because they have the same field of endeavor of generating the recommendation by using the machine learning model.
Accordingly, it would have been obvious to one of the ordinary skills in the art before the effective filing date of the claimed invention to modify the generates the content recommendations by analyzing at least a stack trace for one or more errors related to the issues experienced, as taught by Muddu, to include the second model, as taught by Cai. The modification would have been obvious because one of the ordinary skills in art would be motivated to reduce the computing resource, (Cai, [Par.0026], “In addition to enhancing a graphical user interface, the two-phase model building techniques described herein further enable a reduction of the computing resources for the system responsible for determining or providing content recommendations. For instance, because the system determining or providing the content recommendations may generate a two-phase recommendation model that is configured to generate content recommendations for a period of time (e.g., a week), the frequency at which the model is generated and trained may be decreased, thereby reducing the computing resources needed.”). Regarding claim 22, Muddu teaches the method of claim 1, wherein the system update based on the one or more issue recommendations maintains a product or service to anticipate and preempt the at least one system bug or system error (Muddu, [Par.0447], “In one aspect of the techniques introduced here, the event data is analyzed, via various machine learning techniques as disclosed herein, to identify anomalies from expected or authorized network activity or behavior. An “anomaly” in the context of this description is a detected fact, i.e., it is objective information, whereas a “threat” (discussed further below) is an interpretation or conclusion that is based on one or more detected anomalies. Anomalies can be classified into various types. As examples, anomalies can be alarms, blacklisted applications/domains/IP addresses, domain name anomalies, excessive uploads or downloads, website attacks, land speed violations, machine generated beacons, login errors, multiple outgoing connections, unusual activity time/sequence/file access/network activity, etc. Anomalies typically occur at a particular date and time and involve one or more participants, which can include both users and devices.”). Claims 5, 6, 16 are rejected under 35 U.S.C. 103 as being unpatentable over Muddu et al. (Pub. No US20190387007– hereinafter, Muddu) in view of Cai et al (Pub. No. 20200005196 -hereinafter, Cai) and further in view of Song et al. (Pub. No. 20070118498 -hereinafter, song) and further in view of Andoni et al. (Pub. No US 20190228312– hereinafter, Andoni) and further in view of Levin et al (Pub. No. 20180152465-hereinafter, Levin) . Regarding claim 5, Muddu teaches the processing the generated event streams to generate a data set (Muddu, [0146-0147] “[0146 In various embodiments discussed herein, security threats are examples of a type of activity to be detected. It should be understood, however, that the security platform and techniques introduced here can be applied to detect any type of unusual or anomalous activity involving data access, data transfer, network access, and network use regardless of whether security is implicated or not. [0147], In this description the term “event data” refers to machine data related to activity on a network with respect to an entity of focus, such as one or more users, one or more network nodes, one or more network segments, one or more applications, etc.). 
In certain embodiments, incoming event data from various data sources is evaluated in two separate data paths: (i) a real-time processing path and (ii) a batch processing path. Preferably, the evaluation of event data in these two data paths occurs concurrently. The real-time processing path is configured to continuously monitor and analyze the incoming event data (e.g., in the form of an unbounded data stream) to uncover anomalies and threats. To operate in real-time, the evaluation is performed primarily or exclusively on event data pertaining to current events contemporaneously with the data being generated by and/or received from the data source(s). In certain embodiments, the real-time processing path excludes historical data (i.e., stored data pertaining to past events) from its evaluation.” And [Par.0165-0166], “The received data is then provided via a channel 314 to a semantic processor (or data preparation stage) 316, which in certain embodiments performs, among other functions, ETL functions. In particular, the semantic processor 316 may perform parsing of the incoming event data, enrichment (also called decoration or annotation) of the event data with certain information, and optionally, filtering the event data. The semantic processor 316 introduced here is particularly useful when data received from the various data sources through data receiver 310 is in different formats, in which case the semantic processor 316 can prepare the data for more efficient downstream utilization (including, for example, by an event processing engine) while avoiding binding the unstructured data into any particular type of data structure.[0166] A parser in the semantic processor 316 may parse the various fields of received event data representing an event (e.g., a record related to a log-in event). An identity resolution (IR) component (not shown in FIG. 4) may be optionally provided within the semantic processor 316 to correlate IP addresses with users, for example. This correlation permits the security platform to make certain assumptions about the relationship between an IP address and a user so that, if any event data arrives from that IP address in the future, an assumption regarding which user is associated with that IP address may be made. In some implementations, the event data pertaining to that IP address may be annotated with the identity of the user. Technology used to implement the data preparation functions of the semantic processor 316 may include Redis™..” Examiner’s note, the event data represents the network activity (e.g. login event) of the user is processed by the semantic processor to generate a particular data format (dataset), wherein, the generated network activity is considered as the generated event stream.), However, Muddu. Cai and Song do not teach the method of claim 4, wherein module identifiers associated with the components that comprise the plurality of cloud systems are determined from the log data, the data set comprises encoding the log data from the event streams with the module IDs. On the other hand, Levin teaches the method of claim 4, wherein module identifiers associated with the components that comprise the plurality of cloud systems are determined from the log data (Levin, [Par.0027], “A security system 130 is configured to collect security events generated by the VMs 111. Such security events may not necessarily indicate a threat, but may rather indicate the activity performed by the VM. 
A security event designates at least the VM that issued the event and at least one entity causing the event. The security event may also include metadata indicating information such as, but not limited to, a VM ID, an event ID, an event type, an entity, an entity value, time and date, and so on. The entity causing a security event may be indicated using, for example, a domain name, a destination IP address, a process name, a DLL name, and the like. For example, when a VM 111 sends a request to a domain name xyx.com, such a request would trigger a security event. The security event designates the domain name ‘xyx.com’ as an entity. Examples for a security system 130 may include a security information and event management (SIEM) system, a security event management (SEM) system, an event repository, and the like. The data feeds received by the detection device 120 include security events gathered and reported by the security system 130. The following table, Table 1, provides examples for security events that may be included in a data feed: [Table 1 image not reproduced]” Examiner’s note, each virtual machine of the cloud system is associated with a particular ID, which corresponds to the module identifiers associated with the components. The data set comprises encoding the log data from the event streams with the module identifiers (Levin, [Par.0027], as quoted above, and further: “A TI source 140 provides threat information indicating at least if the cloud-computing infrastructure 110 includes a bot. That is, which of the VM 111 identified as a bot. This type of information is referred to hereinafter as labels. The TI sources 140 may be any security product that can detect vulnerabilities that may indicate bot activity (e.g., a virus scanner). A TI source 140 may be a system or a platform that aggregates reports from multiple security products and provides unified threat information (or labels).” Examiner’s note, the dataset in Table 1 shows the network activity (event stream) of a particular VM for a particular event in order to detect the threat information of that VM in the cloud computing environment.
Event ID represents a particular event (log data) at particular timestamp of the particular virtual machine, each of the machine associates with particular ID; therefore, the Event ID is considered as the encoding the log data. ). Muddu, Cai, Song and Levin are analogous in arts because they have the same filed of endeavor of detecting the issue of the computing device in the cloud computing system. Accordingly, it would have been obvious to one of the ordinary skills in the art before the effective filing date of the claimed invention to have modified the processing the generated event streams to generate a data set, as taught by Muddu, to include the module IDs associated with the components that comprise the plurality of cloud systems are determined from the log data and the data set comprises encoding the log data from the event streams with the module identifiers, as taught by Levin. The modification would have been obvious because one of the ordinary skills in art would be motivated to indicate the activity performed by the virtual machine and detect the virtual machine’s issue, (Levin, [Par.0027], “A security system 130 is configured to collect security events generated by the VMs 111. Such security events may not necessarily indicate a threat, but may rather indicate the activity performed by the VM. A security event designates at least the VM that issued the event and at least one entity causing the event. The security event may also include metadata indicating information such as, but not limited to, a VM ID, an event ID, an event type, an entity, an entity value, time and date, and so on. The entity causing a security event may be indicated using, for example, a domain name, a destination IP address, a process name, a DLL name, and the like. For example, when a VM 111 sends a request to a domain name xyx.com, such a request would trigger a security event. The security event designates the domain name ‘xyx.com’ as an entity. Examples for a security system 130 may include a security information and event management (SIEM) system, a security event management (SEM) system, an event repository, and the like. The data feeds received by the detection device 120 include security events gathered and reported by the security system 130.”). Regarding claim 6, Muddu teaches the log data but it does not teach the distinct sequences of log data are defined using distinct sequences of module identifiers, On the other hand, Levin teaches the method of claim 5, wherein the distinct sequences of log data are defined using distinct sequences of module identifiers (Levin, [Par.0027], “A security system 130 is configured to collect security events generated by the VMs 111. Such security events may not necessarily indicate a threat, but may rather indicate the activity performed by the VM. A security event designates at least the VM that issued the event and at least one entity causing the event. The security event may also include metadata indicating information such as, but not limited to, a VM ID, an event ID, an event type, an entity, an entity value, time and date, and so on. The entity causing a security event may be indicated using, for example, a domain name, a destination IP address, a process name, a DLL name, and the like. For example, when a VM 111 sends a request to a domain name xyx.com, such a request would trigger a security event. The security event designates the domain name ‘xyx.com’ as an entity. 
Examples for a security system 130 may include a security information and event management (SIEM) system, a security event management (SEM) system, an event repository, and the like. The data feeds received by the detection device 120 include security events gathered and reported by the security system 130. The following table, Table 1, provides examples for security events that may be included in a data feed: [Table 1 image omitted]” Examiner’s note: each particular VM is associated with a particular VM ID and a particular Event ID; therefore, each Event ID represents a distinct event (log data) within the distinct sequences of VM IDs (module identifiers). Muddu and Levin are analogous art because they share the same field of endeavor of detecting issues of computing devices in a cloud-computing system. Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the log data, as taught by Muddu, so that the distinct sequences of log data are defined using distinct sequences of module IDs, as taught by Levin. The modification would have been obvious because one of ordinary skill in the art would be motivated to indicate the activity performed by the virtual machine and detect the virtual machine’s issues (Levin, [Par.0027], “A security system 130 is configured to collect security events generated by the VMs 111. Such security events may not necessarily indicate a threat, but may rather indicate the activity performed by the VM. A security event designates at least the VM that issued the event and at least one entity causing the event. The security event may also include metadata indicating information such as, but not limited to, a VM ID, an event ID, an event type, an entity, an entity value, time and date, and so on. The entity causing a security event may be indicated using, for example, a domain name, a destination IP address, a process name, a DLL name, and the like. For example, when a VM 111 sends a request to a domain name xyx.com, such a request would trigger a security event. The security event designates the domain name ‘xyx.com’ as an entity. Examples for a security system 130 may include a security information and event management (SIEM) system, a security event management (SEM) system, an event repository, and the like. The data feeds received by the detection device 120 include security events gathered and reported by the security system 130.”).

Regarding claim 16, it is rejected for the same reasons as claims 5 and 6, since these claims recite the same limitations.

Allowable Subject Matter: Regarding claims 7-11 and 17-19, no prior art rejection is made for these claims; they are rejected only under § 101, as explained above in this office action.

Conclusion: The prior art made of record and not relied upon, which is considered pertinent to applicant’s disclosure, is provided below. El-Moussa et al. (Pub. No. US 2018/0053002, hereinafter El-Moussa) teaches using machine learning to detect attacks on virtual machines in a cloud-computing environment. Faigon et al. (Pub. No. US 2017/0353477, hereinafter Faigon) teaches using machine learning to detect anomalies in event stream data.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to EM N TRIEU whose telephone number is (571) 272-5747. The examiner can normally be reached Mon-Fri from 9:00-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Omar Fernandez Rivas can be reached on (571) 272-2589. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /E.T./Examiner, Art Unit 2128 /OMAR F FERNANDEZ RIVAS/ Supervisory Patent Examiner, Art Unit 2128
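
As an illustration separate from the office action text above, the short Python sketch below shows one way log events might be tagged with module identifiers (VM IDs) and grouped into distinct per-module sequences, in the spirit of the examiner's reading of Levin's Table 1 security-event records. The field names, helper function, and sample values are illustrative assumptions, not code from Muddu, Levin, or the application.

from collections import defaultdict
from dataclasses import dataclass

@dataclass
class SecurityEvent:
    """Illustrative security-event record; fields loosely mirror Levin's Table 1 metadata."""
    vm_id: str        # module identifier of the component that issued the event
    event_id: str     # identifier of the individual logged event
    event_type: str   # e.g. a DNS request
    entity: str       # e.g. a domain name or destination IP address
    timestamp: str    # time and date of the event

def encode_with_module_ids(events):
    """Group raw log events into distinct sequences keyed by module (VM) identifier."""
    sequences = defaultdict(list)
    for ev in events:
        # Tag each log entry with its module identifier and collect per-module entries,
        # yielding one sequence of encoded log data per VM.
        sequences[ev.vm_id].append((ev.timestamp, ev.event_id, ev.event_type, ev.entity))
    # Sort each per-module sequence by timestamp so it reads as an ordered event stream.
    return {vm_id: sorted(seq) for vm_id, seq in sequences.items()}

# Hypothetical usage: two VMs emit interleaved events; encoding yields two distinct sequences.
events = [
    SecurityEvent("VM-1", "E-100", "DNS request", "xyx.com", "2020-10-19T09:00"),
    SecurityEvent("VM-2", "E-101", "DNS request", "abc.com", "2020-10-19T09:01"),
    SecurityEvent("VM-1", "E-102", "process start", "svc.exe", "2020-10-19T09:02"),
]
print(encode_with_module_ids(events))

Under this assumed reading, the VM ID plays the role of the module identifier and the Event ID distinguishes individual log entries within each module's sequence, which appears to be how the examiner maps the claim language onto Levin's security events.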

Prosecution Timeline

Oct 19, 2020
Application Filed
Nov 18, 2023
Non-Final Rejection — §101, §103
Feb 29, 2024
Response Filed
Jun 28, 2024
Final Rejection — §101, §103
Sep 10, 2024
Response after Non-Final Action
Sep 16, 2024
Examiner Interview (Telephonic)
Sep 16, 2024
Response after Non-Final Action
Oct 10, 2024
Request for Continued Examination
Oct 21, 2024
Response after Non-Final Action
Nov 20, 2024
Non-Final Rejection — §101, §103
Mar 04, 2025
Response Filed
Mar 04, 2025
Applicant Interview (Telephonic)
Mar 06, 2025
Examiner Interview Summary
Jun 17, 2025
Final Rejection — §101, §103
Aug 20, 2025
Response after Non-Final Action
Sep 19, 2025
Request for Continued Examination
Oct 04, 2025
Response after Non-Final Action
Jan 09, 2026
Non-Final Rejection — §101, §103
Apr 01, 2026
Interview Requested
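
As a rough cross-check of the round counts used in the projections below, the sketch counts the rejection entries in the timeline above; treating each rejection as one office-action round is an assumption on our part, not the analytics tool's documented method.

# Count office-action rounds from the rejection entries transcribed from the timeline above.
rejections = [
    ("2023-11-18", "Non-Final Rejection"),
    ("2024-06-28", "Final Rejection"),
    ("2024-11-20", "Non-Final Rejection"),
    ("2025-06-17", "Final Rejection"),
    ("2026-01-09", "Non-Final Rejection"),
]
print(f"OA rounds to date: {len(rejections)}")  # 5, consistent with the 5-6 round projection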

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12572779
INTERFACE NEURAL NETWORK
2y 5m to grant Granted Mar 10, 2026
Patent 12541705
SYSTEM AND METHOD FOR FACILITATING A MACHINE LEARNING MODEL REBUILD
2y 5m to grant Granted Feb 03, 2026
Patent 12511531
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM
2y 5m to grant Granted Dec 30, 2025
Patent 12493804
METHOD OF BUILDING AND OPERATING DECODING STATUS AND PREDICTION SYSTEM
2y 5m to grant Granted Dec 09, 2025
Patent 12493774
NEURAL NETWORK OPERATION MODULE AND METHOD
2y 5m to grant Granted Dec 09, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
48%
Grant Probability
53%
With Interview (+5.0%)
3y 10m
Median Time to Grant
High
PTA Risk
Based on 63 resolved cases by this examiner. Grant probability derived from career allow rate.
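
The 53% "with interview" figure appears to be the 48% baseline grant probability plus the stated +5.0% interview lift; the one-line check below assumes the lift is additive, which is inferred from the numbers shown rather than a documented formula.

# Back-of-the-envelope check of the interview-adjusted grant probability shown above.
baseline_grant_probability = 0.48   # career allow rate reported for this examiner
interview_lift = 0.05               # assumed additive, per the +5.0% interview lift figure
print(f"With interview: {baseline_grant_probability + interview_lift:.0%}")  # 53%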
