The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Detailed Action
Claims 1-20 have been examined. Claims 1-20 have been rejected.
Response to Arguments
Applicant's arguments filed 2/17/26 have been fully considered but are moot in view of the new grounds of rejection.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 as being directed to an abstract idea without significantly more.
Below is an evaluation using the 2019 Revised Patent Subject Matter Eligibility Guidance.
Regarding claim 1, Step 1 is satisfied because a series of instructions is a process.
At step 2a prong 1, an abstract idea is recited: the steps of the claim could be performed as a mental process. These steps include obtaining a defect record, converting the categorical-format defect record data to numerical-format defect record data, generating numerical-format model prediction data based on the numerical-format defect record data, converting the numerical-format model prediction data to categorical-format defect root cause data, and generating a root cause report based on the defect root cause data and the categorical-format defect record data.
At step 2a prong 2, additional elements that integrate the judicial exception into a practical application are not recited. The claim recites a non-transitory computer-readable storage medium and a hardware processor. These elements do not integrate the judicial exception into a practical application (step 2a prong 2) because they only apply the mental process to a generic computer system. These elements also do not amount to significantly more than the judicial exception (step 2b) because they are conventional computing devices which are only generally linked to the abstract idea without meaningfully limiting the mental process. The claim recites use of a machine learning model. This limitation does not integrate the judicial exception into a practical application (step 2a prong 2) or amount to significantly more than the judicial exception (step 2b) because the machine learning model is a generic component and, like a generic computer, may be used for automating what would otherwise be a mental process. In this case the implementation of the machine learning model is generic and only generally links the judicial exception to a particular technological environment or field of use. See MPEP §§ 2106.04(d), 2106.05(h).
Regarding claims 2, 4, 7, 8, and 9, these claims recite additional limitations of the mental process, but their inclusion does not push the mental process beyond what can practically be performed in the human mind, with or without the use of a physical aid such as pen and paper. See MPEP § 2106.04(a)(2)(III). These claims do not recite additional elements that must be evaluated in step 2a prong 2 or step 2b.
Regarding claim 3, this claim recites extracting supervised and unsupervised generated labels, and creating a vector record for each unique word, the vector record being stored in a database. The extracting of labels and creating of vectors can be performed as a mental process with or without the use of a physical aid such as pen and paper. See MPEP § 2106.04(a)(2)(III). Use of a database does not integrate the judicial exception into a practical application (step 2a prong 2) because it only applies the mental process to a generic computer system. The database does not amount to significantly more than the judicial exception (step 2b) because it is a conventional computing device which is only generally linked to the abstract idea without meaningfully limiting the mental process.
Regarding claims 5 and 6, these claims recite training and retraining a machine learning model. As described for claim 1, use of a machine learning model does not integrate the judicial exception into a practical application or amount to significantly more than the judicial exception. The training of a machine learning model is akin to programming a computer to perform the process. In the same way that installing specific programming into a computer to perform an otherwise mental process does not integrate the judicial exception or amount to significantly more than the judicial exception, training of a machine learning model only allows the machine learning model to perform the otherwise mental process and does not integrate the judicial exception into a practical application or amount to significantly more than the judicial exception.
Regarding claims 10-18, these claims recite limitations found in claims 1-9, respectively, and are respectively rejected on the same grounds as claims 1-9.
Regarding claims 19 and 20, these claims recite limitations found in claims 1 and 5, respectively, and are respectively rejected on the same grounds as claims 1 and 5.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-5, 7-14 and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over Roy (PG-PUB 2022/0327012) in view of Patil (PG-PUB 2018/0307904).
Regarding claim 1, Roy discloses an apparatus configured for determining a root cause of a logged defect in a software environment, the apparatus comprising: a non-transient computer-readable storage medium having executable instructions embodied thereon; and
one or more hardware processors configured to execute the instructions to:
obtain, by the apparatus, a defect record associated with the logged defect (paragraph 51, application error logs are analyzed);
convert the defect record data to numerical-format defect record data suitable for use by a machine learning model (paragraph 72, remaining words are encoded as integers for use as inputs to machine learning algorithms);
generate, using the machine learning model, numerical-format model prediction data based on the numerical-format defect record data (paragraph 51, a prediction of a root cause of a failure is generated);
generate, whenever a defect is logged, a defect root cause report based on the defect root cause data and on the defect record data (paragraph 51, the user interface displays the prediction of the root cause of an error; Figure 4 shows the display of a root cause includes a readable text explanation. Note that the root cause prediction is done during SWAT activity in real-time (paragraph 51), and thus when a defect is logged, the root cause analysis is performed).
Roy does not expressly disclose the apparatus wherein the defect record comprises categorical-format current defect record data formatted for identifying a category of defect associated with a plurality of defects, and the category of defect being associated with a categorical value.
Patil teaches a software defect report analysis that classifies software defect reports into various categories (paragraph 6).
Prior to the effective filing date of the claimed invention it would have been obvious to a person of ordinary skill in the art to modify the software validation system disclosed by Roy such that detected software defects are classified, as taught by Patil. This modification would have been obvious because classifying defects helps to streamline the defect management process, identify patterns in the defect reports, and perform faster root cause analysis (Patil paragraph 3).
Roy does not expressly disclose the apparatus wherein the instructions are to: convert the numerical-format model prediction data to categorical-format defect root cause data comprising categorical data and wherein the categorical data is assigned to a numerical value.
Roy teaches that failed application logs are parsed and encoded as integers or floating-point values for use as input to machine learning algorithms (paragraph 72). Roy also teaches that a root cause is displayed to a user as readable, categorical-format data of informational text (Figure 4).
Prior to the effective filing date of the claimed invention it would have been obvious to a person of ordinary skill in the art to modify the root cause prediction system disclosed by Roy such that prediction data generated by the machine learning model is converted from a numerical format to a categorical format, and input to the machine learning model is converted to a numerical format. This modification would have been obvious because the machine learning model operates on converted numerical data (paragraph 72) while the eventual output is displayed as text information (Figure 4); the outputting of such text necessitates a conversion to categorical-format (readable text) data.
Regarding claim 2, Roy in view of Patil discloses the apparatus of claim 1 wherein the one or more hardware processors are further configured to execute the instructions to:
eliminate common spoken words from the defect record (Roy paragraph 72) and store remaining data in a table with auto generated labels (Roy paragraphs 75 and 76, remaining words are input into the machine learning model and associated with generated labels corresponding to categories associated with a root cause of a failure); and
categorize the remaining data with auto generated labels with a stored value (Roy paragraphs 75 and 76, the labels correspond to categories associated with root causes of a failure and the true/false results of the labels are counted).
Regarding claim 3, Roy in view of Patil discloses the apparatus of claim 1 wherein the one or more hardware processors are further configured to execute the instructions to:
extract unsupervised generated labels (paragraphs 56, 69, and 70, the MLM is trained over time via a feedback loop that provides reinforcement training);
create a vector record for each unique word in comments or description fields of the logged defect while extracting components and other data (paragraph 72, remaining words are encoded as numerical values into the machine learning algorithm; paragraph 73, the system can utilize TF-IDF transformation which increases weights for more unique words).
Roy does not expressly disclose the apparatus wherein supervised labels are extracted or that the vector record values are stored in a database.
The examiner has taken official notice that use of supervised training is well known in the art.
Prior to the effective filing date of the claimed invention it would have been obvious to a person of ordinary skill in the art to modify the MLM analysis disclosed by Roy such that the MLM is further trained on supervised training data. This modification would have been obvious because, as would be clear to one of ordinary skill in the art, supervised training is useful in improving the accuracy of an MLM classifier.
The examiner has taken official notice that it is well known in the art to store MLM training data in a database.
Prior to the effective filing date of the claimed invention it would have been obvious to a person of ordinary skill in the art to modify the MLM analysis disclosed by Roy such that the data used for training the MLM is stored in a database. This modification would have been obvious because, as would be clear to one of ordinary skill in the art, a database allows for organized data storage and easier manipulation of the data stored.
Regarding claim 4, Roy in view of Patil discloses the apparatus of claim 1 wherein the one or more hardware processors are further configured to execute the instructions to:
implement the machine learning model with linear activation for model generation and prediction (Roy paragraphs 71 and 72 describe using TF-IDF transformation to weight words based on rarity and implied importance to the analysis; because the words are pre-weighted before MLM analysis, the activation of the MLM based on this input is implied to be linear with the weighted values), the predicted data then transformed into categorical data from vector data by reversing the process used to convert to vector values (as described above for claim 1, the system encodes written log text as integers or floating-point values for use by the machine learning algorithm; since the resulting data is written out as text (as in Roy Figure 4), the system performs a conversion of integer or floating-point results to written text, which is a reverse of the earlier encoding).
Roy in view of Patil does not expressly disclose the apparatus wherein the data is imported into a CSV file and consumed by the machine learning model.
The examiner has taken official notice that using CSV files to input data into an MLM is well known in the art.
Prior to the effective filing date of the claimed invention it would have been obvious to a person of ordinary skill in the art to modify the MLM analysis disclosed by Roy such that data is input via CSV files. This modification would have been obvious because, as would be clear to one of ordinary skill in the art, CSV files are simply formatted and easy to generate and manipulate.
Regarding claim 5, Roy in view of Patil discloses the apparatus of claim 1 wherein the one or more hardware processors are further configured to execute the instructions to:
train the machine learning model using historical defect data and root cause data (Roy paragraphs 56, 69, and 70, the MLM is trained over time via a feedback loop that provides reinforcement training; paragraph 73, the training is based on a training corpus of log data; while not expressly described as historical data, it is well known in the art to utilize recorded historical data for training, and one of ordinary skill in the art would be motivated to do so because a historical data set is easier to obtain and may be more applicable than an artificially generated training set); and
predict the root cause of the logged defect and generating the defect root cause data report based on the current defect record data (Roy paragraphs 74-76, a root cause prediction is generated based on the log information analyzed).
Regarding claim 7, Roy in view of Patil discloses the apparatus of claim 1 wherein the one or more hardware processors are further configured to execute the instructions to:
obtain, prior to generating the prediction data, an identification of software components impacted by the logged defect (Roy paragraph 51, the user display shows the prediction of the root cause and the error for a crashed application; the system also identifies that a Gemfire host is down).
Regarding claim 8, Roy in view of Patil discloses the apparatus of claim 1 wherein the one or more hardware processors are further configured to execute the instructions to:
predict, using the machine learning model, one or more of components, environments or patterns associated with the root cause of the logged defect (Roy paragraph 51, the prediction identifies the error and associates the root cause with a crashed application and a host failure).
Regarding claim 9, Roy in view of Patil discloses the apparatus of claim 1 wherein the one or more hardware processors are further configured to execute the instructions to:
generate the defect root cause report including one or more of a predicted root cause associated with a defect identifier (Roy paragraph 51, a root cause display is generated that reports a problem along with the root cause), component references associated with the defect identifier, and a predicted root cause (Roy paragraph 51).
Regarding claims 10-14 and 16-18, these claims recite limitations found in claims 1-5 and 7-9, respectively, and are respectively rejected on the same grounds as claims 1-5 and 7-9.
Regarding claims 19 and 20, these claims recite limitations found in claims 1 and 5, respectively, and are respectively rejected on the same grounds as claims 1 and 5.
Claims 6 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Roy in view of Patil and Ultimate Guide to Model Retraining by Luigi Patruno.
Regarding claim 6, Roy in view of Patil discloses the apparatus of claim 5. Roy in view of Patil does not expressly disclose the apparatus wherein the one or more hardware processors are further configured to execute the instructions to:
retrain the machine learning model using the current defect record data.
Patruno teaches performing periodic retraining based on a detection of model drift of a machine learning model (page 8).
Prior to the effective filing date of the claimed invention it would have been obvious to a person of ordinary skill in the art to modify the machine learning model root cause analysis disclosed by Roy such that the machine learning model implements periodic retraining, as taught by Patruno. This modification would have been obvious because a model's predictive performance will decline after it is deployed to production and degraded performance will result unless the model is retrained (Patruno, Conclusion section).
Regarding claim 15, this claim recites limitations found in claim 6 and is rejected on the same grounds as claim 6.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Gomes Pereira teaches utilizing association rules analysis and recursive neural networks to identify root causes of errors from log analysis and to predict errors ahead of their occurring. Revanna teaches using a machine learning model to analyze error logs in components deployed within a cloud environment, determining a root cause of errors, and outputting a corrective measure to address the error.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSEPH SCHELL whose telephone number is (571) 272-8186. The examiner can normally be reached Monday through Friday, 9:00 AM to 5:00 PM (Pacific Time).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, Applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. Please note that all agendas or related documents that Applicant would like reviewed should be sent at least one full business day (i.e. 24 hours not including weekends or holidays) before the interview.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ashish Thomas can be reached at (571) 272-0631. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. The fax phone number for the examiner is 571-273-8186. The examiner may be e-mailed at joseph.schell@uspto.gov though communications via e-mail are not permitted without a written authorization form (see MPEP 502.03).
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
JS/JOSEPH O SCHELL/Primary Examiner, Art Unit 2114