Prosecution Insights
Last updated: April 19, 2026
Application No. 18/181,951

AUTOMATED TRIAGE OF CODE FLAWS WITH MACHINE LEARNING

Non-Final OA: §101, §103
Filed: Mar 10, 2023
Examiner: CHOI, DAVID E
Art Unit: 2148
Tech Center: 2100 — Computer Architecture & Software
Assignee: Veracode Inc.
OA Round: 1 (Non-Final)
Grant Probability: 75% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 11m
With Interview: 88%

Examiner Intelligence

Career Allow Rate: 75% (above average; 448 granted / 595 resolved; +20.3% vs TC avg)
Interview Lift: +12.4% across resolved cases with an interview (moderate lift)
Avg Prosecution: 2y 11m typical timeline (18 applications currently pending)
Total Applications: 613 across all art units (career history)

Statute-Specific Performance

§101: 6.6% (-33.4% vs TC avg)
§103: 65.9% (+25.9% vs TC avg)
§102: 17.8% (-22.2% vs TC avg)
§112: 1.9% (-38.1% vs TC avg)
Based on career data from 595 resolved cases; Tech Center averages are estimates.

Office Action

DETAILED ACTION

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

2. This action is responsive to the following communication: original claims filed 03/10/2023. This action is made non-final.

3. Claims 1-20 are pending in the case. Claims 1, 9 and 15 are independent claims.

Claim Objections

4. Claims 6-8, 13-14 and 19-20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Claim Rejections - 35 USC § 101

5. 35 U.S.C. 101 reads as follows: "Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title."

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Under Step 2A, Prong 1, "generating a feature vector from data for a flaw detected in a codebase of an organization" recites an abstract idea (a mental process).

Under Step 2A, Prong 2, the additional element "inputting the feature vector into a machine learning model to obtain from output a plurality of likelihoods of performing each of a plurality of triage decisions in response to detecting the flaw" can be categorized as insignificant extra-solution activity of mere data gathering and therefore does not integrate the judicial exception into a practical application. MPEP 2106.05(g). The additional element "wherein the machine learning model was trained to output likelihoods of each of the plurality of triage decisions for flaws previously detected by the organization" can be categorized as generally linking the use of the judicial exception to a field of use/technological environment and therefore does not integrate it into a practical application. MPEP 2106.05(h). The additional element "indicating one or more of the triage decisions corresponding to highest one or more of the plurality of likelihoods" can be categorized as insignificant extra-solution activity of mere data gathering and therefore does not integrate the judicial exception into a practical application. MPEP 2106.05(g).

Under Step 2B, the additional element "inputting the feature vector into a machine learning model to obtain from output a plurality of likelihoods of performing each of a plurality of triage decisions in response to detecting the flaw" can be categorized as the well-understood, routine and conventional activity of "transmitting or receiving data over a network" and therefore does not provide significantly more. MPEP 2106.05(d)(ii). The additional element "wherein the machine learning model was trained to output likelihoods of each of the plurality of triage decisions for flaws previously detected by the organization" can be categorized as generally linking the use of the judicial exception to a field of use/technological environment and therefore does not provide significantly more. MPEP 2106.05(h). The additional element "indicating one or more of the triage decisions corresponding to highest one or more of the plurality of likelihoods" can be categorized as the well-understood, routine and conventional activity of "transmitting or receiving data over a network" and therefore does not provide significantly more. MPEP 2106.05(d)(ii).

Regarding dependent claims 2-8, 10-14 and 16-20: these claims depend directly or indirectly from claims 1, 9 and 15 as discussed above and therefore include all the limitations of claims 1, 9 and 15. The dependent claims thus recite the same abstract idea described for the independent claims; they recite additional limitations but do not otherwise add meaningful limitations beyond that same abstract idea.

Claim Rejections - 35 USC § 103

6. The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: "A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made."

7. Claims 1, 9 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Kawaguchi (US 20250245566) in view of Sharma (US 20230409464).

Regarding claim 1, Kawaguchi discloses a method comprising: generating a feature vector from data for a flaw detected in a codebase of an organization (the error determination part 124 receives the classification result, the feature vector of the estimation process, and the estimation probability for each class, and determines whether the classification estimated by the classification estimation part 110 is "correct" or "incorrect" based on the received information; note that, in the determination, only one of the feature vector of the estimation process and the estimation probability for each class may be used, see paragraph 0046); inputting the feature vector into a machine learning model to obtain from output a plurality of likelihoods of performing each of a plurality of triage decisions in response to detecting the flaw, wherein the machine learning model was trained to output likelihoods of each of the plurality of triage decisions for flaws previously detected by the organization (the error determination part 124 receives the classification result, the feature vector of the estimation process, and the estimation probability for each classification from the classification estimation part 110, the classification estimation process observation part 121, and the classification probability estimation part 123, respectively, and determines whether the classification estimated by the classification estimation part 110 is "correct" or "incorrect" based on the received information, see paragraph 0033; see also paragraph 0119, which describes training the machine learning model by having a feature vector list obtained by adding at least a second estimation process feature vector obtained from data different from classification object data to a first estimation process feature vector obtained from the classification object data as input to the machine learning model, and by using a classification ratio vector list in which at least a second classification ratio vector different from a first classification ratio vector, being a correct answer to the classification object data, has been added to the first classification ratio vector as a correct answer to the input to the machine learning model); and indicating one or more of the triage decisions corresponding to highest one or more of the plurality of likelihoods (the error determination part 124 outputs the error determination result, the classification result, and an estimated probability vector for each class as a result of the whole system; it is also acceptable to output only some of the error determination result, the classification result, and the estimated probability vector for each class, see paragraph 0033).

Further, Sharma discloses that although the clustering model was trained with feature vectors of fixes, the feature vectors encoded structural context information of a fix for a flaw type; the feature vector of the flaw will most likely encode a structural context similar to that of one or more fixes for flaws of the same type, and this clustering also allows discrimination between fixes of a same flaw type in different structural contexts (see paragraph 0080). The combination of Kawaguchi and Sharma would have resulted in the feature vectors and correction methods of Kawaguchi further incorporating Sharma's teachings of utilizing feature vectors of flaws to find related context information for a fix. One would have been motivated to combine the teachings because a user of Kawaguchi would have benefited from Sharma's teachings of clustering models, as both are directed to providing a higher probability of related fixes/corrections. As such, the combination of references would have been obvious to one of ordinary skill in the art, as the resulting invention would have been predictable.

Regarding claim 9, the subject matter of the claim is substantially similar to claim 1, and the same rationale of rejection applies. Kawaguchi further discloses receiving indications of a flaw detected in a codebase of an organization (see paragraph 0046, cited above with respect to claim 1).

Regarding claim 15, the subject matter of the claim is substantially similar to claim 1, and the same rationale of rejection applies. Kawaguchi further discloses a processor, and a machine-readable medium having instructions stored thereon that are executable by the processor (see at least FIG. 5).

8. Claims 2, 3, 10, 11 and 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Kawaguchi (US 20250245566) in view of Sharma (US 20230409464), and further in view of Johnson (US 20230117120).

Regarding claim 2, Kawaguchi does not disclose wherein the machine learning model comprises a naive Bayes classifier. However, Johnson discloses that the machine learning model may be a supervised learning machine learning model, a semi-supervised machine learning model, an unsupervised machine learning model, or a reinforcement machine learning model, and that examples of machine learning model algorithms include a naive Bayes classifier algorithm, K-means clustering algorithm, support vector machine algorithm, linear regression, logistic regression, artificial neural networks, decision trees, random forests, nearest neighbors, etc. (see paragraph 0009). The combination of Kawaguchi and Johnson would have resulted in the feature vectors and correction methods of Kawaguchi further incorporating Johnson's teaching of utilizing a naive Bayes classifier as a type of learning model algorithm. One would have been motivated to combine the teachings because a user of Kawaguchi is already interested in machine learning algorithms to reinforce corrections, and utilizing a Bayes classifier would have assisted in that same goal. As such, the combination of references would have been obvious to one of ordinary skill in the art, as the resulting invention would have been predictable.

Regarding claim 3, Kawaguchi does not disclose indicating a frequency of each of the plurality of triage decisions for previously detected flaws in the codebase of the organization corresponding to feature vectors having at least one feature value matching the feature vector for the flaw. However, Sharma discloses that a matching pattern is associated with a fix template in the fix templates repository 121. For example, the remediation agent 117 can search the fix template repository 121 for code flaw patterns that match one or more of the detected flaws, which may be a complete or partial match of code corresponding to the flaw. If a match is found, the template repository 121 returns a template that specifies how to remediate the code (e.g., code string(s) to replace, code to remove, and/or code to insert). The patterns can be based on source code or an intermediate representation of the source code corresponding to the flaw, and the patterns can include wildcards. The remediation agent may parameterize the target program code using wildcards or generic placeholders and/or interpolation applied before or as part of the matching rules (see paragraph 0026). The combination of Kawaguchi and Sharma, and the motivation to combine, are as set forth above with respect to claim 1; as such, the combination of references would have been obvious to one of ordinary skill in the art, as the resulting invention would have been predictable.

Regarding claims 10 and 16, the subject matter is substantially similar to claim 2, and the same rationale of rejection applies. Regarding claims 11 and 17, the subject matter is substantially similar to claim 3, and the same rationale of rejection applies.

9. Claims 4, 5 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Kawaguchi (US 20250245566) in view of Sharma (US 20230409464), and further in view of Li (US 20240256951).

Regarding claim 4, Kawaguchi does not disclose wherein the feature vector comprises a count vectorization of tokens in features of the data for the flaw. However, Li discloses that vectorization techniques can be used by the text similarity tool 504 of FIG. 5; in that context, vectorization refers to word count vectorization of textual tokens. Text similarity through comparing word count vectors does not necessarily convey semantics or meaning; contrastingly, vectorization through embedding with a graph-based neural network can result in emergent properties that resemble semantics (see paragraph 0180). The combination of Kawaguchi and Li would have resulted in the feature vectors and correction methods of Kawaguchi further incorporating Li's teachings of utilizing tokenization of the vectors for comparison of data. One would have been motivated to combine the teachings because a user of Kawaguchi would have benefited from Li's teachings of parsing out data to enable more efficient comparisons, which would provide a higher probability of related fixes/corrections. As such, the combination of references would have been obvious to one of ordinary skill in the art, as the resulting invention would have been predictable.

Regarding claim 5, Kawaguchi does not disclose wherein the features of the data for the flaw comprise at least one of a common weakness enumeration identifier, a method, a filename, a file line, and a file extension. However, Li discloses the word count vectorization of textual tokens discussed above with respect to claim 4 (see paragraph 0180), and the same combination rationale applies.

Regarding claim 12, the subject matter of the claim is substantially similar to claim 4, and the same rationale of rejection applies.

10. Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Kawaguchi (US 20250245566) in view of Sharma (US 20230409464), further in view of Johnson (US 20230117120), and further in view of Li (US 20240256951).

Regarding claim 18, Kawaguchi does not disclose wherein the instructions to determine a plurality of likelihoods of performing the plurality of triage decisions for the flaw with the machine learning model comprise instructions executable by the processor to cause the apparatus to generate a feature vector comprising a count vectorization of feature values from data for the flaw, and to input the feature vector into the naive Bayes classifier to output the plurality of likelihoods. However, Johnson discloses the use of a naive Bayes classifier as a machine learning model algorithm, as discussed above with respect to claim 2 (see paragraph 0009), and Li discloses word count vectorization of textual tokens, as discussed above with respect to claim 4 (see paragraph 0180). The same combination rationales apply, and the combination of references would have been obvious to one of ordinary skill in the art, as the resulting invention would have been predictable.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID E CHOI, whose telephone number is (571) 270-3780. The examiner can normally be reached M-F: 7-2, 7-10 (PST). If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Michelle T. Bechtold, can be reached at (571) 431-0762. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR; status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/DAVID E CHOI/
Primary Examiner, Art Unit 2148
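The pipeline recited in the claims (count-vectorize flaw features, train a naive Bayes classifier on the organization's past triage decisions, output per-decision likelihoods, indicate the top decision) can be sketched in a few lines. This is a minimal illustration, not the applicant's actual implementation: the flaw features, decision labels, and the `likelihoods` helper are all invented for the example.

```python
# Hypothetical sketch of the claimed triage pipeline (claims 1, 2, 4):
# a Laplace-smoothed multinomial naive Bayes classifier over count-vectorized
# flaw-feature tokens, trained on invented example data.
from collections import Counter, defaultdict
import math

# Invented training data: each past flaw is a tuple of feature tokens
# (CWE identifier, method name, filename) with the triage decision the
# organization previously took for it.
past_flaws = [
    ("CWE-89", "executeQuery", "OrderDao.java"),
    ("CWE-79", "render", "Profile.jsp"),
    ("CWE-89", "runSql", "ReportDao.java"),
    ("CWE-798", "connect", "Config.java"),
]
past_decisions = ["fix", "fix", "fix", "mitigate"]

# "Count vectorization": per-decision token counts over the vocabulary.
vocab = {tok for flaw in past_flaws for tok in flaw}
class_docs = Counter(past_decisions)
token_counts = defaultdict(Counter)
for flaw, decision in zip(past_flaws, past_decisions):
    token_counts[decision].update(flaw)

def likelihoods(flaw_tokens):
    """Posterior likelihood of each triage decision for a new flaw."""
    log_post = {}
    for decision, n_docs in class_docs.items():
        total = sum(token_counts[decision].values())
        lp = math.log(n_docs / len(past_flaws))  # class prior
        for tok in flaw_tokens:
            if tok not in vocab:                 # ignore unseen tokens
                continue
            # Laplace-smoothed multinomial term
            lp += math.log((token_counts[decision][tok] + 1) / (total + len(vocab)))
        log_post[decision] = lp
    z = sum(math.exp(v) for v in log_post.values())  # normalize
    return {d: math.exp(v) / z for d, v in log_post.items()}

# New flaw: compute per-decision likelihoods and indicate the top decision.
probs = likelihoods(("CWE-89", "findUser", "UserDao.java"))
best = max(probs, key=probs.get)
print(best, probs)
```

With this toy data the SQL-injection flaw lands on the decision that dominated similar past flaws; in practice the feature set would include the other claimed features (file line, file extension) and far more history.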

Prosecution Timeline

Mar 10, 2023: Application Filed
Nov 20, 2025: Non-Final Rejection — §101, §103
Apr 01, 2026: Interview Requested
Apr 13, 2026: Applicant Interview (Telephonic)
Apr 16, 2026: Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602396: TRANSFORMING MODEL DATA (granted Apr 14, 2026; 2y 5m to grant)
Patent 12585995: Capturing Data Properties to Recommend Machine Learning Models for Datasets (granted Mar 24, 2026; 2y 5m to grant)
Patent 12585957: SYSTEM AND METHOD FOR EFFICIENT ESTIMATION OF CUMULATIVE DISTRIBUTION FUNCTION (granted Mar 24, 2026; 2y 5m to grant)
Patent 12580878: METHOD, APPARATUS, DEVICE AND STORAGE MEDIUM FOR PRESENTING SESSION MESSAGE (granted Mar 17, 2026; 2y 5m to grant)
Patent 12572836: INTELLIGENT PROVISIONING OF QUANTUM PROGRAMS TO QUANTUM HARDWARE (granted Mar 10, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 75%
With Interview: 88% (+12.4%)
Median Time to Grant: 2y 11m
PTA Risk: Low
Based on 595 resolved cases by this examiner. Grant probability derived from career allow rate.
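The headline figures are consistent with a simple derivation from the career statistics above. A minimal sketch; the exact rounding convention used by the tool is an assumption:

```python
# Reproducing the projection figures from the examiner's career data:
# allow rate = granted / resolved, then the reported interview lift is added.
granted, resolved = 448, 595
allow_rate = 100 * granted / resolved      # 75.29...%
grant_probability = round(allow_rate)      # 75 (%)
interview_lift = 12.4                      # reported lift, in percentage points
with_interview = round(allow_rate + interview_lift)  # 88 (%)
print(grant_probability, with_interview)
```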
