Prosecution Insights
Last updated: April 19, 2026
Application No. 18/533,713

Scanning of Training Code to Prevent Vulnerabilities in Artificial Intelligence (AI) Generated Source Code

Final Rejection §103
Filed: Dec 08, 2023
Examiner: REYNOLDS, DEBORAH J
Art Unit: 2400
Tech Center: 2400 — Computer Networks
Assignee: Micro Focus LLC
OA Round: 2 (Final)
Grant Probability: 67% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 5m
With Interview: 80%

Examiner Intelligence

Career Allow Rate: 67% (111 granted / 166 resolved; +8.9% vs TC avg, above average)
Interview Lift: +13.6% (a moderate lift, based on resolved cases with interview)
Typical Timeline: 2y 5m average prosecution; 80 applications currently pending
Career History: 246 total applications across all art units

Statute-Specific Performance

§101: 6.9% (-33.1% vs TC avg)
§103: 47.6% (+7.6% vs TC avg)
§102: 19.1% (-20.9% vs TC avg)
§112: 17.9% (-22.1% vs TC avg)
Compared against Tech Center average estimates • Based on career data from 166 resolved cases

Office Action

§103
DETAILED ACTION

This is a final Office action in response to communications received on 01/12/2026. Claims 1, 3, 10, 12, and 19 are amended. Claims 2 and 11 have been canceled. Claims 21-22 are added. Claims 1, 3-10, and 12-22 are pending and have been examined. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant’s arguments filed 01/12/2026 with respect to claims 1, 10, and 19 have been fully considered. Applicant’s Remarks regarding § 103 have been considered but have not been found persuasive. Consequently, the rejection of the claims under 35 U.S.C. § 103 is sustained.

In response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).

Applicant argues on page 8 of the Remarks that several features in the pending claims are neither taught nor suggested, either expressly or inherently, in any of the cited prior art references, without specifically pointing out which features are not taught by the prior art. Applicant argues on page 9 of the Remarks that Sumedrea fails to disclose, teach, or suggest scanning based on any of the recited analysis techniques, either individually or in combination. However, the Examiner respectfully disagrees. Previously presented claim 2 recites: “scanning the initial corpus of source code using the test suite is based on at least one of: static source code analysis, malware scanning, dynamic source code analysis, software composition analysis, runtime analysis, and license analysis”. And Sumedrea discloses scanning initial source code to identify vulnerabilities, and performs a known signature-based vulnerability detection method using a vulnerability database.
Sumedrea further discloses compiling and executing snippets to analyze behavior (dynamic analysis) and checking dependencies against a vulnerability database. For further support, see Sumedrea Paras. [0031]-[0033], [0036]-[0037], [0047], which describe scanning initial source code, using a vulnerability scanner, analyzing dependencies, compiling snippets, and identifying vulnerabilities based on the performed analysis.

With respect to Applicant’s arguments regarding claims 10 and 19, which present no additional arguments, a similar response applies. The remaining arguments on page 9 of the Remarks regarding the dependent claims rely on the independent claims without presenting additional arguments, so a similar response applies. The remaining arguments fail to comply with 37 C.F.R. 1.111(b) because they amount to a general allegation that the claims define a patentable invention without specifically pointing out how the language of the claims patentably distinguishes them from the references.

Claim Objections

Claims 1, 3, 8, 10, 12, 17, 19, and 21 are objected to because of the following informalities: the claims recite “the initial corpus of source code” multiple times, and it is not clear whether the initial corpus refers to the retrieved corpus of source code. Appropriate correction is required.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 4-5, 8, 10, 13-14, 17, and 19-20 are rejected under 35 U.S.C. 103 over Seck (US 2025/0156531) in view of Sumedrea (US 2025/0139251).

Regarding claim 1, Seck teaches the limitations of claim 1 as follows:

A system comprising: a microprocessor; and a computer readable medium, coupled with the microprocessor and comprising microprocessor readable and executable instructions that, when executed by the microprocessor, cause the microprocessor to: retrieve an initial corpus of source code, wherein the initial corpus of source code is for training an Artificial Intelligence (AI) algorithm; (Seck, Paras. [0010]-[0011], [0063]-[0066], Fig. 1, the system retrieves source code as part of its training data, which is used for training an AI/machine learning model (vulnerability detection 114), where the training data 132 (i.e., initial corpus of source code) includes source code and associated runtime data flow).

scan the initial corpus of source code, using a test suite, to identify one or more potential vulnerabilities in the initial corpus of the source code; (Seck, Paras. [0010]-[0011], [0026]-[0028], [0063]-[0066], Figs. 1-2, a development environment software system (i.e., a test studio) analyzes/scans or transmits the corpus of source code and runtime data flow to a vulnerability detection model that identifies vulnerabilities in the source code. The development environment 200 displays replacement libraries for each vulnerable library).

mitigate …. the initial corpus of the source code, (Seck, Paras. [0010]-[0019], [0026]-[0028], [0063]-[0066], Figs. 1-2, suggesting a replacement set to a user as a mitigation step (replacing vulnerable libraries)).
Seck does not explicitly disclose: mitigate the identified one or more potential vulnerabilities in the initial corpus of the source code by modifying the retrieved initial corpus of source code to produce a first training corpus of source code derived from the retrieved initial corpus of source code; train the AI algorithm using the first training corpus of source code; wherein scanning the initial corpus of source code using the test suite is based on at least one of: malware scanning, dynamic source code analysis, software composition analysis, runtime analysis, and license analysis.

However, Sumedrea, in the same field of endeavor, discloses: mitigate the identified one or more potential vulnerabilities in the initial corpus of the source code by modifying the retrieved initial corpus of source code to produce a first training corpus of source code derived from the retrieved initial corpus of source code; (Sumedrea, Paras. [0018], [0025]-[0026], [0044]-[0048], the initial source code is processed and refactored (i.e., mitigated) to remove vulnerabilities, and this new set becomes the first training corpus for the model. Paras. [0036]-[0041], [0048], disclose modification of the initial source code to remove vulnerabilities. Paras. [0018], [0043]-[0044], disclose that refactored source code (i.e., modified initial source code) becomes new training source code used for training (i.e., a training corpus derived from the initial source code). Paras. [0036]-[0037], [0043], [0048], disclose that the refactored code is generated from the initial source code by removing vulnerabilities and is therefore necessarily derived from the initial source code). Please note that retrieving the source code is taught by Seck.

train the AI algorithm using the first training corpus of source code. (Sumedrea, Paras. [0018], [0025]-[0026], [0044]-[0048], using the first training corpus to train the model).
wherein scanning the initial corpus of source code using the test suite is based on at least one of: malware scanning, dynamic source code analysis, software composition analysis, runtime analysis, and license analysis. (Sumedrea, Paras. [0016]-[0018], [0031]-[0032], [0035]-[0036], [0044]-[0048], scanning is based on source code analysis, analyzing source code text/artifacts, structure, and vulnerability patterns (i.e., at least one of: static source code analysis …..). Para. [0020] discloses scanning initial source code to identify vulnerabilities and performing known signature-based vulnerability detection using a vulnerability database (i.e., malware scanning). Paras. [0033]-[0037] disclose runtime and contextual evaluation, compiling and executing snippets (i.e., dynamic analysis). Para. [0032] discloses checking dependencies against a vulnerability database (i.e., software composition analysis)).

Sumedrea is combinable with Seck because both are from the same field of automated vulnerability detection in source code. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to train a model using a modified corpus of source code, as taught by Sumedrea, with Seck’s method in order to train a model capable of identifying and preventing vulnerabilities in source code and to reduce false positives.

As per claims 10 and 19, claims 10 and 19 encompass the same or similar scope as claim 1. Therefore, claims 10 and 19 are rejected based on the reasons set forth above in rejecting claim 1.

Regarding claim 4, Seck modified by Sumedrea teaches the limitations of claim 1. Sumedrea teaches the limitations of claim 4 as follows:

The system of claim 1, wherein the microprocessor readable and executable instructions further cause the microprocessor to: execute the trained AI algorithm to produce generated source code. (Sumedrea, Paras. [0016]-[0020], [0036]-[0048], and Fig. 4, method 400, which describes executing the trained AI model (AIM) to identify vulnerabilities in source code, generate a candidate refactored set, evaluate and compile it, and produce a final refactored (i.e., generated) source code). The same motivation to combine utilized in claim 1 is equally applicable to the instant claim.

As per claim 13, claim 13 encompasses the same or similar scope as claim 4. Therefore, claim 13 is rejected based on the reasons set forth above in rejecting claim 4.

Regarding claim 5, Seck modified by Sumedrea teaches the limitations of claim 1. Sumedrea teaches the limitations of claim 5 as follows:

The system of claim 4, wherein the microprocessor readable and executable instructions further cause the microprocessor to: scan the generated source code to identify one or more new vulnerabilities introduced into the generated source code; and mitigate the identified one or more new vulnerabilities introduced into the generated source code. (Sumedrea, Paras. [0016]-[0020], [0026]-[0029], [0035]-[0048], and Figs. 2-4, after the model generates/refactors source code, the vulnerability scanner 220 or the security code analysis copilot AIM 190 evaluates the newly generated code to detect possible vulnerabilities (a feedback loop). The system applies patches (i.e., mitigates) and automates refactoring steps to mitigate/remove the identified new vulnerabilities (zero-day vulnerabilities)). The same motivation to combine utilized in claim 1 is equally applicable to the instant claim.

As per claim 14, claim 14 encompasses the same or similar scope as claim 5. Therefore, claim 14 is rejected based on the reasons set forth above in rejecting claim 5.

Regarding claim 8, Seck modified by Sumedrea teaches the limitations of claim 1.
Sumedrea teaches the limitations of claim 8 as follows:

The system of claim 1, wherein the microprocessor readable and executable instructions further cause the microprocessor to: determine that the initial corpus of source code has changed; and in response to determining that the initial corpus of source code has changed: rescan the changed initial corpus of the source code using the test suite to identify one or more new potential vulnerabilities in the changed initial corpus of the source code; mitigate the identified one or more new potential vulnerabilities in the changed initial corpus of the source code to produce a second training corpus of source code; and retrain the AI algorithm using the second training corpus of source code. (Sumedrea, Paras. [0016]-[0020], [0026]-[0029], [0035]-[0048], and Figs. 2-4, after the model generates/refactors source code, the vulnerability scanner 220 or the security code analysis copilot AIM 190 evaluates the newly generated code to detect possible vulnerabilities (a feedback loop). The system applies patches (i.e., mitigates) and automates refactoring steps to mitigate/remove the identified new vulnerabilities (zero-day vulnerabilities)). The same motivation to combine utilized in claim 1 is equally applicable to the instant claim.

As per claim 17, claim 17 encompasses the same or similar scope as claim 8. Therefore, claim 17 is rejected based on the reasons set forth above in rejecting claim 8.

Regarding claim 20, Seck modified by Sumedrea teaches the limitations of claim 19. Sumedrea teaches the limitations of claim 20 as follows:

The non-transient computer readable medium of claim 19, wherein the instructions further cause the microprocessor to: execute the trained AI algorithm to produce generated source code; (Sumedrea, Paras. [0016]-[0020], [0036]-[0048], and Fig. 4, method 400, which describes executing the trained AI model (AIM) to identify vulnerabilities in source code, generate a candidate refactored set, evaluate and compile it, and produce a final refactored (i.e., generated) source code).

scan the generated source code to identify one or more new vulnerabilities introduced into the generated source code; and mitigate the identified one or more new vulnerabilities introduced into the generated source code. (Sumedrea, Paras. [0016]-[0020], [0026]-[0029], [0035]-[0048], and Figs. 2-4, after the model generates/refactors source code, the vulnerability scanner 220 or the security code analysis copilot AIM 190 evaluates the newly generated code to detect possible vulnerabilities (a feedback loop). The system applies patches (i.e., mitigates) and automates refactoring steps to mitigate/remove the identified new vulnerabilities).

Sumedrea is combinable with Seck because both are from the same field of automated vulnerability detection in source code. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to train a model using a modified corpus of source code, as taught by Sumedrea, with Seck’s method in order to train a model capable of identifying and preventing vulnerabilities in source code and to reduce false positives.

Claims 3, 12, and 21 are rejected under 35 U.S.C. 103 over Seck (US 2025/0156531) in view of Sumedrea (US 2025/0139251), and further in view of Jackson (US 2017/0147338).

Regarding claim 3, Seck modified by Sumedrea teaches the limitations of claim 1. Jackson, in the same field of endeavor, teaches the limitations of claim 3 as follows:

The system of claim 1, wherein the initial corpus of source code is filtered based on at least one of: one or more unanalyzed components and a quality of an individual component. (Jackson, Paras.
[0036]-[0040], [0048]-[0053], [0059]-[0061], filtering the corpus of source code is based on individual component vulnerability analysis, filtering components based on risk/quality attributes (i.e., the quality of an individual component). Para. [0056]: “a repository manager can serve …….. as a convenient control point for ensuring the appropriate software components are being used”. Therefore, components that are not analyzed are prevented from entering the repository (i.e., unanalyzed components)).

Jackson is combinable with Seck-Sumedrea because all are from the same field of automated vulnerability detection in source code. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to filter out source code based on unanalyzed components, as taught by Jackson, with Seck-Sumedrea’s method in order to identify insecure code and components and improve the model.

As per claims 12 and 21, claims 12 and 21 encompass the same or similar scope as claim 3. Therefore, claims 12 and 21 are rejected based on the reasons set forth above in rejecting claim 3.

Claims 6-7 and 15-16 are rejected under 35 U.S.C. 103 over Seck (US 2025/0156531) in view of Sumedrea (US 2025/0139251), and further in view of Sabetta (US 2015/0007138).

Regarding claim 6, Seck modified by Sumedrea teaches the limitations of claim 1. Sumedrea teaches the limitations of claim 6 as follows:

The system of claim 1, wherein the microprocessor readable and executable instructions further cause the microprocessor to: rescan the first training corpus of source code [using the updated test suite] to identify one or more new potential vulnerabilities in the first training corpus of source code; mitigate the identified one or more new potential vulnerabilities in the first training corpus of source code to produce a second training corpus of source code; and retrain the AI algorithm using the second training corpus of source code. (Sumedrea, Paras. [0016]-[0020], [0026]-[0029], [0035]-[0048], and Figs. 2-4, after the model generates/refactors source code, the vulnerability scanner 220 or the security code analysis copilot AIM 190 evaluates the newly generated code to detect possible vulnerabilities (a feedback loop). The system applies patches (i.e., mitigates) and automates refactoring steps to mitigate/remove the identified new vulnerabilities). The same motivation to combine utilized in claim 1 is equally applicable to the instant claim.

Sumedrea teaches rescanning a new set of source code, mitigating the identified new potential vulnerabilities in the new set of source code, and retraining the model, but does not explicitly disclose that the new set of source code used to retrain the model is produced based on a change to the test suite: determine that the test suite has been updated; and in response to determining that the test suite has been updated.

However, Sabetta, in the same field of endeavor, teaches: determine that the test suite has been updated; and in response to determining that the test suite has been updated, (Sabetta, Paras. [0024]-[0025], [0037]-[0040], [0047]-[0048], the processor updates the test suite and executes it based on new data. These new data, produced by the change to the test suite, can be fed to the rescanning and retraining model of Sumedrea).

Sabetta is combinable with Seck-Sumedrea because all are from the same field of software testing and quality assurance. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to update a test suite in a software analysis system, as taught by Sabetta, with Seck-Sumedrea’s method in order to ensure that the test coverage of the software meets the criteria and to improve the quality of evaluation of the software.

As per claim 15, claim 15 encompasses the same or similar scope as claim 6. Therefore, claim 15 is rejected based on the reasons set forth above in rejecting claim 6.
Regarding claim 7, Seck modified by Sumedrea teaches the limitations of claim 1. Sabetta, in the same field of endeavor, teaches the limitations of claim 7 as follows:

The system of claim 6, wherein determining that the test suite has been updated is based on a threshold of changes to the updated test suite. (Sabetta, Paras. [0024]-[0025], [0037]-[0040], [0047]-[0048], [0053]-[0058], the update of test suites happens when new test cases are added or quality metrics change (i.e., a threshold of changes)). The same motivation to combine utilized in claim 6 is equally applicable to the instant claim.

As per claim 16, claim 16 encompasses the same or similar scope as claim 7. Therefore, claim 16 is rejected based on the reasons set forth above in rejecting claim 7.

Claims 9, 18, and 22 are rejected under 35 U.S.C. 103 over Seck (US 2025/0156531) in view of Sumedrea (US 2025/0139251), and further in view of Williams (US 2015/0254555).

Regarding claim 9, Seck modified by Sumedrea teaches the limitations of claim 1. Williams, in the same field of endeavor, teaches the limitations of claim 9 as follows:

The system of claim 1, wherein the one or more potential vulnerabilities comprise at least one false positive and wherein the microprocessor readable and executable instructions further cause the microprocessor to: provide feedback to the test suite about the at least one false positive. (Williams, Paras. [0173]-[0178], the system provides feedback on false positive vulnerabilities detected in code).

Williams is combinable with Seck-Sumedrea because all are from the same field of software testing and quality assurance. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to provide feedback regarding false positive vulnerabilities detected in code, as taught by Williams, with Seck-Sumedrea’s method in order to improve the quality of evaluation of the software.
As per claims 18 and 22, claims 18 and 22 encompass the same or similar scope as claim 9. Therefore, claims 18 and 22 are rejected based on the reasons set forth above in rejecting claim 9.

References Considered But Not Relied Upon

Golan (US 2022/0405397) teaches a system of static code analysis that detects a potential security threat for the source code's application. Chan (US 2025/0036778) discloses a model with attention trained to recognize some of the cybersecurity vulnerabilities from the Common Vulnerabilities and Exposures (CVE) list and the Common Weakness Enumeration (CWE) list of software vulnerabilities and weaknesses.

Conclusion

Accordingly, claims 1, 3-10, and 12-22 are rejected. THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PEGAH BARZEGAR whose telephone number is (703)756-4755. The examiner can normally be reached M-F, 9:00 - 5:30. Examiner interviews are available via telephone and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Taghi T Arani, can be reached at 571-272-3787. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patentcenter for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/P.B./Examiner, Art Unit 2438
/TAGHI T ARANI/Supervisory Patent Examiner, Art Unit 2438
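The rejection above turns on a pipeline the claims recite repeatedly: scan a training corpus with a test suite, mitigate flagged vulnerabilities by modifying the corpus, then train the AI model on the result. As a reading aid only, that flow can be sketched in a few lines of Python. Every name below is hypothetical; nothing here is drawn from Seck, Sumedrea, or the application itself, and a real system would refactor flagged code rather than simply drop it.

```python
# Hypothetical sketch of the claim-1 flow: scan a training corpus,
# mitigate flagged snippets, then train on the cleaned corpus.
# All names are illustrative and not taken from the cited references.

def scan(corpus, analyses):
    """Flag indexes of snippets that any configured analysis marks vulnerable."""
    return {i for i, snippet in enumerate(corpus)
            if any(check(snippet) for check in analyses)}

def mitigate(corpus, flagged):
    """Produce a first training corpus by dropping flagged snippets.
    (A real system might refactor rather than drop them.)"""
    return [s for i, s in enumerate(corpus) if i not in flagged]

def train(corpus):
    """Stand-in for training the AI model on the cleaned corpus."""
    return {"trained_on": len(corpus)}

# Toy corpus with one obviously unsafe snippet.
corpus = ["print('hello')", "eval(user_input)", "x = 1 + 2"]
analyses = [lambda s: "eval(" in s]  # crude static-analysis stand-in

flagged = scan(corpus, analyses)                  # {1}
first_training_corpus = mitigate(corpus, flagged)
model = train(first_training_corpus)
print(model)                                      # {'trained_on': 2}
```

The feedback loop of claims 5 and 8 would simply re-run `scan` and `mitigate` on the generated or changed corpus and call `train` again on the second corpus.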

Prosecution Timeline

Dec 08, 2023
Application Filed
Oct 10, 2025
Non-Final Rejection — §103
Jan 12, 2026
Response Filed
Feb 27, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12534225: SATELLITE DISPENSING SYSTEM (granted Jan 27, 2026; 2y 5m to grant)
Patent 12441265: Mechanisms for moving a pod out of a vehicle (granted Oct 14, 2025; 2y 5m to grant)
Patent 12434638: VEHICLE INTERIOR PANEL WITH ONE OR MORE DAMPING PADS (granted Oct 07, 2025; 2y 5m to grant)
Patent 12372654: Adaptive Control of Ladar Systems Using Spatial Index of Prior Ladar Return Data (granted Jul 29, 2025; 2y 5m to grant)
Patent 12365469: AIRCRAFT PROPULSION SYSTEM WITH INTERMITTENT COMBUSTION ENGINE(S) (granted Jul 22, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 67% (80% with interview, +13.6%)
Median Time to Grant: 2y 5m
PTA Risk: Moderate
Based on 166 resolved cases by this examiner. Grant probability derived from career allow rate.
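The projection figures are internally consistent: adding the +13.6-point interview lift to the raw career allow rate (111 granted of 166 resolved) reproduces the 80% "with interview" figure after rounding. How the tool actually combines these numbers is an assumption on our part; the check below only confirms the arithmetic lines up.

```python
# Sanity check of the dashboard arithmetic (assumed, not documented):
# with-interview probability = raw career allow rate + interview lift.
granted, resolved = 111, 166
interview_lift = 0.136            # +13.6 percentage points

base_rate = granted / resolved    # ~0.6687, displayed as 67%
with_interview = base_rate + interview_lift

print(round(base_rate * 100))     # 67
print(round(with_interview * 100))  # 80
```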
