Prosecution Insights
Last updated: April 19, 2026
Application No. 18/215,981

Generative Artificial Intelligence for Source Code Security Vulnerability Inspection and Remediation

Status: Non-Final OA (§103)
Filed: Jun 29, 2023
Examiner: CHOWDHURY, ZIAUL A.
Art Unit: 2192
Tech Center: 2100 (Computer Architecture & Software)
Assignee: State Farm Mutual Automobile Insurance Company
OA Round: 3 (Non-Final)
Grant Probability: 87% (Favorable)
OA Rounds: 3-4
To Grant: 3y 1m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 87% (above average; 473 granted / 544 resolved; +31.9% vs TC avg)
Interview Lift: +36.8% (strong; resolved cases with interview vs. without)
Avg Prosecution: 3y 1m (15 currently pending)
Total Applications: 559 (across all art units)
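The headline figures are simple ratios, so they can be sanity-checked directly; the 87% card is the rounded career allowance rate, and the Tech Center baseline below is merely backed out from the stated +31.9% delta, not taken from any separate source:

```python
granted, resolved = 473, 544

# Career allowance rate: granted / resolved, rounded up to the displayed 87%
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")   # 86.9%, displayed as 87%

# Baseline implied by the +31.9% delta vs the TC average
tc_avg = allow_rate - 0.319
print(f"Implied TC 2100 average: {tc_avg:.1%}")
```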

Statute-Specific Performance

§101: 14.7% (-25.3% vs TC avg)
§103: 49.4% (+9.4% vs TC avg)
§102: 19.9% (-20.1% vs TC avg)
§112: 6.6% (-33.4% vs TC avg)
Tech Center average is an estimate • Based on career data from 544 resolved cases
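One way to read the table above is to back the Tech Center baseline out of each delta; using the rates and deltas exactly as shown, every statute row happens to imply the same 40.0% baseline:

```python
# Per-statute rates and deltas vs the TC average, copied from the table above
stats = {
    "101": (14.7, -25.3),
    "103": (49.4, +9.4),
    "102": (19.9, -20.1),
    "112": (6.6, -33.4),
}

for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta  # baseline implied by the stated delta
    print(f"§{statute}: {rate}% ({delta:+}% vs implied TC avg {tc_avg:.1f}%)")
```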

Office Action

§103
DETAILED ACTION

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This Office action is based on the Request for Continued Examination (RCE) under 37 CFR 1.114 filed on 12/24/2025.

Status of Claims

2. Claims 1, 9 and 15 have been amended. Claims 1-2, 4-10, 12-16 and 18-20 are pending in the application, of which claims 1, 9 and 15 are in independent form, and these claims (1-2, 4-10, 12-16 and 18-20) are subject to the following rejection(s) and/or objection(s) indicated under the sections and subsections of No. 3 below.

Response to the Amendments

3. Regarding the art rejection: with respect to claims 1-20, Applicant's arguments are not persuasive; further, Applicant's amendment necessitated the new grounds of rejection presented in the following art rejection.

Specification

4. The specification is objected to as failing to provide proper antecedent basis for the claimed subject matter. See 37 CFR 1.75(d)(1) and MPEP § 608.01(o). Correction of the following is required: Claims 1, 9 and 15 recite the limitation "communicate the security vulnerability and the severity score to a user associated with the source code when the severity score for the security vulnerability reaches a threshold severity score value"; however, the originally filed disclosure fails to provide adequate information regarding the claimed limitation "reaches a threshold severity score value," or to imply any act of the code vulnerability indicator reaching a threshold severity score value such that the threshold severity score is communicated to a user associated with the source code. The rule that a specification need not disclose what is well known in the art is "merely a rule of supplementation, not a substitute for a basic enabling disclosure." Genentech, 108 F.3d at 1366, 42 USPQ2d at 1005; see also ALZA Corp., 603 F.3d at 940-41, 94 USPQ2d at 1827.
Therefore, the specification must contain the information necessary to enable the novel aspects of the claimed invention. Id. at 941, 94 USPQ2d at 1827; Auto. Technologies, 501 F.3d at 1283-84, 84 USPQ2d at 1115 ("[T]he 'omission of minor details does not cause a specification to fail to meet the enablement requirement. However, when there is no disclosure of any specific starting material or of any of the conditions under which a process can be carried out, undue experimentation is required.'") (quoting Genentech, 108 F.3d at 1366, 42 USPQ2d at 1005). In order to satisfy the enablement requirement, the specification need not contain an example if the invention is otherwise disclosed in such a manner that one skilled in the art will be able to practice it without an undue amount of experimentation. Also, although the specification need not teach what is well known in the art, Applicant cannot rely on the knowledge of one skilled in the art to supply information that is required to enable the novel aspect of the claimed invention when the enabling knowledge is in fact not known in the art. The Federal Circuit has stated that "[i]t is the specification, not the knowledge of one skilled in the art, that must supply the novel aspects of an invention in order to constitute adequate enablement." Auto. Technologies, 501 F.3d at 1283, 84 USPQ2d at 1115 (quoting Genentech, Inc. v. Novo Nordisk A/S, 108 F.3d 1361, 1366, 42 USPQ2d 1001, 1005 (Fed. Cir. 1997)). Applicant is therefore advised to amend the disclosure to present clear support or antecedent basis for the terms appearing in said claims; however, no new matter should be introduced.

Claim Rejections – 35 USC § 103

5. The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

6. Claims 1, 4, 6, 8-9, 13, 15, 17 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Monika Sahu (US Patent Application Publication No. 2024/0330455 A1, hereinafter "Sahu") in view of Hay et al. (US Patent Application Publication No. 2018/0121657 A1, hereinafter "Hay").

Per claim 1: Sahu discloses: A computer system for security vulnerability inspection for a source code to be used in an application (Abstract: methods, apparatus, systems, computing devices, computing entities, and/or the like for detecting and locating vulnerabilities in source code), the computer system comprising: one or more processors; a memory storing executable instructions thereon that, when executed by the one or more processors (At least see ¶[0004]: computing apparatus comprising memory and one or more processors communicatively coupled to the memory), cause the one or more processors to:

send the source code and a prompt for code checking to a machine learning (ML) chatbot to cause an ML model to inspect the source code for security vulnerabilities using test cases for the security vulnerabilities (At least see ¶[0004]: receive one or more source code files; for each of the one or more source code files … comprises one or more program statements associated with one or more vulnerabilities; generate, using a predictive machine learning model, a vulnerability prediction for each of the one or more source code files), wherein test cases are generated by the ML model based on data collected from one or more past source code inspections (At least see ¶[0059]: the term "training source code file" may refer to a computer resource comprising source code associated with test case information that is stored within a given data format; a training source code file may be used to create a training dataset used to train a predictive machine learning model),

determine that there is a security vulnerability in the source code based on a response from the ML chatbot (At least see ¶[0004]: a vulnerability class associated with each location of vulnerable code, wherein (i) the predictive machine learning model is trained based on a training dataset), and

responsive to determining that there is a security vulnerability in the source code (At least see ¶[0004]: one or more locations of vulnerable code in the source code based on the matching, and (b) a vulnerability class associated with each location of vulnerable code): communicate the security vulnerability and the severity score to a user associated with the source code when the severity score for the security vulnerability reaches a threshold severity score value (At least see ¶[0028]: a prediction-based action that can be performed using the predictive data analysis system 101 comprises receiving a request for detecting vulnerabilities in a source code file and displaying a vulnerability prediction for the source code file on a user interface).

Sahu sufficiently discloses the system as set forth above, but Sahu does not explicitly disclose: determine a severity score for the security vulnerability, and communicate the security vulnerability and the severity score to a user associated with the source code when the severity score for the security vulnerability reaches a threshold severity score value.

However, Hay discloses: determine a severity score for the security vulnerability (At least see FIG. 2 with associated text in ¶[0022]), and communicate the security vulnerability and the severity score to a user associated with the source code when the severity score for the security vulnerability reaches a threshold severity score value (At least see FIG. 4, steps 406 and 408, with associated text; i.e., ¶[0012]: static analysis can calculate a risk/severity score for the security vulnerability, and determine a risk score above a risk threshold value).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Hay into Sahu's invention because automated algorithms for detection of application-level vulnerabilities have thus far been highly specialized and limited to code and configuration files while failing to analyze other aspects of software applications corresponding to networks, storage layers, and cryptographic protocols; as such, Hay's teaching can provide static analysis to scan source code for an application to determine if there is a flow of information out of a first database, through an application, and into a sensitive endpoint such as a second database, file system, or a web service, among others. If the incoming data from the first database is neither sanitized nor validated, then the techniques described herein can determine whether access to the first database is restricted to trusted parties and whether data inserted by these parties into the first database is first sanitized or validated (see ¶[0011] and ¶[0013]).
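For context, the claim-1 workflow being mapped (send code plus a prompt to an ML chatbot, score any reported vulnerability, and notify the user only when the score reaches a threshold) can be sketched roughly as follows; the `ask_chatbot` and `notify` callables and the 0-10 severity scale are hypothetical stand-ins, not anything taken from the claims or the cited references:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    vulnerability: str
    severity: float  # hypothetical 0-10 scale, loosely CVSS-like

def inspect_and_notify(source_code, ask_chatbot, notify, threshold=7.0):
    """Send code plus a checking prompt to an ML chatbot, then pass any
    finding whose severity reaches the threshold on to the user."""
    prompt = "Inspect this source code for security vulnerabilities:\n" + source_code
    findings = ask_chatbot(prompt)   # expected to return a list of Finding
    reported = []
    for f in findings:
        if f.severity >= threshold:  # "reaches a threshold severity score value"
            notify(f.vulnerability, f.severity)
            reported.append(f)
    return reported
```

A caller would supply a real chatbot client as `ask_chatbot`; the sketch only fixes the control flow of the disputed "reaches a threshold severity score value" limitation.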
Per claim 4: Sahu discloses: test cases are generated based on security vulnerability announcements in text format, common weakness enumeration format, and/or common vulnerability exposures format (At least see ¶[0080]: one or more vulnerability databases may include a systematic set of test case information comprising test programs in various programming languages associated with a plurality of vulnerability classes; and ¶[0081]: a training source code file describes a computer resource comprising source code associated with test case information that is stored within a given data format).

Per claim 6: Sahu discloses: train the ML model with a training dataset, and validate the ML model with a validation dataset, wherein the training dataset and the validation dataset comprise documents describing one or more security vulnerabilities and example source code lacking the one or more security vulnerabilities (At least see ¶[0110]: determining one or more potential vulnerability candidates by performing static analysis on one or more program statements associated with the one or more training source code files and matching the one or more program statements associated with the one or more training source code files with the one or more syntax features).

Per claim 8: Sahu discloses: the application is an insurance application (At least see ¶[0052]: a source code file may be used to generate a software application or parts of a software application configured to perform one or more functions).
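Claim 4's limitation amounts to normalizing vulnerability announcements (text, CWE, or CVE formats) into test-case seeds. A minimal sketch of that normalization, where the record fields and the `to_test_case` helper are invented for illustration:

```python
# Illustrative advisory records in a CVE/CWE-like shape (field names assumed)
cve_records = [
    {"id": "CVE-2024-0001", "cwe": "CWE-89",
     "summary": "SQL injection via unsanitized query string"},
    {"id": "CVE-2024-0002", "cwe": "CWE-79",
     "summary": "Stored cross-site scripting in comment field"},
]

def to_test_case(record):
    """Turn one advisory record into a test-case seed."""
    return {
        "name": f"test_{record['cwe'].lower().replace('-', '_')}",
        "checks_for": record["summary"],
        "source": record["id"],
    }

test_cases = [to_test_case(r) for r in cve_records]
```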
Per claim 9: Sahu discloses: A computer-implemented method for inspecting source code to be used in an application for security vulnerabilities (Abstract: methods, apparatus, systems, computing devices, computing entities, and/or the like for detecting and locating vulnerabilities in source code), the method comprising:

sending the source code and a prompt for code checking to a machine learning (ML) chatbot to cause an ML model to inspect the source code for security vulnerabilities using test cases for the security vulnerabilities (At least see ¶[0004]: receive one or more source code files; for each of the one or more source code files … comprises one or more program statements associated with one or more vulnerabilities; generate, using a predictive machine learning model, a vulnerability prediction for each of the one or more source code files), wherein test cases are generated, by the ML model, based on data collected from one or more past source code inspections (At least see ¶[0059]: the term "training source code file" may refer to a computer resource comprising source code associated with test case information that is stored within a given data format; a training source code file may be used to create a training dataset used to train a predictive machine learning model);

determining that there is a security vulnerability in the source code based on a response from the ML chatbot (At least see ¶[0004]: a vulnerability class associated with each location of vulnerable code, wherein (i) the predictive machine learning model is trained based on a training dataset); and

responsive to determining that there is a security vulnerability in the source code (At least see ¶[0004]: one or more locations of vulnerable code in the source code based on the matching, and (b) a vulnerability class associated with each location of vulnerable code): communicating the security vulnerability and the severity score to a user associated with the source code when the severity score for the security vulnerability reaches a threshold severity score value (At least see ¶[0028]: a prediction-based action that can be performed using the predictive data analysis system 101 comprises receiving a request for detecting vulnerabilities in a source code file and displaying a vulnerability prediction for the source code file on a user interface).

Sahu sufficiently discloses the method as set forth above, but Sahu does not explicitly disclose: determining a severity score for the security vulnerability, and communicating the security vulnerability and the severity score to a user associated with the source code when the severity score for the security vulnerability reaches a threshold severity score value.

However, Hay discloses: determining a severity score for the security vulnerability (At least see FIG. 2 with associated text in ¶[0022]), and communicating the security vulnerability and the severity score to a user associated with the source code when the severity score for the security vulnerability reaches a threshold severity score value (At least see FIG. 4, steps 406 and 408, with associated text; i.e., ¶[0012]: static analysis can calculate a risk/severity score for the security vulnerability, and determine a risk score above a risk threshold value).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Hay into Sahu's invention because automated algorithms for detection of application-level vulnerabilities have thus far been highly specialized and limited to code and configuration files while failing to analyze other aspects of software applications corresponding to networks, storage layers, and cryptographic protocols; as such, Hay's teaching can provide static analysis to scan source code for an application to determine if there is a flow of information out of a first database, through an application, and into a sensitive endpoint such as a second database, file system, or a web service, among others. If the incoming data from the first database is neither sanitized nor validated, then the techniques described herein can determine whether access to the first database is restricted to trusted parties and whether data inserted by these parties into the first database is first sanitized or validated (see ¶[0011] and ¶[0013]).

Per claim 13: Sahu discloses: training the ML model with a training dataset; and validating the ML model with a validation dataset, wherein the training dataset and the validation dataset comprise documents describing one or more security vulnerabilities and example source code lacking the one or more security vulnerabilities (At least see ¶[0110]: determining one or more potential vulnerability candidates by performing static analysis on one or more program statements associated with the one or more training source code files and matching the one or more program statements associated with the one or more training source code files with the one or more syntax features).
Per claim 15: Sahu discloses: A computer readable storage medium storing non-transitory computer readable instructions for generating customized code for inspecting source code to be used in an application for security vulnerabilities (Abstract: methods, apparatus, systems, computing devices, computing entities, and/or the like for detecting and locating vulnerabilities in source code; and see ¶[0004]: computing apparatus comprising memory and one or more processors communicatively coupled to the memory), wherein the instructions, when executed on one or more processors, cause the one or more processors to:

send the source code and a prompt for code checking to a machine learning (ML) chatbot to cause an ML model to inspect the source code for security vulnerabilities using test cases for the security vulnerabilities (At least see ¶[0004]: receive one or more source code files; for each of the one or more source code files … comprises one or more program statements associated with one or more vulnerabilities; generate, using a predictive machine learning model, a vulnerability prediction for each of the one or more source code files), wherein test cases are generated, by the ML model, based on data collected from one or more past source code inspections (At least see ¶[0059]: the term "training source code file" may refer to a computer resource comprising source code associated with test case information that is stored within a given data format; a training source code file may be used to create a training dataset used to train a predictive machine learning model),

determine that there is a security vulnerability in the source code based on a response from the ML chatbot (At least see ¶[0004]: a vulnerability class associated with each location of vulnerable code, wherein (i) the predictive machine learning model is trained based on a training dataset), and

responsive to determining that there is a security vulnerability in the source code (At least see ¶[0004]: one or more locations of vulnerable code in the source code based on the matching, and (b) a vulnerability class associated with each location of vulnerable code): communicate the security vulnerability and the severity score to a user associated with the source code when the severity score for the security vulnerability reaches a threshold severity score value (At least see ¶[0028]: a prediction-based action that can be performed using the predictive data analysis system 101 comprises receiving a request for detecting vulnerabilities in a source code file and displaying a vulnerability prediction for the source code file on a user interface).

Sahu sufficiently discloses the system as set forth above, but Sahu does not explicitly disclose: determine a severity score for the security vulnerability, and communicate the security vulnerability and the severity score to a user associated with the source code when the severity score for the security vulnerability reaches a threshold severity score value.

However, Hay discloses: determine a severity score for the security vulnerability (At least see FIG. 2 with associated text in ¶[0022]), and communicate the security vulnerability and the severity score to a user associated with the source code when the severity score for the security vulnerability reaches a threshold severity score value (At least see FIG. 4, steps 406 and 408, with associated text; i.e., ¶[0012]: static analysis can calculate a risk/severity score for the security vulnerability, and determine a risk score above a risk threshold value).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Hay into Sahu's invention because automated algorithms for detection of application-level vulnerabilities have thus far been highly specialized and limited to code and configuration files while failing to analyze other aspects of software applications corresponding to networks, storage layers, and cryptographic protocols; as such, Hay's teaching can provide static analysis to scan source code for an application to determine if there is a flow of information out of a first database, through an application, and into a sensitive endpoint such as a second database, file system, or a web service, among others. If the incoming data from the first database is neither sanitized nor validated, then the techniques described herein can determine whether access to the first database is restricted to trusted parties and whether data inserted by these parties into the first database is first sanitized or validated (see ¶[0011] and ¶[0013]).

Per claim 19: Sahu discloses: train the ML model with a training dataset, and validate the ML model with a validation dataset, wherein the training dataset and the validation dataset comprise documents describing one or more security vulnerabilities and example source code lacking the one or more security vulnerabilities (At least see ¶[0110]: determining one or more potential vulnerability candidates by performing static analysis on one or more program statements associated with the one or more training source code files and matching the one or more program statements associated with the one or more training source code files with the one or more syntax features).

7. Claims 2, 5, 10, 12, 16 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Monika Sahu (US Patent Application Publication No. 2024/0330455 A1, hereinafter "Sahu") in view of Cabrera Lozoya et al. (US Patent Application Publication No. 2022/0129261 A1, hereinafter "Cabrera Lozoya").

Per claim 2: Sahu modified by Hay sufficiently discloses the system as set forth above, but Sahu modified by Hay does not explicitly disclose: determine that there is a fix for the security vulnerability, and generate corrected source code that comprises the fix, and wherein the instructions, when executed by the one or more processors, further cause the one or more processors to: communicate the fix to the user, and receive the corrected source code from the ML chatbot.

However, Cabrera Lozoya discloses: determine that there is a fix for the security vulnerability (At least see ¶[0024]: determine the localization of fixes to the source code input to the ML task), and generate corrected source code that comprises the fix (At least see ¶[0014]: all the way from small edits and improvements to the introduction of new features and important fixes of bugs and potential security vulnerabilities), and wherein the instructions, when executed by the one or more processors, further cause the one or more processors to: communicate the fix to the user (At least see ¶[0062]: Type: bug, task, question, improvement; written explanation of the ticket in plain natural language; and ¶[0065]: Discussion: a discussion of users about this issue (a series of comments)), and receive the corrected source code from the ML chatbot (At least see ¶[0070]: source code commits can be related to tickets by including a ticket identifier in the commit message; thus, by crawling source code repositories, it is possible to construct a dataset of commits that are mapped onto the corresponding bug handling support tickets; a commit linked to a ticket is considered as the fix of the issue described in the ticket).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Cabrera Lozoya into Sahu as modified by Hay because Cabrera Lozoya's teaching would generate distributed vector representations of commits of source code to a repository, and source code commit representations can be part of a data corpus referenced by a machine learning (ML) process to perform tasks such as detecting specific source code changes (e.g., changes that introduce new features, fix bugs, or eliminate security vulnerabilities), wherein a source code commit comprising source code, time information, and an associated label is received; such advances are possible due to the explosion of freely available data (on the order of millions or even hundreds of millions of samples) usable by such deep learning techniques (see ¶[0001] through ¶[0004]).

Per claim 5: Cabrera Lozoya also discloses: replace the source code in the application with the corrected source code to generate a new version of the application (At least see ¶[0057]: the actual task is to generate a commit representation useful in detecting source code commits that fix security vulnerabilities in a software application).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Cabrera Lozoya into Sahu as modified by Hay because Cabrera Lozoya's teaching would generate distributed vector representations of commits of source code to a repository, and source code commit representations can be part of a data corpus referenced by a machine learning (ML) process to perform tasks such as detecting specific source code changes (e.g., changes that introduce new features, fix bugs, or eliminate security vulnerabilities), wherein a source code commit comprising source code, time information, and an associated label is received; such advances are possible due to the explosion of freely available data (on the order of millions or even hundreds of millions of samples) usable by such deep learning techniques (see ¶[0001] through ¶[0004]).

Per claim 10: Sahu modified by Hay sufficiently discloses the method as set forth above, but Sahu modified by Hay does not explicitly disclose: determining that there is a fix for the security vulnerability, and generating corrected source code that comprises the fix, and wherein the method further comprises: communicating the fix to the user, and receiving the corrected source code from the ML chatbot.

However, Cabrera Lozoya discloses: determining that there is a fix for the security vulnerability (At least see ¶[0024]: determine the localization of fixes to the source code input to the ML task), and generating corrected source code that comprises the fix (At least see ¶[0014]: all the way from small edits and improvements to the introduction of new features and important fixes of bugs and potential security vulnerabilities), and wherein the method further comprises: communicating the fix to the user (At least see ¶[0062]: Type: bug, task, question, improvement; written explanation of the ticket in plain natural language; and ¶[0065]: Discussion: a discussion of users about this issue (a series of comments)), and receiving the corrected source code from the ML chatbot (At least see ¶[0070]: source code commits can be related to tickets by including a ticket identifier in the commit message; thus, by crawling source code repositories, it is possible to construct a dataset of commits that are mapped onto the corresponding bug handling support tickets; a commit linked to a ticket is considered as the fix of the issue described in the ticket).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Cabrera Lozoya into Sahu as modified by Hay because Cabrera Lozoya's teaching would generate distributed vector representations of commits of source code to a repository, and source code commit representations can be part of a data corpus referenced by a machine learning (ML) process to perform tasks such as detecting specific source code changes (e.g., changes that introduce new features, fix bugs, or eliminate security vulnerabilities), wherein a source code commit comprising source code, time information, and an associated label is received; such advances are possible due to the explosion of freely available data (on the order of millions or even hundreds of millions of samples) usable by such deep learning techniques (see ¶[0001] through ¶[0004]).

Per claim 12: Cabrera Lozoya also discloses: replacing the source code in the application with the corrected source code to generate a new version of the application (At least see ¶[0057]: the actual task is to generate a commit representation useful in detecting source code commits that fix security vulnerabilities in a software application).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Cabrera Lozoya into Sahu as modified by Hay because Cabrera Lozoya's teaching would generate distributed vector representations of commits of source code to a repository, and source code commit representations can be part of a data corpus referenced by a machine learning (ML) process to perform tasks such as detecting specific source code changes (e.g., changes that introduce new features, fix bugs, or eliminate security vulnerabilities), wherein a source code commit comprising source code, time information, and an associated label is received; such advances are possible due to the explosion of freely available data (on the order of millions or even hundreds of millions of samples) usable by such deep learning techniques (see ¶[0001] through ¶[0004]).

Per claim 16: Sahu modified by Hay sufficiently discloses the system as set forth above, but Sahu modified by Hay does not explicitly disclose: determine that there is a fix for the security vulnerability, and generate corrected source code that comprises the fix, and wherein the instructions, when executed by the one or more processors, further cause the one or more processors to: communicate the fix to the user, and receive the corrected source code from the ML chatbot.
However, Cabrera Lozoya discloses: determine that there is a fix for the security vulnerability (At least see ¶[0024]: determine the localization of fixes to the source code input to the ML task), and generate corrected source code that comprises the fix (At least see ¶[0014]: all the way from small edits and improvements to the introduction of new features and important fixes of bugs and potential security vulnerabilities), and wherein the instructions, when executed by the one or more processors, further cause the one or more processors to: communicate the fix to the user (At least see ¶[0062]: Type: bug, task, question, improvement; written explanation of the ticket in plain natural language; and ¶[0065]: Discussion: a discussion of users about this issue (a series of comments)), and receive the corrected source code from the ML chatbot (At least see ¶[0070]: source code commits can be related to tickets by including a ticket identifier in the commit message; thus, by crawling source code repositories, it is possible to construct a dataset of commits that are mapped onto the corresponding bug handling support tickets; a commit linked to a ticket is considered as the fix of the issue described in the ticket).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Cabrera Lozoya into Sahu as modified by Hay because Cabrera Lozoya's teaching would generate distributed vector representations of commits of source code to a repository, and source code commit representations can be part of a data corpus referenced by a machine learning (ML) process to perform tasks such as detecting specific source code changes (e.g., changes that introduce new features, fix bugs, or eliminate security vulnerabilities), wherein a source code commit comprising source code, time information, and an associated label is received; such advances are possible due to the explosion of freely available data (on the order of millions or even hundreds of millions of samples) usable by such deep learning techniques (see ¶[0001] through ¶[0004]).

Per claim 18: Cabrera Lozoya also discloses: replace the source code in the application with the corrected source code to generate a new version of the application (At least see ¶[0057]: the actual task is to generate a commit representation useful in detecting source code commits that fix security vulnerabilities in a software application).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Cabrera Lozoya into Sahu modified by Hay because Cabrera Lozoya's teaching would generate distributed vector representations of commits of source code to a repository, and source code commit representations can be part of a data corpus referenced by a machine learning (ML) process to perform tasks such as detecting specific source code changes (e.g., changes that introduce new features, fix bugs, or eliminate security vulnerabilities), wherein a source code commit comprising source code, time information, and an associated label is received; such advances are possible due to the explosion of freely available data (on the order of millions or even hundreds of millions of samples) usable by such deep learning techniques (please see ¶[0001] through ¶[0004]).

8. Claims 7, 14 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Monika Sahu (US Patent Application Publication No. 2024/0330455 A1, hereinafter Sahu) in view of Wasiq et al. (US Patent No. 10,409,995 B1, hereinafter Wasiq).

Per claim 7: Sahu modified by Hay sufficiently discloses the system as set forth above, but Sahu modified by Hay does not disclose: the security vulnerabilities comprise one or more of SQL injection, LDAP injection, buffer overflows, stack overflows, and cross-site scripting. However, Wasiq discloses: security vulnerabilities comprise one or more of SQL injection, LDAP injection, buffer overflows, stack overflows, and cross-site scripting (At least see Col. 5:56-60 - security review 112 may scan source code of the services for vulnerabilities to structured query language (SQL) injection (e.g., un-trusted input), cross-site scripting (XSS), input/data validation, authentication, authorization, sensitive data, code access security).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Wasiq into Sahu modified by Hay because Wasiq's teaching would provide a security review that may be an examination of source code of one or more software programs, and the security review may be intended to identify mistakes introduced into a software application that could render the software application vulnerable to a computer security breach and/or defective execution, so that the mistakes can be fixed or any damage caused by the mistakes can be mitigated (see Col. 5:46-52).

Per claim 14: Sahu modified by Hay sufficiently discloses the system as set forth above, but Sahu modified by Hay does not disclose: the security vulnerabilities comprise one or more of SQL injection, LDAP injection, buffer overflows, stack overflows, and cross-site scripting. However, Wasiq discloses: security vulnerabilities comprise one or more of SQL injection, LDAP injection, buffer overflows, stack overflows, and cross-site scripting (At least see Col. 5:56-60 - security review 112 may scan source code of the services for vulnerabilities to structured query language (SQL) injection (e.g., un-trusted input), cross-site scripting (XSS), input/data validation, authentication, authorization, sensitive data, code access security). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Wasiq into Sahu modified by Hay because Wasiq's teaching would provide a security review that may be an examination of source code of one or more software programs, and the security review may be intended to identify mistakes introduced into a software application that could render the software application vulnerable to a computer security breach and/or defective execution, so that the mistakes can be fixed or any damage caused by the mistakes can be mitigated (see Col. 5:46-52).
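The claims enumerate concrete vulnerability classes (SQL injection, LDAP injection, buffer overflows, stack overflows, cross-site scripting) that a source-code security review would scan for. As a hedged illustration only, the toy scanner below matches a few of those classes against source text; the signatures and names are simplifications invented for this example and do not reflect how Wasiq's "security review 112" or any real analyzer works.

```python
# Toy pattern scan over source text for a few of the claimed vulnerability
# classes. All signatures are illustrative simplifications.
import re

# Hypothetical signatures for three of the enumerated classes.
SIGNATURES = {
    "SQL injection": re.compile(r"execute\([^)]*\+"),       # string-built query
    "cross-site scripting": re.compile(r"innerHTML\s*="),   # unescaped DOM sink
    "buffer overflow": re.compile(r"\b(strcpy|gets)\("),    # unbounded C copy
}


def scan(source: str) -> list[str]:
    """Return the vulnerability classes whose signature matches `source`."""
    return [name for name, pat in SIGNATURES.items() if pat.search(source)]


findings = scan('cursor.execute("SELECT * FROM t WHERE id=" + uid)')
```

Real reviews combine data-flow analysis, taint tracking, and (per the present application) generative models rather than regular expressions; the sketch only makes the enumerated categories concrete.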
Per claim 20: Sahu modified by Hay sufficiently discloses the system as set forth above, but Sahu modified by Hay does not disclose: the security vulnerabilities comprise one or more of SQL injection, LDAP injection, buffer overflows, stack overflows, and cross-site scripting. However, Wasiq discloses: security vulnerabilities comprise one or more of SQL injection, LDAP injection, buffer overflows, stack overflows, and cross-site scripting (At least see Col. 5:56-60 - security review 112 may scan source code of the services for vulnerabilities to structured query language (SQL) injection (e.g., un-trusted input), cross-site scripting (XSS), input/data validation, authentication, authorization, sensitive data, code access security). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Wasiq into Sahu modified by Hay because Wasiq's teaching would provide a security review that may be an examination of source code of one or more software programs, and the security review may be intended to identify mistakes introduced into a software application that could render the software application vulnerable to a computer security breach and/or defective execution, so that the mistakes can be fixed or any damage caused by the mistakes can be mitigated (see Col. 5:46-52).

CONCLUSION

9. Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZIAUL A. CHOWDHURY, whose telephone number is (571) 270-7750. The examiner can normally be reached 9:30 AM to 6:30 PM, Monday through Friday. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Hyung S.
Sough can be reached at 571-272-6799. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Status information for published applications may be obtained from the Patent Public Search tool (for all users); a link to the Patent Public Search tool is available at www.uspto.gov/PatentPublicSearch. To find a U.S. patent or U.S. patent application publication, open the Patent Public Search tool by selecting "Start search". Type the U.S. patent or U.S. patent application publication number in the "Search" panel without any punctuation, followed by ".pn.". Should you have questions on access to the system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/ZIAUL A CHOWDHURY/ Primary Examiner, Art Unit 2192 02/06/2025

Prosecution Timeline

Jun 29, 2023: Application Filed
Apr 02, 2025: Non-Final Rejection — §103
Jul 22, 2025: Response Filed
Sep 24, 2025: Final Rejection — §103
Dec 04, 2025: Interview Requested
Dec 09, 2025: Applicant Interview (Telephonic)
Dec 09, 2025: Examiner Interview Summary
Dec 24, 2025: Request for Continued Examination
Jan 21, 2026: Response after Non-Final Action
Feb 06, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602312: CONFIGURABLE IDENTIFICATION MECHANISM OF DEBUG PARAMETERS IN MULTI-PROCESS OR MULTI-THREADED DEBUGGING
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12602204: DEVELOPING A SOFTWARE PRODUCT IN A NO-CODE DEVELOPMENT PLATFORM TO ADDRESS A PROBLEM RELATED TO A BUSINESS DOMAIN
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12596344: CONTROL SYSTEM, CONTROL PROGRAM TRANSMISSION METHOD, AND RECORDING MEDIUM
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12591427: PLC-BASED SUPPORT FOR ZERO-DOWNTIME UPGRADES OF CONTROL FUNCTIONS
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12578956: Method and apparatus for firmware patching
Granted Mar 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 87%
With Interview (+36.8%): 99%
Median Time to Grant: 3y 1m
PTA Risk: High
Based on 544 resolved cases by this examiner. Grant probability derived from career allow rate.
