Prosecution Insights
Last updated: April 19, 2026
Application No. 18/211,582

SECURE PROMPT RECOMMENDATION USING REVERSE ENGINEERING SECURITY PROMPTS

Final Rejection §103
Filed: Jun 19, 2023
Examiner: WANG, CHAO
Art Unit: 2439
Tech Center: 2400 — Computer Networks
Assignee: Microsoft Technology Licensing, LLC
OA Round: 2 (Final)
Grant Probability: 80% (Favorable)
OA Rounds: 3-4
To Grant: 2y 11m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 80% (above average), 114 granted / 143 resolved, +21.7% vs TC avg
Interview Lift: +85.8% (resolved cases with interview)
Avg Prosecution: 2y 11m (typical timeline; 23 currently pending)
Total Applications: 166 (across all art units)

Statute-Specific Performance

§101: 15.1% (-24.9% vs TC avg)
§103: 68.7% (+28.7% vs TC avg)
§102: 5.2% (-34.8% vs TC avg)
§112: 2.1% (-37.9% vs TC avg)
Based on career data from 143 resolved cases; Tech Center averages are estimates.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This Office Action is in response to the Amendment filed on 10/16/2025. In the instant Amendment, claims 1-3, 6-11, 13, 16, and 18-20 have been amended. Claims 4-5 and 17 have been cancelled without prejudice. Claims 1, 9, and 16 are independent claims. Claims 1-3, 6-16, and 18-20 have been examined and are pending. This Action is made FINAL.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 01/23/2026 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Response to Arguments

The rejections of claims 1-3, 6-16, and 18-20 under 35 U.S.C. § 101 are withdrawn as the claims have been amended. Applicants' arguments with respect to claims 1-3, 6-16, and 18-20 have been considered but are moot in view of the new ground(s) of rejection.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary.
Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 6, 8-12, 16, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Ganz et al. ("Ganz," US 20240184892, filed on 12/12/2022) in view of Dinh et al. ("Dinh," US 20210124830, published on 04/29/2021).

Regarding Claim 1; Ganz discloses a method comprising: receiving a potentially vulnerable prompt that is estimated to cause a source code generation machine learning model to generate a security vulnerability (par 0018; user input inserted into a string used for a database query without checking if the input contains control characters that modify the expected operation of the database.
Machine-learning models trained to detect these vulnerabilities by accepting source code as input [i.e., according to the specification, par 0031; potentially vulnerable prompts 340 are provided as input to code generation model] and outputting a probability that each of a set of vulnerabilities exists in the source code; par 0019; the machine-learning models are trained using a training set. Each element of the training set is an input for the machine learning model (e.g., an input data object). By processing the training set, the internal variables of the machine learning model are adjusted so that the error rate of the machine learning model is minimized); generating a potentially vulnerable piece of source code by providing the potentially vulnerable prompt to the source code generation machine learning model (par 0018; user input inserted into a string used for a database query without checking if the input contains control characters that modify the expected operation of the database. Machine-learning models trained to detect these vulnerabilities by accepting source code as input [] identify one or more locations within the source code that are likely to cause the vulnerability. Directed fuzzing provides a range of inputs to source code. The inputs that cause the source code to fail are detected and the portions of the source code that were vulnerable are identified); receiving a confirmation that the potentially vulnerable piece of source code includes the security vulnerability (par 0018; user input inserted into a string used for a database query without checking if the input contains control characters that modify the expected operation of the database. Machine-learning models trained to detect these vulnerabilities by accepting source code as input [] identify one or more locations within the source code that are likely to cause the vulnerability. Directed fuzzing provides a range of inputs to source code.
The inputs that cause the source code to fail are detected and the portions of the source code that were vulnerable are identified; par 0069; indicates on a user interface the subset of the target locations at which the vulnerability was empirically determined to exist. For example, a user interface provided that shows the source code 400 of FIG. 4 with the lines of code that were confirmed to contain vulnerabilities in operation 730 highlighted); generating a secure prompt (par 0074; based on the empirical measurement and the source code, the testing server trains the machine-learning model. For example, a vector indicating which vulnerabilities were determined to exist in operation provided as the label for the source code. The labeled source code used to train the machine-learning model). Ganz discloses generating a secure prompt as recited above, but do not explicitly disclose generating a secure prompt that generates a secure source code that performs a same functionality as the potentially vulnerable piece of source code, wherein the generated secure source code does not include the security vulnerability; and storing an association between the potentially vulnerable prompt and the secure prompt in a prompt store, wherein a user-supplied prompt is matched to the potentially vulnerable prompt in the prompt store and replaced at least in part with the secure prompt, wherein providing the secure prompt to an individual source code generation machine learning model generates an individual secure source code that is incorporated into a source code file. 
However, in an analogous art, Dinh discloses code vulnerability remediation system/method that includes: generating a secure prompt that generates a secure source code that performs a same functionality as the potentially vulnerable piece of source code, wherein the generated secure source code does not include the security vulnerability (Dinh: par 0073; identifying, based on the comparing, a code fragment of the plurality of code fragments matching at least the portion of the code comprising the at least one vulnerability. The method includes executing a solution of the plurality of solutions corresponding to the identified code fragment to prevent the at least one vulnerability in at least the portion of the code. Executing the solution include generating new code without the at least one vulnerability. One or more machine learning algorithms may be applied to update the knowledge base with data corresponding to the generation of the new code. The method can also include determining a programming language of the code, and sanitizing at least the portion of the code comprising the at least one vulnerability. 
Sanitizing at least the portion of the code comprising the at least one vulnerability can include removing one or more comments from at least the portion of the code comprising the at least one vulnerability); and storing an association between the potentially vulnerable prompt and the secure prompt in a prompt store, wherein a user-supplied prompt is matched to the potentially vulnerable prompt in the prompt store and replaced at least in part with the secure prompt, wherein providing the secure prompt to an individual source code generation machine learning model generates an individual secure source code that is incorporated into a source code file (Dinh: par 0030; the code corpus, is configured to store and record information relating to known vulnerabilities and actions taken to resolve the vulnerabilities; par 0038; receives programming code (e.g., source code) from one or more user devices [] determine whether the code includes vulnerabilities; par 0051; the code repair/building component of the platform uses the solution associated the matching code vector representation from the code corpus to remove the vulnerability from the inputted code. The matching code vector representation in the code corpus will have a solution code snippet associated therewith that eliminates the code vulnerability. The code repair/building component uses this associated solution to modify the incoming code to remove the vulnerability and build new code that does not include the vulnerability; par 0052; if the building of the new code is successful, a CD pipeline is executed where a final version of the code is reviewed, staged and produced). 
Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention to combine the teachings of Dinh with the method/system of Ganz to include generating a secure prompt that generates a secure source code that performs a same functionality as the potentially vulnerable piece of source code, wherein the generated secure source code does not include the security vulnerability; and storing an association between the potentially vulnerable prompt and the secure prompt in a prompt store, wherein a user-supplied prompt is matched to the potentially vulnerable prompt in the prompt store and replaced at least in part with the secure prompt, wherein providing the secure prompt to an individual source code generation machine learning model generates an individual secure source code that is incorporated into a source code file. One would have been motivated to determine whether at least a portion of the code comprises at least one vulnerability, and comparing at least the portion of the code comprising the at least one vulnerability to a knowledge base (Dinh: abstract).

Regarding Claim 2; The combination of Ganz and Dinh disclose the method of claim 1, Ganz discloses wherein the potentially vulnerable prompt comprises a plurality of potentially vulnerable prompts, wherein generating the potentially vulnerable piece of source code comprises generating a plurality of potentially vulnerable pieces of source code from the plurality of potentially vulnerable prompts (Ganz: par 0018; user input inserted into a string used for a database query without checking if the input contains control characters that modify the expected operation of the database. Machine-learning models trained to detect these vulnerabilities by accepting source code as input [] identify one or more locations within the source code that are likely to cause the vulnerability. Directed fuzzing provides a range of inputs to source code.
The inputs that cause the source code to fail are detected and the portions of the source code that were vulnerable are identified), and wherein the confirmation that the potentially vulnerable piece of source code includes the security vulnerability comprises identifying combinations of the plurality of potentially vulnerable prompts and the plurality of potentially vulnerable pieces of source code that cause the source code generation machine learning model to generate the security vulnerability (Ganz: par 0018; user input inserted into a string used for a database query without checking if the input contains control characters that modify the expected operation of the database. Machine-learning models trained to detect these vulnerabilities by accepting source code as input [] identify one or more locations within the source code that are likely to cause the vulnerability. Directed fuzzing provides a range of inputs to source code. The inputs that cause the source code to fail are detected and the portions of the source code that were vulnerable are identified; par 0069; indicates on a user interface the subset of the target locations at which the vulnerability was empirically determined to exist. For example, a user interface provided that shows the source code 400 of FIG. 4 with the lines of code that were confirmed to contain vulnerabilities in operation 730 highlighted). Regarding Claim 3; The combination of Ganz and Dinh disclose the method of claim 2, Ganz discloses wherein generating the secure prompt comprises generating secure prompts for the combinations of the plurality of potentially vulnerable prompts and the plurality of potentially vulnerable pieces of source code (Ganz: par 0018; user input inserted into a string used for a database query without checking if the input contains control characters that modify the expected operation of the database. 
Machine-learning models trained to detect these vulnerabilities by accepting source code as input [] identify one or more locations within the source code that are likely to cause the vulnerability. Directed fuzzing provides a range of inputs to source code. The inputs that cause the source code to fail are detected and the portions of the source code that were vulnerable are identified). Regarding Claim 6; The combination of Ganz and Dinh disclose the method of claim 1, wherein the potentially vulnerable prompt is generated by providing the source code generation machine learning model with a piece of source code that includes the security vulnerability, pieces of vulnerable source code, and prompts that caused the source code generation machine learning model to generate the pieces of vulnerable source code (Ganz: par 0018; user input inserted into a string used for a database query without checking if the input contains control characters that modify the expected operation of the database. Machine-learning models trained to detect these vulnerabilities by accepting source code as input [] identify one or more locations within the source code that are likely to cause the vulnerability. Directed fuzzing provides a range of inputs to source code. The inputs that cause the source code to fail are detected and the portions of the source code that were vulnerable are identified; par 0069; indicates on a user interface the subset of the target locations at which the vulnerability was empirically determined to exist. For example, a user interface provided that shows the source code 400 of FIG. 4 with the lines of code that were confirmed to contain vulnerabilities in operation 730 highlighted). 
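The core flow the examiner maps across claims 1, 9, and 16 (matching a user-supplied prompt against a store of known vulnerable prompts and substituting the associated secure prompt) can be illustrated with a minimal, hypothetical sketch. The function name, store contents, and the 0.8 threshold are invented for illustration, and `difflib`'s similarity ratio stands in for the string comparison, string distance, or embedding distance matching options recited in claim 19:

```python
import difflib

# Hypothetical prompt store: maps a known potentially vulnerable prompt
# to its secure rewrite (claim 1's stored association).
PROMPT_STORE = {
    "build a sql query from user input":
        "build a sql query from user input using parameterized statements",
}

def match_and_replace(user_prompt: str, threshold: float = 0.8) -> str:
    """Return the secure prompt when the user-supplied prompt is close
    enough to a stored vulnerable prompt; otherwise pass it through.
    difflib's ratio is a stand-in for the distance metrics of claim 19."""
    best_key, best_score = None, 0.0
    for vuln_prompt in PROMPT_STORE:
        score = difflib.SequenceMatcher(
            None, user_prompt.lower(), vuln_prompt).ratio()
        if score > best_score:
            best_key, best_score = vuln_prompt, score
    if best_key is not None and best_score >= threshold:
        return PROMPT_STORE[best_key]
    return user_prompt
```

In the claimed arrangement, the returned secure prompt, rather than the original user-supplied prompt, would then be fed to the code generation model, and the resulting secure source code incorporated into the source code file.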
Regarding Claim 8; The combination of Ganz and Dinh disclose the method of claim 1, Ganz discloses wherein the potentially vulnerable piece of source code is confirmed to include the security vulnerability based on an analysis from a security analyzer (Ganz: par 0018; user input inserted into a string used for a database query without checking if the input contains control characters that modify the expected operation of the database. Machine-learning models trained to detect these vulnerabilities by accepting source code as input [] identify one or more locations within the source code that are likely to cause the vulnerability; par 0069; indicates on a user interface the subset of the target locations at which the vulnerability was empirically determined to exist. For example, a user interface provided that shows the source code 400 of FIG. 4 with the lines of code that were confirmed to contain vulnerabilities in operation 730 highlighted);

Regarding Claim 9; This claim recites a system that performs the same steps as the method of Claim 1, and has limitations that are similar to those of Claim 1; it is thus rejected with the same rationale applied against claim 1.

Regarding Claim 10; The combination of Ganz and Dinh disclose the system of claim 9, Ganz discloses wherein the reflection technique provides the code generation machine learning model with the potentially vulnerable prompt and a prompt to identify security vulnerabilities that would be generated by the potentially vulnerable prompt (Ganz: par 0018; user input inserted into a string used for a database query without checking if the input contains control characters that modify the expected operation of the database. Machine-learning models trained to detect these vulnerabilities by accepting source code as input [] identify one or more locations within the source code that are likely to cause the vulnerability. Directed fuzzing provides a range of inputs to source code.
The inputs that cause the source code to fail are detected and the portions of the source code that were vulnerable are identified; par 0069; indicates on a user interface the subset of the target locations at which the vulnerability was empirically determined to exist. For example, a user interface provided that shows the source code 400 of FIG. 4 with the lines of code that were confirmed to contain vulnerabilities in operation 730 highlighted);

Regarding Claim 11; The combination of Ganz and Dinh disclose the system of claim 10, Dinh discloses wherein generating the secure prompt comprises providing the code generation machine learning model with a prompt to fix the potentially vulnerable prompt (Dinh: par 0073; identifying, based on the comparing, a code fragment of the plurality of code fragments matching at least the portion of the code comprising the at least one vulnerability. The method includes executing a solution of the plurality of solutions corresponding to the identified code fragment to prevent the at least one vulnerability in at least the portion of the code. Executing the solution includes generating new code without the at least one vulnerability. One or more machine learning algorithms may be applied to update the knowledge base with data corresponding to the generation of the new code. The method can also include determining a programming language of the code, and sanitizing at least the portion of the code comprising the at least one vulnerability. Sanitizing at least the portion of the code comprising the at least one vulnerability can include removing one or more comments from at least the portion of the code comprising the at least one vulnerability). The motivation is the same as that of claim 9 above.

Regarding Claim 12; The combination of Ganz and Dinh disclose the system of claim 11, Dinh discloses wherein fixing the potentially vulnerable prompt comprises creating a prompt that generates a piece of source code without the security vulnerability that performs a same functionality as a piece of source code generated by the potentially vulnerable prompt (Dinh: par 0073; identifying, based on the comparing, a code fragment of the plurality of code fragments matching at least the portion of the code comprising the at least one vulnerability. The method includes executing a solution of the plurality of solutions corresponding to the identified code fragment to prevent the at least one vulnerability in at least the portion of the code. Executing the solution includes generating new code without the at least one vulnerability. One or more machine learning algorithms may be applied to update the knowledge base with data corresponding to the generation of the new code. The method can also include determining a programming language of the code, and sanitizing at least the portion of the code comprising the at least one vulnerability. Sanitizing at least the portion of the code comprising the at least one vulnerability can include removing one or more comments from at least the portion of the code comprising the at least one vulnerability; par 0052; if the building of the new code is successful, a CD pipeline is executed where a final version of the code is reviewed, staged and produced). The motivation is the same as that of claim 9 above.

Regarding Claim 16; This claim recites a computer-readable storage medium that performs the same steps as the method of Claim 1, and has limitations that are similar to those of Claim 1; it is thus rejected with the same rationale applied against claim 1.
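The reflection and fix prompting recited in claims 10-12 above (asking the code generation model which vulnerabilities a prompt would produce, then asking it for a functionally equivalent secure rewrite) can be sketched as simple prompt builders. These helpers and their wording are hypothetical illustrations, not taken from either reference:

```python
# Hypothetical prompt builders illustrating the reflection (claim 10) and
# fix (claims 11-12) techniques; function names and wording are invented.

def build_reflection_prompt(vulnerable_prompt: str) -> str:
    """Ask the model which security vulnerabilities code generated from
    the given prompt would contain (the claim 10 reflection step)."""
    return (
        "List the security vulnerabilities that source code generated "
        "from the following prompt would contain:\n" + vulnerable_prompt
    )

def build_fix_prompt(vulnerable_prompt: str) -> str:
    """Ask the model for a rewritten prompt whose generated code keeps
    the same functionality but omits the vulnerability (claims 11-12)."""
    return (
        "Rewrite the following prompt so that the generated source code "
        "performs the same functionality without the security "
        "vulnerability:\n" + vulnerable_prompt
    )
```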
Regarding Claim 18; The combination of Ganz and Dinh disclose the computer-readable storage medium of claim 16, Ganz discloses wherein the user-supplied prompt comprises a portion of the source code file before a location in the source code file where the generated individual secure source code is inserted (Ganz: par 0018; user input inserted into a string used for a database query without checking if the input contains control characters that modify the expected operation of the database. Machine-learning models trained to detect these vulnerabilities by accepting source code as input [] identify one or more locations within the source code that are likely to cause the vulnerability. Directed fuzzing provides a range of inputs to source code. The inputs that cause the source code to fail are detected and the portions of the source code that were vulnerable are identified; par 0069; indicates on a user interface the subset of the target locations at which the vulnerability was empirically determined to exist. For example, a user interface provided that shows the source code 400 of FIG. 4 with the lines of code that were confirmed to contain vulnerabilities in operation 730 highlighted). Regarding Claim 19; The combination of Ganz and Dinh disclose the computer-readable storage medium of claim 16, Ganz discloses wherein the user-supplied prompt matches the potentially vulnerable prompt based on a string comparison, a string distance algorithm, or a distance comparison of embeddings of the user- supplied prompt and the potentially vulnerable prompt generated by a machine learning model (Ganz: par 0018; user input inserted into a string used for a database query without checking if the input contains control characters that modify the expected operation of the database. Machine-learning models trained to detect these vulnerabilities by accepting source code as input [] identify one or more locations within the source code that are likely to cause the vulnerability. 
Directed fuzzing provides a range of inputs to source code. The inputs that cause the source code to fail are detected and the portions of the source code that were vulnerable are identified; par 0069; indicates on a user interface the subset of the target locations at which the vulnerability was empirically determined to exist. For example, a user interface provided that shows the source code 400 of FIG. 4 with the lines of code that were confirmed to contain vulnerabilities in operation 730 highlighted; par 0081; wherein the empirical identification of the subset of the target locations is based on a mean crash distance between the one or more target locations and locations where the source code crashes). Regarding Claim 20; The combination of Ganz and Dinh disclose the computer-readable storage medium of claim 16, Ganz discloses wherein the confirmation that the potentially vulnerable piece of source code includes the security vulnerability is based on a determination that a combination of the potentially vulnerable piece of source code and the potentially vulnerable prompt includes the security vulnerability (Ganz: par 0018; user input inserted into a string used for a database query without checking if the input contains control characters that modify the expected operation of the database. Machine-learning models trained to detect these vulnerabilities by accepting source code as input [] identify one or more locations within the source code that are likely to cause the vulnerability. Directed fuzzing provides a range of inputs to source code. The inputs that cause the source code to fail are detected and the portions of the source code that were vulnerable are identified; par 0069; indicates on a user interface the subset of the target locations at which the vulnerability was empirically determined to exist. For example, a user interface provided that shows the source code 400 of FIG. 
4 with the lines of code that were confirmed to contain vulnerabilities in operation 730 highlighted), and wherein the potentially vulnerable prompt is combined with the potentially vulnerable piece of source code by appending the potentially vulnerable piece of source code to the potentially vulnerable prompt (Ganz: par 0018; user input inserted into a string used for a database query without checking if the input contains control characters that modify the expected operation of the database. Machine-learning models trained to detect these vulnerabilities by accepting source code as input [] identify one or more locations within the source code that are likely to cause the vulnerability. Directed fuzzing provides a range of inputs to source code. The inputs that cause the source code to fail are detected and the portions of the source code that were vulnerable are identified); Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Ganz et al. (US 20240184892) in view of Dinh et al. (US 20210124830), and further in view of ZHANG et al. (“ZHANG,” CN 111722998 B, filed on 03/21/2019). Regarding Claim 7; The combination of Ganz and Dinh disclose the method of claim 6, Ganz discloses wherein the potentially vulnerable prompt is generated by providing the source code generation machine learning model with pieces of vulnerable source code and corresponding prompts followed by the piece of source code that includes the security vulnerability (Ganz: par 0018; user input inserted into a string used for a database query without checking if the input contains control characters that modify the expected operation of the database. Machine-learning models trained to detect these vulnerabilities by accepting source code as input [] identify one or more locations within the source code that are likely to cause the vulnerability. Directed fuzzing provides a range of inputs to source code. 
The inputs that cause the source code to fail are detected and the portions of the source code that were vulnerable are identified; par 0019; the machine-learning models are trained using a training set. Each element of the training set is an input for the machine learning model (e.g., an input data object). By processing the training set, the internal variables of the machine learning model are adjusted so that the error rate of the machine learning model is minimized). The combination of Ganz and Dinh disclose wherein the potentially vulnerable prompt is generated as recited above, but do not explicitly disclose pairs of pieces of vulnerable code. However, in an analogous art, ZHANG discloses code quality control system/method that includes: pairs of pieces of vulnerable code (ZHANG: page 8, par 2; pair of defect codes and repair codes in the change codes, and when there are multiple modified code segments). Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention to combine the teachings of ZHANG with the method/system of Ganz and Dinh to include pairs of pieces of vulnerable code. One would have been motivated to match result of the code to be evaluated and the code defect template, providing the code repairing template corresponding to the code defect template, so as to repair the code to be evaluated (ZHANG: abstract). Claims 13-14 are rejected under 35 U.S.C. 103 as being unpatentable over Ganz et al. (US 20240184892) in view of Dinh et al. (US 20210124830), and further in view of Kolychev et al. (“Kolychev,” US 20190377880, published on 12/12/2019). 
Regarding Claim 13; The combination of Ganz and Dinh disclose the system of claim 9, Dinh discloses wherein the source code generation machine learning model comprises a model trained on a corpus of source code training data (Dinh: par 0033; the code corpus is a dynamic knowledge base which is continuously updated and modified based on input from an AI/ML engine which, as described further herein below, uses AI/ML techniques to learn code vulnerabilities not previously in the code corpus and the corresponding solutions to remove the code vulnerabilities). The combination of Ganz and Dinh disclose wherein the source code generation machine learning model comprises a model trained on a corpus of source code training data as recited above, but do not explicitly disclose a multimodal model. However, in an analogous art, Kolychev discloses a machine learning system/method that includes: a multimodal model (Kolychev: par 0066; multimodal systems can enable a user to provide multiple types of input/output to communicate with computing system. Computing system can include communications interface, which can generally govern and manage the user input and system output). Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention to combine the teachings of Kolychev with the method/system of Ganz and Dinh to include a multimodal model. One would have been motivated to automate verifications of potential vulnerabilities of one or more sites or code utilizing one or more neural networks (Kolychev: abstract).

Regarding Claim 14; The combination of Ganz, Dinh, and Kolychev disclose the system of claim 13, Ganz discloses wherein the source code training data contains the security vulnerability (Ganz: par 0018; user input inserted into a string used for a database query without checking if the input contains control characters that modify the expected operation of the database.
Machine-learning models trained to detect these vulnerabilities by accepting source code as input [] identify one or more locations within the source code that are likely to cause the vulnerability. Directed fuzzing provides a range of inputs to source code. The inputs that cause the source code to fail are detected and the portions of the source code that were vulnerable are identified; par 0069; indicates on a user interface the subset of the target locations at which the vulnerability was empirically determined to exist. For example, a user interface provided that shows the source code 400 of FIG. 4 with the lines of code that were confirmed to contain vulnerabilities in operation 730 highlighted); Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Ganz et al. (US 20240184892) in view of Dinh et al. (US 20210124830), and further in view of MATROSOV et al. (“MATROSOV,” US 20190370473, published on 12/05/2019). Regarding Claim 15; The combination of Ganz and Dinh disclose the system of claim 9, Ganz discloses wherein the potentially vulnerable prompt comprises a user-provided portion (Ganz: par 0018; user input inserted into a string used for a database query without checking if the input contains control characters that modify the expected operation of the database. Machine-learning models trained to detect these vulnerabilities by accepting source code as input [] identify one or more locations within the source code that are likely to cause the vulnerability. Directed fuzzing provides a range of inputs to source code. The inputs that cause the source code to fail are detected and the portions of the source code that were vulnerable are identified); The combination of Ganz and Dinh disclose wherein the potentially vulnerable prompt comprises a user-provided portion recited above, but do not explicitly disclose a hidden portion. 
However, in an analogous art, MATROSOV discloses a vulnerability detection system/method that includes a hidden portion (MATROSOV: par 0033; the developer could provide one or more independent portions of source code and avoid providing all of the source code. If the source code for a given application (or any portions thereof) is confidential or otherwise unavailable, the code analyzer can still, in some embodiments, detect vulnerabilities based on other portions of that source code and/or the associated compiled binary). Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teachings of MATROSOV with the method/system of Ganz and Dinh to include a hidden portion. One would have been motivated to implement machine learning to detect vulnerabilities in computer code, where the code analyzer trains a machine learning model using training vectors that characterize vulnerable programming patterns (MATROSOV: abstract).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHAO WANG, whose telephone number is (313)446-6644. The examiner can normally be reached Monday-Friday, 7:30-4:30 PM EST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Luu Pham, can be reached at (571)270-5002. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/C.W./Examiner, Art Unit 2439
/LUU T PHAM/Supervisory Patent Examiner, Art Unit 2439

Prosecution Timeline

Jun 19, 2023: Application Filed
Jul 14, 2025: Non-Final Rejection (§103)
Oct 06, 2025: Examiner Interview Summary
Oct 16, 2025: Response Filed
Feb 06, 2026: Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596797: IDENTIFY POTENTIAL PATTERNS OF COMPROMISE ON LOG FILES
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12572646: EXECUTION PROTECTION USING DATA COLOURING
Granted Mar 10, 2026 (2y 5m to grant)

Patent 12547708: Known-Deployed File Metadata Repository and Analysis Engine
Granted Feb 10, 2026 (2y 5m to grant)

Patent 12536275: SYSTEM FOR DETECTION OF UNAUTHORIZED COMPUTER CODE USING AN ARTIFICIAL INTELLIGENCE-BASED ANALYZER
Granted Jan 27, 2026 (2y 5m to grant)

Patent 12511397: SECURE FIRMWARE UPLOAD
Granted Dec 30, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 80%
With Interview: 99% (+85.8% lift)
Median Time to Grant: 2y 11m
PTA Risk: Moderate
Based on 143 resolved cases by this examiner. Grant probability derived from career allow rate.
