Prosecution Insights
Last updated: April 19, 2026
Application No. 18/305,940

MALICIOUS PATTERN MATCHING USING GRAPH NEURAL NETWORKS

Final Rejection — §103, §112
Filed: Apr 24, 2023
Examiner: RONI, SYED A
Art Unit: 2432
Tech Center: 2400 — Computer Networks
Assignee: Avast Software s.r.o.
OA Round: 2 (Final)
Grant Probability: 82% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 9m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 82% — above average (537 granted / 655 resolved; +24.0% vs TC avg)
Interview Lift: +22.0% (strong) — comparing resolved cases with vs. without an interview
Avg Prosecution: 2y 9m typical timeline (26 applications currently pending)
Total Applications: 681 career history, across all art units
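The card metrics above are simple ratios over the examiner's resolved cases. A quick sanity check of the arithmetic, treating the "+24.0% vs TC avg" figure as a percentage-point delta (an assumption about how the dashboard computes it):

```python
granted, resolved = 537, 655        # examiner career totals from the card above
allow_rate = granted / resolved     # career allow rate
print(f"career allow rate: {allow_rate:.1%}")   # 82.0%

# Assumption: the "+24.0% vs TC avg" delta is in percentage points,
# which would imply a Tech Center average near 58%.
implied_tc_avg = allow_rate - 0.240
print(f"implied TC average: {implied_tc_avg:.1%}")
```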

Statute-Specific Performance

§101: 14.5% (-25.5% vs TC avg)
§103: 33.1% (-6.9% vs TC avg)
§102: 31.1% (-8.9% vs TC avg)
§112: 10.9% (-29.1% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 655 resolved cases

Office Action

§103 §112
DETAILED ACTION

713.09 Interviews Between Final Rejection and Notice of Appeal [R-08.2017]

Normally, one interview after final rejection is permitted in order to place the application in condition for allowance or to resolve issues prior to appeal. However, prior to the interview, the intended purpose and content of the interview should be presented briefly, preferably in writing. Such an interview may be granted if the examiner is convinced that disposal or clarification for appeal may be accomplished with only nominal further consideration. Interviews merely to restate arguments of record or to discuss new limitations which would require more than nominal reconsideration or new search should be denied. See MPEP § 714.13. Interviews may be held after the expiration of the shortened statutory period and prior to the maximum permitted statutory period of 6 months without an extension of time. See MPEP § 706.07(f). A second or further interview after a final rejection may be held if the examiner is convinced that it will expedite the issues for appeal or disposal of the application. For interviews after notice of appeal, see MPEP § 1204.03. Interview time will be revised to a limit of 1 hour per new application or RCE (utility)/CPA (design), when during prosecution, the examiner conducts an interview. When more than one interview is needed in an application, supervisors will have the flexibility to approve additional time and ensure that the interviews are being used to advance prosecution.
Authorization for Internet Communications

The examiner encourages Applicant to submit an authorization to communicate with the examiner via the Internet by making the following statement (from MPEP 502.03): “Recognizing that Internet communications are not secure, I hereby authorize the USPTO to communicate with the undersigned and practitioners in accordance with 37 CFR 1.33 and 37 CFR 1.34 concerning any subject matter of this application by video conferencing, instant messaging, or electronic mail. I understand that a copy of these communications will be made of record in the application file.” Please note that the above statement can only be submitted via Central Fax (not Examiner's Fax), regular postal mail, or EFS-Web using PTO/SB/439.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

In response to the claim amendments in view of the Remarks filed 12/17/2025, the claim objections have been withdrawn. In response to the claim amendments in view of the Remarks, the § 101 rejection has been withdrawn.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 2–5 and 9–12 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. The term “degree of relatedness” in claims 2–3 and 9–10 is a relative term which renders the claims indefinite. The term “degree of relatedness” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. Claims 4–5 and 11–12 are dependent claims and thus also rejected.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1–15 are rejected under 35 U.S.C. 103 as being unpatentable over prior art of record, Zong et al. (US 2021/0089652 A1) (hereinafter “Zong”) in view of the prior art of record, Li et al. (US 2019/0354689 A1) (hereinafter “Li”).
Regarding claim 1, Zong discloses; a method of identifying malicious activity in a sequence of computer instructions [i.e., a method of identifying malware (page 2, para 0022), (see figure 2)], comprising: identifying a plurality of behaviors [i.e., actions within the computer system including information relating to an originating process, a target, and a type of action (page 2, para 0022) i.e., creating a process, opening a file, etc., (page 2, para 0021)] of the sequence of computer instructions [i.e., collects syscall 120 information from the application on a computer system…these syscall 120 can reflect any appropriate action within the computer system, and can include…type of action (page 2, para 0022), (see figure 1 and reference 202 of figure 2)] during execution of the sequence of computer instructions on a computer system [i.e., during actual execution of the program code (page 3, para 0039) i.e., a first syscall graph is generated by a first application (page 1, para 0004) Note; a syscall graph is derived from execution of a program, i.e., a sequence of instructions producing system calls.
A sequence of computer instructions is reasonably interpreted as an executing program whose behavior is observable via system calls, i.e., collect syscall information, i.e., syscall graphs are generated by a running application. Note; collecting syscall information generated during execution of an application, which corresponds to monitoring instructions executing at runtime]; representing the plurality of identified behaviors as a graph [i.e., generating a syscall graph for the various software applications, with nodes of the syscall graph representing system entities (e.g., processes, files…edges representing interactions between entities, and edge attributes representing profile information of interactions (e.g., creating a process, opening a file, etc.,) (page 2, para 0021), (page 4, para 0046)]; providing the graph to a graph neural network [i.e., the graph vector model 710 is implemented as an artificial neural network (ANN) (page 4, para 0049), (see figure 7) i.e., convolutional neural network (page 4, para 0050)] that is trained [i.e., ANN are furthermore trained in-use (page 4, para 0049)] to generate a geometric representation of the sequence of computer instructions [i.e., a graph vector model 710 uses graphs formed by the syscall 708 to represent the processes of the malware 110 as vectors (page 4, para 0046), (page 2, para 0023), (see figure 7 and reference 204 of figure 2)], wherein the graph neural network is pre-trained using a plurality of base graphs prior to receiving the sequence of computer instructions [i.e., a graph vector model (see ref. 710 of figure 7) deployed within a malware detection system. The model is used to process incoming data and generate outputs (page 4, para 0046 and 0048). Note; This is the standard inference model of a trained machine learning model.
Accordingly, Zong inherently discloses a model trained prior to deployment (i.e., pre-trained)]; determining a distance [i.e., similarity scores (page 2, para 0023)] between the geometric representation of the computer instructions and a plurality of base graphs [i.e., identifying syscall graph vectors that are dissimilar to the vectors of legitimate applications 706 (page 4, para 0046), (see figure 7) i.e., block 206 then compares the graph distribution vectors, using any appropriate similarity metric to generate similarity score (page 2, para 0023), (see figure 2)] without retraining the graph neural network using the sequence of computer instructions [i.e., a graph vector model (see ref. 710 of figure 7) deployed within a malware detection system. At runtime, the system logs syscalls, generates vector representations, and compares them to detect anomalies (see figure 2), (page 4, para 0046 and 0048). Note; there is no disclosure in Zong of retraining during detection. Instead, the model is used to process incoming data and generate outputs. This is the standard inference model of a trained machine learning model]; and determining whether the sequence of computer instructions is likely malicious based on the distance between the geometric representation of the computer instructions and one or more base graphs [i.e., block 208 identifies anomalous graph, based on the similarity scores for each of the graph distribution vectors (page 2, para 0024), (see figure 2) i.e., the security console 712 uses the vectors to identify abnormal application behavior (page 4, para 0046), (see figure 7)]. 
Zong does not disclose; including base graphs known to be malicious. However, Li discloses; determining a distance between a geometric representation of the computer instructions and a plurality of base graphs including base graphs known to be malicious [i.e., determine a similarity score for the first control flow graph, the first control flow graph associated with a first binary function having a known vulnerability, and the second control flow graph using the neural network system (page 4, para 0042), (page 9, para 0115), (see reference S605 of figure 6)]; and determining whether the sequence of computer instructions is likely malicious based on the distance between the geometric representation of the computer instructions and one or more base graphs known to be malicious [i.e., determining that the second binary function is vulnerable if the similarity score exceeds a threshold similarity score (page 4, para 0042)]. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify the teachings of Zong by adapting the teachings of Li to be particularly useful where access to source code is not available when dealing with commercial or embedded software or suspicious executables (See Li; page 9, para 0117).
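The claim-1 pipeline at issue (embed a behavior graph with a pre-trained GNN, then label the sample by its distance to embeddings of known base graphs, with no retraining at detection time) can be sketched roughly as follows; the vectors, threshold, and helper names are hypothetical illustrations, not taken from Zong, Li, or the application:

```python
import math

def cosine_distance(a, b):
    # 1 - cosine similarity; 0 means the embeddings point the same way
    dot = sum(x * y for x, y in zip(a, b))
    return 1.0 - dot / (math.hypot(*a) * math.hypot(*b))

def classify(embedding, base_embeddings, labels, threshold=0.3):
    """Nearest-base-graph classification in embedding space; the GNN that
    produced the embeddings stays frozen (inference only, no retraining)."""
    dists = [cosine_distance(embedding, b) for b in base_embeddings]
    i = min(range(len(dists)), key=dists.__getitem__)
    if dists[i] <= threshold and labels[i] == "malicious":
        return "likely malicious"
    return "likely clean"

# Toy 2-D vectors standing in for GNN graph embeddings (hypothetical values).
bases = [(1.0, 0.0), (0.0, 1.0)]       # embeddings of known base graphs
labels = ["malicious", "clean"]
print(classify((0.9, 0.1), bases, labels))   # prints "likely malicious"
```

The point of contention in the rejection maps directly onto this sketch: the model is used only inside `classify`, never refit on the incoming sample.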
Regarding claim 2, Zong discloses; the method of identifying malicious activity in a sequence of computer instructions of claim 1, wherein: determining a degree of relatedness between the geometric representation of the computer instructions and a plurality of base graphs further includes one or more base graphs known to be clean [i.e., identifying syscall graph vectors that are dissimilar to the vectors of legitimate applications 706 (page 4, para 0046), (see figure 7) i.e., block 206 then compares the graph distribution vectors, using any appropriate similarity metric to generate similarity score (page 2, para 0023), (see figure 2)]; and determining whether the sequence of computer instructions is likely malicious further comprises determining a degree of relatedness between the geometric representation of the computer instructions and one or more base graphs known to be clean [i.e., given a collection of syscall graphs for legitimate software, syscall graphs for malware 110 is detected as being dissimilar from normal and expected graph (page 2, para 0021 and 0024), (see figure 2) i.e., the security console 712 uses the vectors to identify abnormal application behavior (page 4, para 0046), (see figure 7)].

Regarding claim 3, Zong discloses; the method of identifying malicious activity in a sequence of computer instructions of claim 1, wherein determining a degree of relatedness further comprises determining a distance between the geometric representation of the computer instructions and one or more base graphs [i.e., the distance between two graphs can then be estimated using the distance between their respective distribution vectors (page 1, para 0018)].
Regarding claim 4, Zong discloses; the method of identifying malicious activity in a sequence of computer instructions of claim 3, wherein determining a distance comprises using a cosine metric of the distance between the geometric representation of the computer instructions and one or more base graphs [i.e., block 206 then compares the graph distribution vectors using the cosine similarity (page 2, para 0023), (see figure 2)].

Regarding claim 5, Zong discloses; the method of identifying malicious activity in a sequence of computer instructions of claim 4, wherein the geometric representation is configured to allow for non-relevant differences between otherwise related graphs [i.e., cosine similarity (page 2, para 0023), (see figure 2)].

Regarding claim 6, Zong discloses; the method of identifying malicious activity in a sequence of computer instructions of claim 1, wherein the graph neural network is trained [i.e., (see claim 1 above)]. Zong does not disclose; with a triplet loss function. However, Li discloses; with a triplet loss function [i.e., the one or more neural networks are trained based upon optimization of a loss function based upon a relative similarity between a triplet of graphs (page 3, para 0035)]. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify the teachings of Zong by adapting the teachings of Li to be particularly useful where access to source code is not available when dealing with commercial or embedded software or suspicious executables (See Li; page 9, para 0117).

Regarding claim 7, Zong discloses; the method of identifying malicious activity in a sequence of computer instructions of claim 1, wherein the graph neural network is implemented in a computerized system [i.e., a malware detection system 700 (see figure 7), (page 4, para 0045), (page 1, para 0005)].
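Claims 6 and 13 turn on Li's triplet training objective. A minimal sketch of a hinge-form triplet loss over graph embeddings, which is one common realization of the "relative similarity between a triplet of graphs" loss Li describes (the values and helper names here are hypothetical):

```python
import math

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge-form triplet loss: pull the anchor toward the related (positive)
    graph embedding and push it at least `margin` farther from the unrelated
    (negative) one. Zero loss once the margin is satisfied."""
    d_pos = math.dist(anchor, positive)
    d_neg = math.dist(anchor, negative)
    return max(0.0, d_pos - d_neg + margin)

a = (0.0, 0.0)   # anchor graph embedding (hypothetical values)
p = (0.1, 0.0)   # embedding of a related graph
n = (2.0, 0.0)   # embedding of an unrelated graph
print(triplet_loss(a, p, n))   # prints 0.0: the negative is already margin-far
```

During training, this loss is minimized over many such triplets so that related graphs cluster in embedding space, which is what makes the later distance-based comparison meaningful.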
Regarding claim 8, Zong discloses; a computerized system operable to identify malicious activity in a target sequence of computer instructions [i.e., a malware detection system 700 (see figure 7), (page 4, para 0045), (page 1, para 0005)], comprising: a processor [i.e., hardware processor 702 (see figure 7), (page 4, para 0045), (page 1, para 0005)] operable to execute computer instructions [i.e., executed by the hardware processor (page 1, para 0005)]; a stored sequence of computer instructions [i.e., memory is configured to store a computer program (see figure 7), (page 4, para 0045), (page 1, para 0005)] operable when executed on the processor to: identify a plurality of behaviors [i.e., actions within the computer system including information relating to an originating process, a target, and a type of action (page 2, para 0022) i.e., creating a process, opening a file, etc., (page 2, para 0021)] of the target sequence of computer instructions [i.e., collects syscall 120 information from the application on a computer system…these syscall 120 can reflect any appropriate action within the computer system, and can include…type of action (page 2, para 0022), (see figure 1 and reference 202 of figure 2)], during execution of the target sequence of computer instructions on the processor [i.e., during actual execution of the program code (page 3, para 0039) i.e., a first syscall graph is generated by a first application (page 1, para 0004) Note; a syscall graph is derived from execution of a program, i.e., a sequence of instructions producing system calls.
A sequence of computer instructions is reasonably interpreted as an executing program whose behavior is observable via system calls, i.e., collect syscall information, i.e., syscall graphs are generated by a running application. Note; collecting syscall information generated during execution of an application, which corresponds to monitoring instructions executing at runtime]; represent the plurality of identified behaviors as a graph [i.e., generating a syscall graph for the various software applications, with nodes of the syscall graph representing system entities (e.g., processes, files…edges representing interactions between entities, and edge attributes representing profile information of interactions (e.g., creating a process, opening a file, etc.,) (page 2, para 0021), (page 4, para 0046)]; provide the graph to a graph neural network [i.e., the graph vector model 710 is implemented as an artificial neural network (ANN) (page 4, para 0049), (see figure 7) i.e., convolutional neural network (page 4, para 0050)] that is trained [i.e., ANN are furthermore trained in-use (page 4, para 0049)] to generate a geometric representation of the target sequence of computer instructions [i.e., a graph vector model 710 uses graphs formed by the syscall 708 to represent the processes of the malware 110 as vectors (page 4, para 0046), (page 2, para 0023), (see figure 7 and reference 204 of figure 2)], wherein the graph neural network is pre-trained using a plurality of base graphs prior to receiving the target sequence of computer instructions [i.e., a graph vector model (see ref. 710 of figure 7) deployed within a malware detection system. The model is used to process incoming data and generate outputs (page 4, para 0046 and 0048). Note; This is the standard inference model of a trained machine learning model.
Accordingly, Zong inherently discloses a model trained prior to deployment (i.e., pre-trained)]; determine a distance [i.e., similarity scores (page 2, para 0023)] between the geometric representation of the computer instructions and a plurality of base graphs [i.e., identifying syscall graph vectors that are dissimilar to the vectors of legitimate applications 706 (page 4, para 0046), (see figure 7) i.e., block 206 then compares the graph distribution vectors, using any appropriate similarity metric to generate similarity score (page 2, para 0023), (see figure 2)] without retraining the graph neural network using the target sequence of computer instructions [i.e., a graph vector model (see ref. 710 of figure 7) deployed within a malware detection system. At runtime, the system logs syscalls, generates vector representations, and compares them to detect anomalies (see figure 2), (page 4, para 0046 and 0048). Note; there is no disclosure in Zong of retraining during detection. Instead, the model is used to process incoming data and generate outputs. This is the standard inference model of a trained machine learning model]; and determine whether the sequence of computer instructions is likely malicious based on the distance between the geometric representation of the target computer instructions and one or more base graphs [i.e., block 208 identifies anomalous graph, based on the similarity scores for each of the graph distribution vectors (page 2, para 0024), (see figure 2) i.e., the security console 712 uses the vectors to identify abnormal application behavior (page 4, para 0046), (see figure 7)].
Zong does not disclose; including base graphs known to be malicious. However, Li discloses; determine a distance between a geometric representation of the computer instructions and a plurality of base graphs including base graphs known to be malicious [i.e., determine a similarity score for the first control flow graph, the first control flow graph associated with a first binary function having a known vulnerability, and the second control flow graph using the neural network system (page 4, para 0042), (page 9, para 0115), (see reference S605 of figure 6)]; and determine whether the sequence of computer instructions is likely malicious based on the distance between the geometric representation of the computer instructions and one or more base graphs known to be malicious [i.e., determining that the second binary function is vulnerable if the similarity score exceeds a threshold similarity score (page 4, para 0042)]. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify the teachings of Zong by adapting the teachings of Li to be particularly useful where access to source code is not available when dealing with commercial or embedded software or suspicious executables (See Li; page 9, para 0117).
Regarding claim 9, Zong discloses; the computerized system operable to identify malicious activity in a target sequence of computer instructions of claim 8, wherein: determining a degree of relatedness between the geometric representation of the target computer instructions and a plurality of base graphs further includes one or more base graphs known to be clean [i.e., identifying syscall graph vectors that are dissimilar to the vectors of legitimate applications 706 (page 4, para 0046), (see figure 7) i.e., block 206 then compares the graph distribution vectors, using any appropriate similarity metric to generate similarity score (page 2, para 0023), (see figure 2)]; and determining whether the target sequence of computer instructions is likely malicious further comprises determining a degree of relatedness between the geometric representation of the target computer instructions and one or more base graphs known to be clean [i.e., given a collection of syscall graphs for legitimate software, syscall graphs for malware 110 is detected as being dissimilar from normal and expected graph (page 2, para 0021 and 0024), (see figure 2) i.e., the security console 712 uses the vectors to identify abnormal application behavior (page 4, para 0046), (see figure 7)].

Regarding claim 10, Zong discloses; the computerized system operable to identify malicious activity in a target sequence of computer instructions of claim 8, wherein determining a degree of relatedness further comprises determining a distance between the geometric representation of the target computer instructions and one or more base graphs [i.e., the distance between two graphs can then be estimated using the distance between their respective distribution vectors (page 1, para 0018)].
Regarding claim 11, Zong discloses; the computerized system operable to identify malicious activity in a target sequence of computer instructions of claim 10, wherein determining a distance comprises using a cosine metric of the distance between the geometric representation of the target computer instructions and one or more base graphs [i.e., block 206 then compares the graph distribution vectors using the cosine similarity (page 2, para 0023), (see figure 2)].

Regarding claim 12, Zong discloses; the computerized system operable to identify malicious activity in a sequence of computer instructions of claim 11, wherein the cosine metric is configured to allow for non-relevant differences between otherwise related graphs [i.e., cosine similarity (page 2, para 0023), (see figure 2)].

Regarding claim 13, Zong discloses; the computerized system operable to identify malicious activity in a sequence of computer instructions of claim 8, wherein the graph neural network is trained [i.e., (see claim 1 above)]. Zong does not disclose; with a triplet loss function. However, Li discloses; with a triplet loss function [i.e., the one or more neural networks are trained based upon optimization of a loss function based upon a relative similarity between a triplet of graphs (page 3, para 0035)]. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify the teachings of Zong by adapting the teachings of Li to be particularly useful where access to source code is not available when dealing with commercial or embedded software or suspicious executables (See Li; page 9, para 0117).

Regarding claim 14, Zong discloses; the computerized system operable to identify malicious activity in a sequence of computer instructions of claim 8, wherein the stored sequence of computer instructions is executed on an end user computerized device [i.e., machine-readable storage devices (page 3, para 0038)].
Regarding claim 15, Zong discloses; the computerized system operable to identify malicious activity in a sequence of computer instructions of claim 8, wherein the stored sequence of computer instructions is executed on a remote server [i.e., remote storage device (page 3, para 0040)].

Response to Arguments

Applicant's arguments in the Remarks filed 12/17/2025 have been fully considered but they are not persuasive for the following reasons. Regarding claims 1 and 8; applicant argued that “By contrast, Zong describes analyzing syscall graphs and identifying anomalous behavior, but does not teach, suggest, or disclose performing distance-based classification of an executing sequence of computer instructions against known malicious base graphs using a pre-trained graph neural network without retraining as now called for in independent claims 1 and 8 of the present application. Instead, Zong discusses similarity and anomaly analysis, but does not teach, suggest or disclose the claimed separation between training and runtime evaluation phase as now claimed in independent claims 1 and 8” (See Remarks; page 9). The Examiner respectfully disagrees with this argument because Zong explicitly teaches generating vector representations of syscall graphs, comparing those vector representations to other graphs to determine similarity scores, and determining abnormal behavior based on those similarity scores (see figure 2), (page 2, para 0022 – 0025). A similarity score between vector representations is a distance (or similarity) metric in embedding space, and using that metric to determine abnormality constitutes distance-based classification. The claim language does not require any specific mathematical form of “distance,” and Zong's disclosure of similarity scoring satisfies this limitation. Zong further discloses a malware detection system that operates over applications including malware (see figures 1 and 7), (page 1, para 0019 - 0022) and (page 4, para 0045).
The system compares a target syscall graph against graphs generated by other applications, which necessarily include both legitimate and malicious applications. Thus, the “one or more second syscall graphs” used for comparison in Zong constitute a set of base graphs, and the inclusion of malware in the system (explicitly shown in Zong’s architecture) indicates that known malicious graphs are part of the comparison set. Applicant's assertion that Zong does not disclose using a “pre-trained graph neural network” without retraining during runtime is incorrect; such behavior is inherent in Zong’s system because Zong discloses a graph vector model (see ref. 710 of figure 7) deployed within a malware detection system. At runtime, the system logs syscalls, generates vector representations, and compares them to detect anomalies (see figure 2). There is no disclosure in Zong of retraining during detection. Instead, the model is used to process incoming data and generate outputs. This is the standard inference model of a trained machine learning model. Accordingly, Zong inherently discloses a model trained prior to deployment (i.e., pre-trained) and uses that model during runtime without retraining. It is well understood in the art that a machine learning system operates with a separation between training/learning and inference phases, and Zong's system follows this conventional paradigm. Regarding claims 1 and 8; applicant further argued that “And, while Li discusses graph similarity learning and training techniques, including distance-based loss functions, it does not disclose runtime malware detection based on execution behavior, nor does Li disclose classification of an executing instruction sequence using a pre-trained model without retraining as now required in independent claims 1 and 8 of the present application.
Even if Zong and Li were combined, the combination does not teach or suggest the limitations of independent claims 1 and 8 (as amended)” (See Remarks; page 9). The Examiner respectfully brings the applicant's attention to the fact that this is a 103 obviousness rejection in which the Examiner relied upon the primary reference Zong for teaching every single claim limitation except a base graph known to be malicious. The Examiner relied upon the secondary reference Li for teaching this particular limitation (See the office action above).

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to SYED A RONI, whose telephone number is (571) 270-7806. The examiner can normally be reached M-F, 9:00 am - 5:00 pm (EST). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jeffrey L Nickerson can be reached at (469) 295-9235. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /SYED A RONI/Primary Examiner, Art Unit 2432

Prosecution Timeline

Apr 24, 2023
Application Filed
Sep 15, 2025
Non-Final Rejection — §103, §112
Dec 17, 2025
Response Filed
Mar 27, 2026
Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591684
CENTRALIZED SECURITY ANALYSIS AND MANAGEMENT OF SOURCE CODE IN NETWORK ENVIRONMENTS
2y 5m to grant Granted Mar 31, 2026
Patent 12574354
CLIENT FILTER VPN
2y 5m to grant Granted Mar 10, 2026
Patent 12572379
Static Trusted Execution Environment for Inter-Architecture Processor Program Compatibility
2y 5m to grant Granted Mar 10, 2026
Patent 12561420
SYSTEM AND METHOD FOR AUTHENTICATING USERS VIA PATTERN BASED DIGITAL RESOURCES ON A DISTRIBUTED DEVELOPMENT PLATFORM
2y 5m to grant Granted Feb 24, 2026
Patent 12547760
METHOD FOR EVALUATING THE RISK OF RE-IDENTIFICATION OF ANONYMISED DATA
2y 5m to grant Granted Feb 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 82%
With Interview: 99% (+22.0%)
Median Time to Grant: 2y 9m
PTA Risk: Moderate
Based on 655 resolved cases by this examiner. Grant probability derived from career allow rate.
