Prosecution Insights
Last updated: April 18, 2026
Application No. 18/643,923

HEIDI: ML ON HYPERVISOR DYNAMIC ANALYSIS DATA FOR MALWARE CLASSIFICATION

Final Rejection — §103, §DP
Filed: Apr 23, 2024
Examiner: RAHIM, MONJUR
Art Unit: 2436
Tech Center: 2400 — Computer Networks
Assignee: Palo Alto Networks Inc.
OA Round: 2 (Final)
Grant Probability: 84% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 1m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 84% (742 granted / 879 resolved), +26.4% vs TC avg (above average)
Interview Lift: +16.1% among resolved cases with an interview (strong)
Typical Timeline: 3y 1m average prosecution; 37 currently pending
Career History: 916 total applications across all art units
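The headline figures in this panel can be cross-checked with two lines of arithmetic using only the numbers shown above (the rounding convention is an assumption):

```python
# Career allow rate = granted / resolved, as a percentage.
granted, resolved, total_applications = 742, 879, 916
allow_rate_pct = round(100 * granted / resolved)  # matches the "84%" headline

# Applications not yet resolved should match "37 currently pending".
pending = total_applications - resolved
```

742 / 879 is 84.4%, which rounds to the displayed 84%, and 916 minus 879 gives the 37 pending applications.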

Statute-Specific Performance

§101: 11.7% (-28.3% vs TC avg)
§103: 41.7% (+1.7% vs TC avg)
§102: 26.6% (-13.4% vs TC avg)
§112: 5.5% (-34.5% vs TC avg)
Deltas are relative to the estimated Tech Center average • Based on career data from 879 resolved cases

Office Action

§103, §DP
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

1. This action is in response to the amendment and arguments filed on 15 December 2025.

2. Claims 21-23 are newly added.

3. Claims 1-23 remain pending and rejected.

Response to Arguments

4. The applicant's arguments filed on 15 December 2025 have been fully considered, but they are not persuasive. In the Remarks, the applicant has argued in substance:

Response: The examiner respectfully disagrees, because all of the claim limitations are taught by the combination of the prior art of record. Thomas teaches static and dynamic analysis of a sample/file to detect a particular behavior after running the sample. The behaviors can be based on the creation of a particular file, the content within the file, a heuristic behavior of the malware, an artifact found in memory after running the malware, an artifact extracted from the file system in memory, or any number of behaviors that the system detects as a result of the malware exhibiting one or more particular behaviors. Yi, in turn, teaches a machine learning method of classifying malware samples: malware detection based on dynamic API extraction may detect application malware by generating an API classifier that uses a machine learning algorithm with API feature information of an application, and classifying the APIs of an application running on an Android operating system-based mobile device as malicious or benign using the API classifier. Zhang teaches the use of deep learning to detect malicious software. It would have been obvious to one of ordinary skill in the art at the time the invention was filed to combine the methods of Yi and Zhang to improve the functionality of Thomas.
Since the teachings of Thomas, Yi, and Zhang are in the same or related technical fields, combining their features would not contradict or render the systems inoperable.

Double Patenting

5. The double patenting rejection is maintained.

Claim Rejections - 35 USC § 103

6. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-15 and 17-19 are rejected under 35 U.S.C. § 103 as being unpatentable over Thomas et al. (US Publication No. 20160156658), hereinafter Thomas, in view of Yi et al. (US Publication No. 20200344261), hereinafter Yi.

Regarding claim 1: A system, comprising: one or more processors configured to: receive a sample for malware analysis (Thomas, ¶31-32, 29). Thomas does not explicitly suggest applying a machine learning model to obtain a classification for the sample; however, in the same field of endeavor, Yi teaches this limitation (Yi, ¶9, 15), based at least in part on (i) memory artifact data associated with the sample (Thomas, ¶43, 177-180) and (ii) at least one of dynamic execution log data for the sample and static file structures associated with the sample (Thomas, ¶43, 29, 167); and determine whether the sample is malicious based at least in part on the classification (Thomas, ¶271, 239, 250); and a memory coupled to the one or more processors and configured to provide the one or more processors with instructions (Thomas, ¶28).
It would have been obvious to one of ordinary skill in the art at the time the invention was filed to include the malware detection and classification method of Thomas with the machine learning model/classifier disclosed in Yi to improve vulnerability detection, as stated by Yi at para. 164.

Regarding claim 2: Thomas does not explicitly suggest that the machine learning model is a deep learning model; however, in the same field of endeavor, Yi teaches this limitation (Yi, ¶34).

Regarding claim 3: wherein the memory artifact data associated with the sample comprises one or more of (a) application programming interface (API) pointers, (b) Operating System (OS) structure modifications, (c) page permissions modifications, and (d) API vectors (Thomas, ¶22-23).

Regarding claim 4: wherein the one or more processors are further configured to monitor a behavior of the sample during execution of the sample in a virtual environment (Thomas, ¶22).

Regarding claim 5: wherein the memory artifact data is obtained based at least in part on monitoring modifications made in the virtual environment during execution (Thomas, ¶22, 43).

Regarding claim 6: wherein monitoring the behavior of the sample comprises a dynamic analysis of an execution of the sample (Thomas, ¶21).

Regarding claim 7: wherein the one or more processors are further configured to receive the sample (Thomas, ¶29).

Regarding claim 8: wherein the one or more processors are further configured to send, to a security entity, an indication that the sample is malicious (Thomas, ¶38).

Regarding claim 9: wherein the one or more processors are further configured to enforce one or more security policies based on a determination of whether the sample is malicious (Thomas, ¶272).

Regarding claim 10: wherein the one or more processors are further configured to cause the sample to be handled according to the classification (Thomas, ¶32).
Regarding claim 11: wherein the one or more processors are further configured to, in response to determining the sample is malicious, update a blacklist of samples deemed to be malicious to include an identifier corresponding to the sample (Thomas, ¶47).

Regarding claim 12: Thomas does not explicitly suggest that the machine learning model is applied to obtain the classification for the sample based at least in part on the memory artifact data, the dynamic execution log data, and the static file structures associated with the sample; however, in the same field of endeavor, Yi teaches this limitation (Yi, ¶13-15).

Regarding claim 13: wherein one or more embedding vectors used for representing the memory artifact data or the dynamic execution log data is obtained based at least in part on static file structure features (Thomas, ¶29, 24). Thomas does not explicitly suggest that these are obtained during training of the machine learning model; however, in the same field of endeavor, Yi teaches this limitation (Yi, ¶43).

Regarding claim 14: Thomas does not explicitly suggest that applying the machine learning model comprises determining a set of embedding vectors for one or more of the memory artifact data or the dynamic execution log data; however, in the same field of endeavor, Yi teaches this limitation (Yi, ¶55).

Regarding claim 15: Thomas does not explicitly suggest that applying the machine learning model comprises performing a dynamic compression with respect to the set of embedding vectors to generate a set of fixed-length embedding vectors; however, in the same field of endeavor, Yi teaches this limitation (Yi, ¶67-72).
Regarding claim 17: Thomas does not explicitly suggest that determining the set of embedding vectors includes performing an embedding vector lookup based at least in part on a set of tokens obtained based on one or more of the memory artifact data and the dynamic execution log data; however, in the same field of endeavor, Yi teaches this limitation (Yi, ¶15-16).

Regarding claim 18: receiving a sample for malware analysis (Thomas, ¶31-32, 29). Thomas does not explicitly suggest applying a machine learning model to obtain a classification for the sample; however, in the same field of endeavor, Yi teaches this limitation (Yi, ¶9, 15), based at least in part on (i) memory artifact data associated with the sample (Thomas, ¶43, 177-180) and (ii) at least one of dynamic execution log data for the sample and static file structures associated with the sample (Thomas, ¶43, 29, 167); and determining whether the sample is malicious based at least in part on the classification (Thomas, ¶271, 239, 250). It would have been obvious to one of ordinary skill in the art at the time the invention was filed to include the malware detection and classification method of Thomas with the machine learning model/classifier disclosed in Yi to improve vulnerability detection, as stated by Yi at para. 164.

Regarding claim 19: receiving a sample for malware analysis (Thomas, ¶31-32, 29). Thomas does not explicitly suggest applying a machine learning model to obtain a classification for the sample; however, in the same field of endeavor, Yi teaches this limitation (Yi, ¶9, 15), based at least in part on (i) memory artifact data associated with the sample (Thomas, ¶43, 177-180) and (ii) at least one of dynamic execution log data for the sample and static file structures associated with the sample (Thomas, ¶43, 29, 167); and determining whether the sample is malicious based at least in part on the classification (Thomas, ¶271, 239, 250).
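Claims 14, 15, and 17 recite an embedding-vector lookup over tokens plus a "dynamic compression" down to fixed-length vectors. As a purely illustrative sketch (the API-name tokens, table values, and bucket-averaging scheme below are hypothetical assumptions, not taken from the application or the cited references), one plausible shape of that stage:

```python
EMB_DIM = 4

# Toy embedding table; in practice these vectors would be learned
# during training of the machine learning model.
TABLE = {
    "CreateRemoteThread": [0.9, 0.1, 0.0, 0.2],
    "VirtualAllocEx":     [0.8, 0.0, 0.1, 0.3],
    "WriteProcessMemory": [0.7, 0.2, 0.0, 0.1],
    "<unk>":              [0.0, 0.0, 0.0, 0.0],
}

def lookup(tokens):
    """Map each token to its embedding vector (unknown tokens -> <unk>)."""
    return [TABLE.get(t, TABLE["<unk>"]) for t in tokens]

def compress(vectors, n_buckets=2):
    """Average-pool a variable-length list of vectors into n_buckets
    fixed-length vectors (one plausible reading of 'dynamic compression')."""
    out = []
    for b in range(n_buckets):
        lo = b * len(vectors) // n_buckets
        hi = (b + 1) * len(vectors) // n_buckets
        chunk = vectors[lo:hi] or [[0.0] * EMB_DIM]
        out.append([sum(col) / len(chunk) for col in zip(*chunk)])
    return out

# A variable-length token sequence becomes a fixed number of vectors.
fixed = compress(lookup(["VirtualAllocEx", "WriteProcessMemory",
                         "CreateRemoteThread"]))
```

However long the token sequence, the output is always `n_buckets` vectors of `EMB_DIM` entries, which is what makes the result usable as a fixed-size model input.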
It would have been obvious to one of ordinary skill in the art at the time the invention was filed to include the malware detection and classification method of Thomas with the machine learning model/classifier disclosed in Yi to improve vulnerability detection, as stated by Yi at para. 164.

Regarding claim 21: wherein the memory artifact data comprises dynamically resolved application programming interface (API) vectors corresponding to contiguous lists of API pointers in memory during execution of the sample in a virtual environment (Thomas, ¶22-23, ¶247).

Regarding claim 23: wherein applying the machine learning model comprises: generating a combined vector by concatenating (i) a first part corresponding to the memory artifact data, (ii) a second part corresponding to the dynamic execution log data, and (iii) a third part corresponding to the static file structures; and inputting the combined vector to a dense layer to obtain the classification.

7. Claim 22 is rejected under 35 U.S.C. § 103 as being unpatentable over Thomas in view of Yi and in view of Marbouti et al. (US Publication No. 20220366040), hereinafter Marbouti.

Regarding claim 22: Thomas in view of Yi does not explicitly suggest that applying the machine learning model comprises: determining a set of embedding vectors for one or more of the memory artifact data or the dynamic execution log data; performing a dynamic compression with respect to the set of embedding vectors to generate a set of fixed-length embedding vectors; and inputting the set of fixed-length embedding vectors through a one-dimensional convolutional neural network (1D CNN) and a max pooling layer to obtain a contribution to a combined vector for classification; however, in the same field of endeavor, Marbouti teaches this limitation (Marbouti, ¶24, ¶4).
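Claims 22 and 23 together describe a concrete model head: fixed-length embedding vectors pass through a 1D CNN and a max-pooling layer, and the result is concatenated with the other feature parts into a combined vector that feeds a dense classification layer. A minimal sketch of that shape in plain Python (all weights, feature values, and the sigmoid output head are illustrative assumptions, not taken from the application or the cited art):

```python
import math

def conv1d(seq, kernel):
    """Valid 1-D convolution (cross-correlation, as in most deep
    learning frameworks) over a sequence of scalar features."""
    k = len(kernel)
    return [sum(seq[i + j] * kernel[j] for j in range(k))
            for i in range(len(seq) - k + 1)]

def dense_sigmoid(x, weights, bias):
    """Single dense unit with a sigmoid, yielding a score in (0, 1)."""
    z = sum(xi * wi for xi, wi in zip(x, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Claim 22 branch: fixed-length embeddings -> 1D CNN -> max pooling.
embeddings = [0.1, 0.9, 0.3, 0.7, 0.2]        # toy fixed-length input
feature_map = conv1d(embeddings, [0.5, 0.5])  # toy width-2 kernel
cnn_contribution = [max(feature_map)]         # global max pool

# Claim 23: concatenate the three parts into one combined vector.
log_part = [0.5, 0.4]   # toy dynamic-execution-log features
static_part = [0.2]     # toy static file-structure features
combined = cnn_contribution + log_part + static_part

# Dense layer on the combined vector -> malicious/benign score.
score = dense_sigmoid(combined, [1.5, 1.0, 0.7, -0.2], bias=-1.0)
verdict = "malicious" if score > 0.5 else "benign"
```

The point of the concatenation step is that each branch can summarize its own modality independently; only the combined vector needs a shared, fixed layout for the dense layer.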
It would have been obvious to one of ordinary skill in the art at the time the invention was filed to include the malware detection and classification method of Thomas in view of Yi with the one-dimensional convolutional neural network disclosed in Marbouti, because combining the training of the CNN branch and the FNN branch may provide the benefits of reducing the variance of the predictions and increasing their accuracy, as compared to training an individual model, as stated by Marbouti at para. 25.

8. Claim 20 is rejected under 35 U.S.C. § 103 as being unpatentable over Thomas et al. (US Publication No. 20160156658), hereinafter Thomas, in view of Zhang et al. (CN Publication No. 113761531), hereinafter Zhang.

Regarding claim 20: Thomas does not explicitly suggest one or more processors configured to generate embedding vectors for memory artifact data and dynamic execution log data; however, in the same field of endeavor, Zhang discloses this limitation (Zhang, page 10, para. 1); generate static analysis features from file structures (Thomas, ¶43). Thomas does not explicitly suggest performing a deep learning process to generate a malware classification based at least in part on (a) the embedding vectors for memory artifact data and dynamic execution log data, and (b) the static analysis features; however, in the same field of endeavor, Zhang discloses this limitation (Zhang, page 5, para. 1; page 7, para. 2; page 2, para. 5); and a memory coupled to the one or more processors and configured to provide the one or more processors with instructions (Thomas, ¶28). It would have been obvious to one of ordinary skill in the art at the time the invention was filed to include the malware classification/detection method of Thomas with the deep learning process to detect malware disclosed in Zhang to improve the efficiency and accuracy of malware detection, as stated by Zhang at page 3, para. 1.

9. Claim 16 is rejected under 35 U.S.C. § 103 as being unpatentable over Thomas in view of Yi and in view of Zhang.

Regarding claim 16: Thomas in view of Yi does not explicitly suggest that a convolutional neural network (CNN) is trained using the set of fixed-length embedding vectors; however, in the same field of endeavor, Zhang discloses this limitation (Zhang, Abstract). It would have been obvious to one of ordinary skill in the art at the time the invention was filed to include the malware classification/detection method of Thomas in view of Yi with the convolutional neural network (CNN) disclosed in Zhang to improve the efficiency and accuracy of malware detection, as stated by Zhang at page 3, para. 1.

Conclusion

10. THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure (see form PTO-892, Notice of References Cited).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MONJUR RAHIM, whose telephone number is (571) 270-3890. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Shewye Gelagay, can be reached at 571-272-4219.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Monjur Rahim/
Patent Examiner
United States Patent and Trademark Office
Art Unit: 2436; Phone: 571.270.3890
E-mail: monjur.rahim@uspto.gov
Fax: 571.270.4890

Prosecution Timeline

Apr 23, 2024: Application Filed
Sep 17, 2025: Non-Final Rejection — §103, §DP
Dec 15, 2025: Response Filed
Jan 08, 2026: Applicant Interview (Telephonic)
Jan 08, 2026: Examiner Interview Summary
Apr 01, 2026: Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603913
SELECTING A TRANSMISSION PATH FOR COMMUNICATING DATA BASED ON A CLASSIFICATION FOR THE DATA
2y 5m to grant • Granted Apr 14, 2026
Patent 12596807
UNIFIED EXTENSIBLE FIRMWARE INTERFACE (UEFI)-LEVEL PROCESSING OF OUT-OF-BAND COMMANDS IN HETEROGENEOUS COMPUTING PLATFORMS
2y 5m to grant • Granted Apr 07, 2026
Patent 12598458
METHODS AND DEVICES FOR SECURE COMMUNICATION WITH AND OPERATION OF AN IMPLANT
2y 5m to grant • Granted Apr 07, 2026
Patent 12580742
SECURE MEMORY SYSTEM PROGRAMMING FOR HOST DEVICE VERIFICATION
2y 5m to grant • Granted Mar 17, 2026
Patent 12574214
DISTRIBUTION AND USE OF ENCRYPTION KEYS TO DIRECT COMMUNICATIONS
2y 5m to grant • Granted Mar 10, 2026
Study what changed to get past this examiner, based on the five most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 84%
With Interview: 99% (+16.1%)
Median Time to Grant: 3y 1m
PTA Risk: Moderate
Based on 879 resolved cases by this examiner. Grant probability derived from career allow rate.
