Prosecution Insights
Last updated: April 19, 2026
Application No. 18/812,003

SYSTEM FOR EXTRACTING MALWARE CAPABILITIES AND METHOD THEREOF

Non-Final OA §103

Filed: Aug 22, 2024
Examiner: JOHNSON, CARLTON
Art Unit: 2436
Tech Center: 2400 — Computer Networks
Assignee: Indian Institute of Technology Kanpur
OA Round: 1 (Non-Final)
Grant Probability: 58% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 4y 11m
With Interview: 90%

Examiner Intelligence

Career Allow Rate: 58% of resolved cases (205 granted / 352 resolved), at TC average
Interview Lift: +32.1% (strong), comparing resolved cases with vs. without an interview
Typical Timeline: 4y 11m average prosecution; 26 applications currently pending
Career History: 378 total applications across all art units

Statute-Specific Performance

Statute   Allow Rate   vs TC Avg
§101      12.4%        -27.6%
§103      59.7%        +19.7%
§102      12.2%        -27.8%
§112       8.6%        -31.4%

Deltas are measured against a Tech Center average estimate • Based on career data from 352 resolved cases
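Read against the Tech Center baseline, every statute's delta points to the same flat average. A quick sanity check in Python (rates transcribed from the table above; the single ~40% baseline is inferred from the arithmetic, not an official USPTO figure):

```python
# Allow rate per statute and the reported delta vs the Tech Center
# average, as transcribed from the Statute-Specific Performance table.
rates = {"101": (12.4, -27.6), "103": (59.7, +19.7),
         "102": (12.2, -27.8), "112": (8.6, -31.4)}

# Subtracting each delta from its rate recovers the implied TC baseline.
baselines = {s: round(rate - delta, 1) for s, (rate, delta) in rates.items()}
print(baselines)  # every statute implies the same 40.0% baseline
```

All four statutes back out to one 40.0% figure, which suggests the dashboard compares against a single flat TC-average estimate rather than per-statute averages.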

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

1. Claims 1-8 are pending. Claims 1 and 5 are independent.

2. This application was filed on 8-22-2024.

Claim Rejections - 35 USC § 103

3. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

4. Claims 1-8 are rejected under 35 U.S.C. 103 as being unpatentable over Agrawal et al. (US PGPUB No. 20190228154) in view of Bin Huraib et al. (US PGPUB No. 20240070261).
Regarding Claims 1 and 5, Agrawal discloses a system for determining malignant capabilities of one or more malwares and a method for determining malignant capabilities of one or more malwares, the system and method comprising:

a) one or more hardware processors; and a memory coupled to the one or more hardware processors, wherein the memory comprises a plurality of subsystems executable by the one or more hardware processors (see Agrawal paragraph [0077]: A system disclosed herein includes a memory, one or more processors, and a malware sequence detection system stored in the memory and executable by the one or more processor units, the malware sequence detection system encoding computer-executable instructions on the memory for executing on the one or more processor units a computer process to provide malware sequence detection, the computer process including dividing a sequence of plurality of events into a plurality of subsequences, performing sequential subsequence learning on one or more of the plurality of subsequences), and wherein the plurality of subsystems comprises:

b) a malware execution subsystem (see Agrawal paragraph [0002]: a malware sequence detection system for detecting the presence of malware in a plurality of events. An implementation of the malware sequence detection includes receiving a sequence of a plurality of events, and detecting presence of a sequence of malware commands within the sequence of a plurality of events) configured to:

c) execute a malware application in an isolated computing environment (see Agrawal paragraph [0016]: due to the fact that certain commands have to be run in some order related to the functionality of the malware or in combination with some other sequence of malware instructions for the malware to take effect, defense is possible if the software can be executed in a secure environment and malicious actions can be detected during this emulation.)
and e) obtain one or more system application programming interface (API) calls and executed timestamp data from the executed malware application (see Agrawal paragraph [0020]: The client 112 may be a standalone client computing device that is affected by one or more operations of the event source 150, 152, 154, etc. For example, the client 112 may be a device that makes one or more API calls to the operating system with executable files at event source B 152.; paragraph [0034]: The recurrent layer 410 may have a number of cells for recurrent layers that may be used with long event sequences. E: [E(0)-E(T−1)] 402 is a list of events with t being the time-stamp.)

f) a malware activity capturing subsystem operatively coupled to the malware execution subsystem (see Agrawal paragraph [0002]: a malware sequence detection system for detecting the presence of malware in a plurality of events. An implementation of the malware sequence detection includes receiving a sequence of a plurality of events, and detecting presence of a sequence of malware commands within the sequence of a plurality of events by dividing the sequence of plurality of events into a plurality of subsequences, performing sequential subsequence learning on one or more of the plurality of subsequences) configured to:

g) sort the system application programming interface (API) calls based on the obtained timestamp data (see Agrawal paragraph [005]: Subsequently, the corpus of n-grams output by the n-gram operation 912 are sorted at 914 in order of their occurrences (based upon timestamp); paragraph [0020]: The client 112 may be a standalone client computing device that is affected by one or more operations of the event source 150, 152, 154, etc. For example, the client 112 may be a device that makes one or more API calls to the operating system with executable files at event source B 152.)
h) generate, by a trigram technique, a trigram sequence from the sorted system API calls (see Agrawal paragraph [0032]: A parameter learning operation 320 extracts and adds additional information from the parameters associated with the events. In one implementation, the parameter learning operation 320 may include generating n-grams (e.g., a bigram, a trigram, etc.) from the parameters, sorts the n-grams and selects a predetermined number of n-grams having higher frequencies. The selected n-grams may be used to generate a k-vector representing the presence (“1”) or absence (“0”) of the n-gram with an event and a tanh, a sigmoid, or a rectified linear unit (ReLU) operation may generate an output value of the k-vector.)

i) process, by one hot encoding technique, the trigram sequence to generate one or more feature vectors (see Agrawal paragraph [0032], quoted above; paragraph [0034]: the embedding layer 404 can be set using methods such as Word2Vec or Glove. In another implementation, the embedding layer 404 can be set using a method such as one-hot encoding. In one-hot encoding, for example, an event E(0) may be represented by setting bit 37 of an event vector 406.sub.0 to “1” and all other bits of the event vector 406.sub.0 to “0”, an event E(1) may be represented by setting bit 67 of an event vector 406.sub.1 to “1” and all other bits of the event vector 406.sub.1 to “0”, etc.)
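Steps g) through i) describe a conventional n-gram featurization pipeline: timestamp sort, sliding trigram window, then one-hot encoding. A minimal sketch, assuming hypothetical Windows API call names and a vocabulary built from the observed trigrams (none of these identifiers come from the application or the cited references):

```python
# Hypothetical (api_call, timestamp) pairs captured from a sandboxed run.
calls = [("CreateFile", 3), ("VirtualAlloc", 1), ("WriteProcessMemory", 4),
         ("OpenProcess", 2), ("CreateRemoteThread", 5)]

# g) Sort the captured API calls by their execution timestamps.
ordered = [name for name, ts in sorted(calls, key=lambda c: c[1])]

# h) Slide a window of three calls to form the trigram sequence.
trigrams = [tuple(ordered[i:i + 3]) for i in range(len(ordered) - 2)]

# i) One-hot encode each trigram against a vocabulary of observed
#    trigrams, yielding one binary feature vector per trigram position.
vocab = sorted(set(trigrams))
features = [[1 if t == v else 0 for v in vocab] for t in trigrams]

print(trigrams[0])  # ('VirtualAlloc', 'OpenProcess', 'CreateFile')
```

In a real system the vocabulary would come from the training corpus (Agrawal's paragraph [0032] keeps only the highest-frequency n-grams), so the vectors have a fixed dimension across samples.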
j) a malware capability extraction subsystem operatively coupled to the malware activity capturing subsystem (see Agrawal paragraph [0040]: Using the max-pooled layer 520 allows extracting information from each event of the event sequence E(t) 502 in generating the probability of the event sequence E(t) 502 being malware.) configured to:

h) classify, by a multi-label deep neural network (DNN), the received feature vectors based on one or more malignant capabilities of the executed malware (see Agrawal paragraph [0030]: The max-pooled layer output is classified at an operation 240 to generate a probability of the event sequence including malware. In one implementation, the classifying of the max-pooled layer output may use a machine learning classifier such as a neural network, deep neural network, decision tree, boosted decision tree, support vector machine, naïve bayes, or logistic regression. Subsequently, a determining operation 260 compares the probability value P.sub.m generated by the classification block with a second threshold to determine whether the event sequence, such as the event sequence 140 disclosed in FIG. 1 includes any malware.; paragraph [0036]: The classification block 430 may be a neural network or a deep neural network (multi-layer perceptron (MLP)) which includes one or more hidden layers 432 and an output layer 434.)

Furthermore, Agrawal discloses wherein for d) detect one or more changes in system instances upon execution in a secure environment (see Agrawal paragraph [0016]: due to the fact that certain commands have to be run in some order related to the functionality of the malware or in combination with some other sequence of malware instructions for the malware to take effect, defense is possible if the software can be executed in a secure environment and malicious actions can be detected during this emulation.)
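Multi-label classification means each capability gets its own independent yes/no decision rather than a single softmax pick, so one sample can carry several capability labels at once. A toy sigmoid-per-label head illustrating the idea (the labels, weights, and threshold are invented for illustration and do not come from Agrawal, BinHuraib, or the claims):

```python
import math

# Hypothetical capability labels; one independent output per label.
LABELS = ["process_injection", "anti_debugging", "registry_modification"]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def classify(features, weights, bias, threshold=0.5):
    """Multi-label head: an independent sigmoid score per capability;
    every label whose score clears the threshold is reported."""
    flagged = []
    for label, w, b in zip(LABELS, weights, bias):
        score = sigmoid(sum(f * wi for f, wi in zip(features, w)) + b)
        if score >= threshold:
            flagged.append(label)
    return flagged

# Toy "trained" parameters: 3-dim feature vector, 3 labels.
W = [[2.0, -1.0, 0.5], [-1.5, 0.2, 0.1], [0.3, 2.5, -0.4]]
B = [-0.5, -1.0, -0.2]
print(classify([1, 0, 1], W, B))  # ['process_injection']
```

A single-label (softmax) classifier would force exactly one of the three outputs; the per-label sigmoids are what let the same feature vector map to zero, one, or many malignant capabilities.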
Agrawal does not specifically disclose, for d), execution of the malware application in the isolated computing environment, or, for i), generating a threat report based on analysis of the classified malignant capabilities of the executed malware.

However, BinHuraib discloses wherein for d) detect one or more changes in system instances, upon execution of the malware application in the isolated computing environment (see BinHuraib paragraph [0031]: the dynamic analysis component 130 can execute detected malware in a safe computing environment, called a sandbox environment. As used herein, the term “sandbox environment” can refer to an isolated testing environment that can enable programs (e.g., malware) and/or files to be inspected, opened, and/or executed without affecting the computing system hosting the sandbox environment.)

And BinHuraib discloses for i) generate a threat report based on analysis of the classified malignant capabilities of the executed malware (see BinHuraib paragraph [0056]: generating (e.g., via the profile engine 124 and/or processing units 109) one or more profile reports describing the detected malware. In accordance with one or more embodiments described herein, the one or more profile reports can include indicators, operational data, digital signatures, and/or data correlations resulting from the method 300 and/or associated with the detect malware. In various embodiments, the security software 128 can be updated and/or reconfigured based on the one or more profile reports to thwart future cybersecurity threats.)

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Agrawal for d) execution of the malware application in the isolated computing environment, and for i) generation of a threat report based on analysis of the classified malignant capabilities of the executed malware, as taught by BinHuraib.
One of ordinary skill in the art would have been motivated to employ the teachings of BinHuraib for the enhanced security of a system that enables execution of malware in an isolated and secure execution environment. (see BinHuraib paragraph [0031])

Regarding Claim 2, Agrawal-BinHuraib discloses the system as claimed in claim 1, wherein the malware capability extraction subsystem further comprises: receiving the one or more feature vectors; grouping the received one or more feature vectors with historical behavioral instances to generate training samples (see Agrawal paragraph [0019]: Data collected previously from the event sources 150, 152, 154 may be used to train the malware sequence detection system 102. (historical data)); and training the multi-label deep neural network (DNN), based on the training samples, for extraction of the malignant capabilities of the executed malware (see Agrawal paragraph [0030]: The max-pooled layer output is classified at an operation 240 to generate a probability of the event sequence including malware. In one implementation, the classifying of the max-pooled layer output may use a machine learning classifier such as a neural network, deep neural network, decision tree, boosted decision tree, support vector machine, naïve bayes, or logistic regression. Subsequently, a determining operation 260 compares the probability value P.sub.m generated by the classification block with a second threshold to determine whether the event sequence, such as the event sequence 140 disclosed in FIG. 1 includes any malware.; paragraph [0036]: The classification block 430 may be a neural network or a deep neural network (multi-layer perceptron (MLP)) which includes one or more hidden layers 432 and an output layer 434.)

Regarding Claims 3 and 7, Agrawal-BinHuraib discloses the system as claimed in claim 1 and the method as claimed in claim 5, executed in a secure environment (see Agrawal paragraph [0016]: due to the fact that certain commands have to be run in some order related to the functionality of the malware or in combination with some other sequence of malware instructions for the malware to take effect, defense is possible if the software can be executed in a secure environment and malicious actions can be detected during this emulation.)

Agrawal does not specifically disclose that the established isolated computing environment is a sandbox environment. However, BinHuraib discloses wherein the established isolated computing environment is a sandbox environment (see BinHuraib paragraph [0031]: the dynamic analysis component 130 can execute detected malware in a safe computing environment, called a sandbox environment. As used herein, the term “sandbox environment” can refer to an isolated testing environment that can enable programs (e.g., malware) and/or files to be inspected, opened, and/or executed without affecting the computing system hosting the sandbox environment. (sandbox, isolated execution environment))

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Agrawal such that the established isolated computing environment is a sandbox environment, as taught by BinHuraib. One of ordinary skill in the art would have been motivated to employ the teachings of BinHuraib for the enhanced security of a system that enables execution of malware in an isolated and secure execution environment.
(see BinHuraib paragraph [0031])

Regarding Claims 4 and 8, Agrawal-BinHuraib discloses the system as claimed in claim 1 and the method as claimed in claim 5, wherein the one or more malignant capabilities of the executed malware comprises at least one of a process injection capability, an anti-debugging capability, a scanning capability, a discover running processes, a crypto ransomware, an evasion capability, an alter configuration capability, an installed software exploration capability, a registry modification capability, a service impairment capability and a spying capability (see Agrawal paragraph [0017]: In other instances, the malware may be a series of execution codes which is injected into a running process. (selected: a process injection capability))

Regarding Claim 6, Agrawal-BinHuraib discloses the method as claimed in claim 5, wherein classifying, by the multi-label deep neural network (DNN), the received feature vectors based on one or more malignant capabilities of the executed malware, further comprises: receiving the one or more feature vectors; grouping the received one or more feature vectors with historical behavioral instances to generate training samples; and training the multi-label deep neural network (DNN), based on the training samples, for extraction of the malignant capabilities of the executed malware (see Agrawal paragraph [0035]: The vectors of the event vector sequence 406 is input to the recurrent layer 410. In one implementation, the recurrent layer 410 may act as a language model.; paragraph [0019]: Data collected previously from the event sources 150, 152, 154 may be used to train the malware sequence detection system 102. (historical data); paragraph [0030]: The max-pooled layer output is classified at an operation 240 to generate a probability of the event sequence including malware. In one implementation, the classifying of the max-pooled layer output may use a machine learning classifier such as a neural network, deep neural network, decision tree, boosted decision tree, support vector machine, naïve bayes, or logistic regression. Subsequently, a determining operation 260 compares the probability value P.sub.m generated by the classification block with a second threshold to determine whether the event sequence, such as the event sequence 140 disclosed in FIG. 1 includes any malware.; paragraph [0036]: The classification block 430 may be a neural network or a deep neural network (multi-layer perceptron (MLP)) which includes one or more hidden layers 432 and an output layer 434.)

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CARLTON JOHNSON whose telephone number is (571)270-1032. The examiner can normally be reached 12-9 PM (most days).

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Shewaye Gelagay, can be reached at 571-272-4219. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.

For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CJ/
February 9, 2026
/KHOI V LE/
Primary Examiner, Art Unit 2436

Prosecution Timeline

Aug 22, 2024
Application Filed
Feb 11, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604197
METHODS AND SYSTEMS FOR ALLOWING DEVICE TO SEND AND RECEIVE DATA
2y 5m to grant Granted Apr 14, 2026
Patent 12526638
METHODS AND SYSTEMS FOR ALLOWING DEVICE TO SEND AND RECEIVE DATA
2y 5m to grant Granted Jan 13, 2026
Patent 12515614
ELECTRONIC CONTROL UNIT AND COMMUNICATION SYSTEM
2y 5m to grant Granted Jan 06, 2026
Patent 12518656
SECRET SIGMOID FUNCTION CALCULATION SYSTEM, SECRET LOGISTIC REGRESSION CALCULATION SYSTEM, SECRET SIGMOID FUNCTION CALCULATION APPARATUS, SECRET LOGISTIC REGRESSION CALCULATION APPARATUS, SECRET SIGMOID FUNCTION CALCULATION METHOD, SECRET LOGISTIC REGRESSION CALCULATION METHOD AND PROGRAM
2y 5m to grant Granted Jan 06, 2026
Patent 12452239
METHODS AND SYSTEMS FOR ALLOWING DEVICE TO SEND AND RECEIVE DATA
2y 5m to grant Granted Oct 21, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 58%
With Interview: 90% (+32.1%)
Median Time to Grant: 4y 11m
PTA Risk: Low
Based on 352 resolved cases by this examiner. Grant probability derived from career allow rate.
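The "With Interview" figure is consistent with simply adding the examiner's interview lift to the base grant probability. A one-line check (the additive model is inferred from the numbers shown on this page, not a documented formula):

```python
base = 58.0            # career allow rate, used here as the base grant probability
interview_lift = 32.1  # percentage-point lift for cases with an examiner interview

with_interview = round(base + interview_lift)  # 58.0 + 32.1 = 90.1 -> 90
print(f"{with_interview}%")  # prints "90%"
```

If the lift were multiplicative (58% × 1.321 ≈ 77%) the panel would show a different number, so the additive reading matches the displayed 90%.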
