Prosecution Insights
Last updated: April 19, 2026
Application No. 18/306,861

MALWARE ANALYSIS USING GROUP TESTING

Final Rejection — §103, §112
Filed: Apr 25, 2023
Examiner: SHIFERAW, ELENI A
Art Unit: 2497
Tech Center: 2400 — Computer Networks
Assignee: Avast Software s.r.o.
OA Round: 2 (Final)
Grant Probability: 37% (At Risk)
Expected OA Rounds: 3-4
Time to Grant: 5y 1m
With Interview: 73%

Examiner Intelligence

Career Allow Rate: 37% (49 granted / 132 resolved; -20.9% vs TC avg)
Interview Lift: +35.5% across resolved cases with interview
Avg Prosecution: 5y 1m
Total Applications: 142 across all art units (10 currently pending)

Statute-Specific Performance

§101: 14.5% (-25.5% vs TC avg)
§103: 49.7% (+9.7% vs TC avg)
§102: 18.1% (-21.9% vs TC avg)
§112: 9.5% (-30.5% vs TC avg)
Tech Center averages are estimates • Based on career data from 132 resolved cases
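A quick consistency check on these figures (our assumption about how the deltas are computed, not a documented formula): subtracting each reported delta from the examiner's per-statute rate should recover the Tech Center average, and all four statutes recover the same baseline of roughly 40%, which suggests the dashboard compares against a single overall TC average rather than per-statute averages.

```python
# Hypothetical reconstruction: TC average = examiner rate minus reported delta.
# The figures are taken from the table above; the formula is an assumption.
stats = {
    "§101": (14.5, -25.5),
    "§103": (49.7, +9.7),
    "§102": (18.1, -21.9),
    "§112": (9.5, -30.5),
}
for statute, (rate, delta) in stats.items():
    tc_avg = round(rate - delta, 1)
    print(statute, tc_avg)  # each works out to 40.0
```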

Office Action

Grounds of rejection: §103, §112
DETAILED ACTION

In response to communications received 4/25/2023, this is the first Office action on the merits. Claims 1–20 are pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Drawings

The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they include the following reference character(s) not mentioned in the description: 304. Corrected drawing sheets in compliance with 37 CFR 1.121(d), or an amendment to the specification to add the reference character(s) in the description in compliance with 37 CFR 1.121(b), are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Specification

The use of the terms Microsoft Word and Bluetooth, which are trade names or marks used in commerce, has been noted in this application. The terms should be accompanied by the generic terminology; furthermore, the terms should be capitalized wherever they appear or, where appropriate, include a proper symbol indicating use in commerce, such as ™, SM, or ®, following the term.
Although the use of trade names and marks used in commerce (i.e., trademarks, service marks, certification marks, and collective marks) is permissible in patent applications, the proprietary nature of the marks should be respected and every effort made to prevent their use in any manner which might adversely affect their validity as commercial marks.

The disclosure is objected to because of the following informalities: [0035] recites that “Set 1 and Set 4 each have malicious behavior in at least one of their instruction set sequences”. However, based on the Figure 2 representation, it appears that the “Set 4” recitation should be replaced with “Set 2” to conform with the figure and the idea described in [0035]. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1–20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement.
The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Claim 1 recites the limitation “isolating the groups having the sequence of computer instructions that is likely malicious”. Independent claims 12 and 20 recite the same limitation. The limitation in question does not satisfy the written description requirement under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph. There exist only mere recitations of the idea of isolation, without any written description support regarding specifically how the isolation takes place or what algorithm is used to perform said isolation of the specific groups of instructions. Further, the recitations of isolation within the written specification are directed only to the isolation of computing environments, not the isolation of the particular identified instructions.

Claim 6 recites the limitation “wherein using group testing comprises using a nested group testing algorithm”. Claim 17 recites the same limitation. There exists no mention or description of the particular “nested” group testing algorithm. Although the specification describes a group testing algorithm and how it accomplishes the determination of malicious instructions in [0029], the specification is silent as to the particularity of a “nested” group testing algorithm. The Examiner sees the “nested group testing algorithm” as analogous to a testing algorithm performed on nested data compiled in a group. The specification does not describe the limitation in sufficient detail so that one of ordinary skill in the art would recognize that the applicant had possession of the claimed invention.
MPEP 2163(I)(A) states “…issues of adequate written description may arise even for original claims, for example, when an aspect of the claimed invention has not been described with sufficient particularity such that one skilled in the art would recognize that the inventor had possession of the claimed invention at the time of filing”. Merely pointing to an original claim does not satisfy the written description requirement, unless the claim itself conveys enough information to show that the inventor had possession of the claimed invention at the time of filing.

MPEP 2161.01 states “Similarly, original claims may lack written description when the claims define the invention in functional language specifying a desired result but the specification does not sufficiently describe how the function is performed or the result is achieved. For software, this can occur when the algorithm or steps/procedure are not explained at all or are not explained in sufficient detail (simply restating the function recited in the claim is not necessarily sufficient). In other words, the algorithm or steps/procedure taken must be described with sufficient detail so that one of ordinary skill in the art would understand how the inventor intended the function to be performed”. See MPEP 2163.02 and 2181, subsection IV.

As in MPEP 2161.01(I), “The description requirement of the patent statute requires a description of an invention, not an indication of a result that one might achieve if one made that invention”. It is not enough that one skilled in the art could design a method to achieve the claimed function, because the specification must explain how the inventor intends to achieve the claimed function to satisfy the written description requirement. See, e.g., Vasudevan Software, Inc. v. MicroStrategy, Inc., 782 F.3d 671, 681–683, 114 USPQ2d 1349, 1356–57 (Fed. Cir. 2015).
Therefore, the specification does not provide a disclosure of the algorithm in sufficient detail to demonstrate to one of ordinary skill in the art that the inventor possessed the invention under 35 U.S.C. 112(a). Claims 2–5, 7–11, 13–17, and 18–19 fall accordingly.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1–5, 7–8, 12–16, 18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Varda (US 10,805,323 B1) in view of Geuk et al. (KR 20110087826 A), hereinafter Geuk.

Regarding Claim 1, Varda discloses “A method of identifying malicious activity in a plurality of sequences of computer instructions, comprising: identifying a plurality of sequences of computer instructions of interest” (Varda: Col. 5, ln. 1–4 describes how a piece of third-party code is executed in some sandboxed process; Col. 12, ln. 26–27 discloses that multiple other code pieces (i.e., computer instructions) are executed by a process in multiple other isolated execution environments); “determining whether each of the groups has at least one executed sequence of computer instructions that is likely malicious” (Varda: Col. 16, ln. 61–63 describes how behavior of the executed third-party code in each execution environment is indicative of a potential speculative execution attack (i.e., malicious instructions included therein in order to perform the attack)); “and upon determining whether each of the groups has at least one executed sequence of computer instructions that is likely malicious, isolating the groups having the sequence of computer instructions that is likely malicious” (Varda: Col. 4, ln. 62–67 describes a process where an attack is slowed down, which allows the system to identify malicious behavior and, if identified, isolate the execution of that code piece in a separate, sandboxed process).

Varda discloses the identification of computer instructions of interest, determination of whether they are malicious, and isolation of the groups of instructions of interest, but fails to expressly disclose “assigning the plurality of sequences of computer instructions into two or more groups”. However, analogous art from the same field of endeavor, Geuk, teaches this: [0041] describes how profiles of suspected code (i.e., sequences of computer instructions) are automatically grouped based on their profile and exhibited behavior.
An attachment contains multiple sequences of computer instructions; “executing a virtual machine sandbox for each of the two or more groups” (Geuk: [0016] describes how attachments in emails containing code (i.e., sequences of computer instructions) are executed on a virtual machine prior to execution on an “actual” system); “executing each of the plurality of sequences of computer instructions in the virtual machine sandbox into which the sequence of computer instructions has been assigned” (Geuk: [0027] describes how an attachment from an email is extracted and then executed on a virtual machine to detect whether it is malware).

Therefore, based on Varda in view of Geuk, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of Geuk to the system of Varda in order to execute attachments in a virtual machine first, as opposed to an actual machine, to strengthen the security level of the system (Geuk, ln. 90–92).

Regarding Claim 2, Varda further discloses “and determining whether each of the different groups has at least one executed sequence of computer instructions that is likely malicious” (Varda: Col. 16, ln. 61–63 describes how behavior of the executed third-party code in each execution environment is indicative of a potential speculative execution attack (i.e., malicious instructions included therein in order to perform the attack)).

Varda discloses the determination of malicious groups of instructions, but fails to expressly disclose “The method of identifying malicious activity in a plurality of sequences of computer instructions of claim 1, further comprising: assigning the plurality of sequences of computer instructions into two or more different groups”. However, Geuk teaches this: [0041] describes how profiles of suspected code (i.e., sequences of computer instructions) are automatically grouped based on their profile and exhibited behavior.
An attachment contains multiple sequences of computer instructions; “executing a virtual machine sandbox for each of the two or more different groups” (Geuk: [0016] describes how attachments in emails containing code (i.e., sequences of computer instructions) are executed on a virtual machine prior to execution on an “actual” system); “executing each of the plurality of sequences of computer instructions in the virtual machine sandbox into which the sequence of computer instructions has been assigned” (Geuk: [0027] describes how an attachment from an email is extracted and then executed on a virtual machine to detect whether it is malware).

Therefore, based on Varda in view of Geuk, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of execution of computer instructions in virtual machines of Geuk to the system of Varda in order to simplify the detection of malicious activity across a large number of files, as the time required for system restoration is significantly reduced compared to an actual system that builds the system, executes the code, and determines whether there is malicious activity (Geuk, ln. 99–102).
Regarding Claim 3, Geuk further discloses “The method of identifying malicious activity in a plurality of sequences of computer instructions of claim 2, further comprising repeating the assigning the plurality of sequences of computer instructions, the executing a virtual machine sandbox, the executing each of the plurality of sequences of computer instructions, and the determining whether each of the different groups has at least one executed sequence of computer instructions that is likely malicious using different groupings of the plurality of sequences of computer instructions until it is possible to determine whether each of the plurality of sequences of computer instructions is likely malicious” (Geuk: [0018] describes a process where, if the attachment is determined to be malware, a new virtual machine is initialized, and new attachments are selected and monitored again; the process is thus performed a second (or subsequent) time in order to reduce the time for detecting malware even in large volumes of emails). Varda discloses the determination of whether the executed sequence of computer instructions is indicative of malice.

Therefore, based on Varda in view of Geuk, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of repetition of execution of computer instructions in virtual machines of Geuk to the system of Varda in order to significantly reduce the time required for system restoration and ease the detection of malware, even in large amounts (Geuk, [0018]).
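Editor's note on the claimed technique: the loop recited in claims 1-3 (assign sequences to groups, execute each group in a sandbox, test, and regroup until individual sequences can be classified) is the classic adaptive group-testing pattern. The sketch below is a generic halving scheme, not the applicant's algorithm (the §112(a) rejection above rests on that algorithm being undisclosed). The `group_is_malicious` oracle is a hypothetical stand-in for executing a group in a virtual machine sandbox and observing its behavior.

```python
def find_malicious(sequences, group_is_malicious):
    """Return the sequences flagged as likely malicious via halving group tests.

    `group_is_malicious(group)` is a hypothetical oracle: True if running the
    group in a sandbox exhibits malicious behavior anywhere in the group.
    """
    suspects = []

    def split(group):
        if not group:
            return
        if not group_is_malicious(group):   # whole sandbox ran clean: clear the group
            return
        if len(group) == 1:                 # isolated a single malicious sequence
            suspects.append(group[0])
            return
        mid = len(group) // 2
        split(group[:mid])                  # regroup and re-test each half
        split(group[mid:])

    split(list(sequences))
    return suspects

# Demo: eight sequences, one of which would misbehave in the sandbox.
sequences = [f"seq{i}" for i in range(8)]
assert find_malicious(sequences, lambda g: "seq5" in g) == ["seq5"]
```

With eight sequences and one malicious member, this halving scheme needs seven sandbox runs versus eight individual executions; the saving grows quickly when malicious sequences are rare, since each clean sandbox run clears an entire group at once.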
Regarding Claim 4, Geuk further discloses “The method of identifying malicious activity in a plurality of sequences of computer instructions of claim 3, further comprising using a group testing algorithm to determine whether each of the plurality of sequences of computer instructions is likely malicious” (Geuk: [0052] describes a method of group testing where a tree structure is built and meaningful groups are extracted from the structure; a difference in distance (or “inconsistency”) is calculated for each of the groups, and the inconsistency measure is used to separate the tree into more separate groups; [0051] describes how the similarity is based on behavioral fingerprints of the code).

Therefore, based on Varda in view of Geuk, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of execution of computer instructions in virtual machines of Geuk to the system of Varda in order to meaningfully group similar malware behavior profiles, efficiently classify similar malware, and assign meaningful labels to each classification (Geuk, [0040]).

Regarding Claim 5, Geuk further discloses “The method of identifying malicious activity in a plurality of sequences of computer instructions of claim 1, further comprising using group testing to determine whether each of the plurality of sequences of computer instructions is likely malicious” (Geuk: [0052] describes a method of group testing where a tree structure is built and meaningful groups are extracted from the structure; a difference in distance (or “inconsistency”) is calculated for each of the groups, and the inconsistency measure is used to separate the tree into more separate groups; [0051] describes how the similarity is based on behavioral fingerprints of the code).
Therefore, based on Varda in view of Geuk, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of execution of computer instructions in virtual machines of Geuk to the system of Varda in order to meaningfully group similar malware behavior profiles, efficiently classify similar malware, and assign meaningful labels to each classification (Geuk, [0040]).

Regarding Claim 7, Varda further discloses “The method of identifying malicious activity in a plurality of sequences of computer instructions of claim 1, wherein determining whether each of the groups has at least one executed sequence of computer instructions that is likely malicious comprises analyzing a behavior of the sequences of computer instructions assigned to each of the virtual machine sandboxes” (Varda: Col. 16, ln. 61–63 discloses a determination being made based on behavior analysis of the piece of code; Geuk describes the use of a sandbox virtual machine throughout).

Regarding Claim 8, Varda further discloses “The method of identifying malicious activity in a plurality of sequences of computer instructions of claim 1, further comprising identifying sequences of the plurality of the computer instruction sequences determined likely to be malicious to a user” (Varda: Col. 5, ln. 1–8 describes the monitoring and determination of a piece of code exhibiting behaviors of a speculative execution attack; these attacks can be dangerous to a user because they target a CPU and can result in leakage of user data).

System claims 12–16 and 18 are drawn to the system using the corresponding method claimed in claims 1–5 and 7. Therefore, system claims 12–16 and 18 correspond to method claims 1–5 and 7 and are rejected for the same reasons of obviousness as recited above. Method claim 20 is drawn to the use of the method of corresponding system claim 12.
Therefore, method claim 20 corresponds to system claim 12 and is rejected for the same reasons of obviousness as stated above.

Claim(s) 6 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Varda (US 10,806,323 B1) in view of Geuk et al. (KR 20110087826 A), hereinafter Geuk, and further in view of Kasper et al. (US 2020/0104507 A1), hereinafter Kasper.

Regarding Claim 6, the combination of Varda and Geuk discloses the above subject matter, but fails to expressly disclose “The method of identifying malicious activity in a plurality of sequences of computer instructions of claim 5, wherein using group testing comprises using a nested group testing algorithm”. However, analogous art from the same field of endeavor, Kasper, teaches this: [0047] describes how a depth test is performed progressively through nested levels of parameter areas. These parameter areas store parameter lists (i.e., code values) which contain expected values and pointers to said parameter areas [0024]. The depth test is performed to determine whether each parameter area contains expected values. Malicious parameters may be used in these parameter lists to expose potential security vulnerabilities [0042].

Therefore, based on Varda in view of Geuk, and further in view of Kasper, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of Kasper to the system of Varda and Geuk in order to conduct testing profiles for vulnerability testing of authorized services and code (Kasper, [0006]).

System claim 17 is drawn to the use of the system of corresponding method claim 6. Therefore, system claim 17 corresponds to method claim 6 and is rejected for the same reasons of obviousness as stated above.

Claim(s) 9–10 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Varda (US 10,806,323 B1) in view of Geuk et al. (KR 20110087826 A), hereinafter Geuk, and further in view of Dhankha et al. (US 2020/0026851 A1), hereinafter Dhankha.

Regarding Claim 9, the combination of Varda and Geuk discloses the above subject matter, but fails to expressly disclose “The method of identifying malicious activity in a plurality of sequences of computer instructions of claim 1, further comprising selecting the plurality of sequences of computer instructions of interest using static analysis”. However, analogous art from the same field of endeavor, Dhankha, teaches this: [0027] describes how, following a dynamic malware analysis of a software program (which contains computer code), the system may perform a static malware analysis based on various identifications of the software. Following the analysis, the code pieces containing malware are then compiled into a report.

Therefore, based on Varda in view of Geuk, and further in view of Dhankha, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of Dhankha to the system of Varda and Geuk in order to perform this analysis to generate a malware report, which results in one or more actions being taken on the malware results (Dhankha, [0003]).

Regarding Claim 10, Varda further discloses “The method of identifying malicious activity in a plurality of sequences of computer instructions of claim 9, further comprising adjusting time spent executing each of the plurality of sequences of computer instructions in the virtual machine sandbox into which the sequence of computer instructions has been assigned based on the static analysis” (Varda: Col. 4, ln. 41–43 describes performing tasks at different times (i.e., concurrently or sequentially) in order to adjust the implicit “timers” set by running the code; Geuk describes the use of a sandbox virtual machine throughout, and Dhankha describes the use of static malware analysis and malware reports of the chosen code pieces exhibiting malicious behavior).

System claim 19 is drawn to the use of the system of corresponding method claim 10. Therefore, system claim 19 corresponds to method claim 10 and is rejected for the same reasons of obviousness as stated above.

Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Varda (US 10,806,323 B1) in view of Geuk et al. (KR 20110087826 A), hereinafter Geuk, further in view of Dhankha et al. (US 2020/0026851 A1), hereinafter Dhankha, and further in view of Seifert et al. (US 2021/0141897 A1), hereinafter Seifert.

Regarding Claim 11, the combination of Varda, Geuk, and Dhankha discloses the above subject matter, but fails to expressly disclose “The method of identifying malicious activity in a plurality of sequences of computer instructions of claim 9, further comprising adjusting assigning the plurality of sequences of computer instructions into two or more groups based on the static analysis, such that sequences of computer instructions determined more likely to be malicious using static analysis are assigned to smaller groups of sequences of computer instructions than sequences of computer instructions determined more likely to be benign”. However, analogous art from the same field of endeavor, Seifert, teaches this: [0046] describes grouping content (i.e., computer instructions) that is similar. Malicious content is grouped only with other families of similar malicious content, whereas all benign content is grouped together.
Therefore, based on Varda in view of Geuk, further in view of Dhankha, and further in view of Seifert, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of Seifert to the system of Varda, Geuk, and Dhankha in order to employ similarity scores to enable the detection of malicious content (Seifert, [0004]).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. The references present in the PTO-892 are cited to further demonstrate the state of the art with respect to malware detection using various testing algorithms.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Alexandria C. Rodriguez, whose telephone number is (703) 756-1827. The examiner can normally be reached 08:00-16:00 Eastern. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jorge L. Ortiz-Criado, can be reached at (571) 272-7624. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/ALEXANDRIA C RODRIGUEZ/
Examiner, Art Unit 2496

/JORGE L ORTIZ CRIADO/
Supervisory Patent Examiner, Art Unit 2496

Prosecution Timeline

Apr 25, 2023
Application Filed
Apr 21, 2025
Non-Final Rejection — §103, §112
Jul 31, 2025
Response Filed
Mar 15, 2026
Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 7983414
PROTECTED CRYPTOGRAPHIC CALCULATION
Granted Jul 19, 2011 • 2y 5m to grant

Patent 7984512
INTEGRATING SECURITY BY OBSCURITY WITH ACCESS CONTROL LISTS
Granted Jul 19, 2011 • 2y 5m to grant

Patent 7965844
SYSTEM AND METHOD FOR PROCESSING USER DATA IN AN ENCRYPTION PIPELINE
Granted Jun 21, 2011 • 2y 5m to grant

Patent 7954164
METHOD OF COPY DETECTION AND PROTECTION USING NON-STANDARD TOC ENTRIES
Granted May 31, 2011 • 2y 5m to grant

Patent 7954156
METHOD TO ENHANCE PLATFORM FIRMWARE SECURITY FOR LOGICAL PARTITION DATA PROCESSING SYSTEMS BY DYNAMIC RESTRICTION OF AVAILABLE EXTERNAL INTERFACES
Granted May 31, 2011 • 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.
Powered by AI — typically takes 5-10 seconds

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 37%
With Interview: 73% (+35.5%)
Median Time to Grant: 5y 1m
PTA Risk: Moderate
Based on 132 resolved cases by this examiner. Grant probability is derived from the career allow rate.
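For readers checking the arithmetic behind these projections, the figures are mutually consistent under one simple assumption (ours, not a documented formula): the grant probability is the raw career allow rate, and the with-interview figure adds the interview lift in percentage points.

```python
# Hypothetical reconstruction of the dashboard's headline figures;
# the formula is an assumption, not documented by the tool.
granted, resolved = 49, 132             # examiner's career record

allow_rate = granted / resolved * 100   # ~37.1%, shown as "Grant Probability: 37%"
interview_lift = 35.5                   # percentage points, shown as "+35.5%"
with_interview = allow_rate + interview_lift  # ~72.6%, shown as "73%"

print(round(allow_rate), round(with_interview))  # 37 73
```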
