Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on October 17, 2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Response to Arguments
Applicant's arguments filed October 17, 2025 have been fully considered but they are not persuasive.
In pages 1-2 of the remarks, Applicant requests that the objections to paragraphs [0028] and [0054] of the Specification be withdrawn, as the language "detection engine 126 may detect malware in a backup such as a locked copy 124" was added in the amendment filed on January 31, 2025 and removed on June 5, 2025, and as the locked copy 124 and detection engine 126 are shown in Fig. 1. Furthermore, the trademarked terms ("POWERPROTECT™", "CYBERRECOVERY™", "DELL™ EMC™ DATADOMAIN™") were restored to paragraph [0054]. Accordingly, Applicant requests that the objections to the Specification be withdrawn.
The objections to the Specification regarding paragraphs [0028] and [0054] are withdrawn.
Applicant notes that claims 1, 7, 9-11, 17, and 20 were previously rejected under 35 U.S.C. 112(a), and that claims 7, 9, 10, 17, and 20 were rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which Applicant regards as the invention. While Applicant disagrees with the rejections, the claims have been amended to expedite prosecution, and Applicant requests that the rejections under 35 U.S.C. 112 be withdrawn.
Examiner maintains the rejections under 112(b). Claim 7, lines 4-5 ("so that the malware continues to execute its programmed behaviors") still recites an intended result. Per MPEP 2173.05(g), "Functional Limitations," language reciting an intended result does not provide a clear-cut indication of scope, and a person of ordinary skill in the art would not understand whether a claim limitation reciting an intended result is an aspect of the invention or not. For similar reasons, claim 9, lines 3-4 ("such that the malware interacts with the false data as if it were the original data") is rejected for reciting a similar intended result. Claims 17 and 20 recite limitations similar to claims 7 and 9, respectively, and are rejected for the same reasons. Furthermore, the term "false data" in claim 9, line 2, and claim 10, line 2, remains rejected under 112(b): the term is indefinite, a person of ordinary skill in the art would not understand what 'false data' is or how it pertains to the invention of inducing malware to interact with a recovered production system, and Applicant has not pointed out where the term is defined or clarified in the Specification. Similar issues exist under 112(b) for the term 'real data' in claim 9, line 3, as Applicant has not pointed out where support for that term is found in the Specification, and it is unclear how 'real data' differs from 'false data' in the invention. Accordingly, Examiner maintains the rejections of claims 7, 9, and 10 under 112(b) for the reasons above; claims 17 and 20 are likewise rejected for reciting limitations similar to claims 7 and 9, respectively. Regarding the rejections under 112(a), Applicant merely amends claims 1, 7, 9-11, 17, and 20 to traverse the rejections without pointing out where the amended claims are supported, and there does not appear to be a written description of the claim 10 limitation 'false data structured with naming and metadata patterns expected by the malware based on the recovered production system' in the application as filed. Examiner therefore maintains the rejections of claims 1, 7, 9-11, 17, and 20 under 112(a), as the claim limitations lack support in the Specification and Applicant does not point out where support for the amended limitations can be found.
Applicant notes that the Office Action rejected claims 11-20 under 35 U.S.C. 102(a)(1) as being anticipated by Gupta (US 10885191), and claims 1-10 under 35 U.S.C. 103 as being unpatentable over Gupta in view of Huang (US 20200210575). Applicant argues that Gupta does not disclose or suggest deploying a single infected backup into multiple sandbox environments started from a common baseline and then executing different scenarios in parallel using different agents to provoke different malware behaviors in each sandbox environment. Applicant further argues that Gupta does not teach instantiating multiple sandboxed recovered production systems in parallel from the same infected backup and varying the scenarios in each environment, nor does Gupta describe analyzing differences between outputs from multiple environments to generate insights; rather, Gupta uses sandboxes individually to detect malware attributes and stores the attributes in a database, with no capability to run parallel scenarios and derive comparative insights. Finally, Applicant argues that Gupta's system lacks an environment-embedded learning engine in each recovered production system, and that while Examiner relies on Huang as having a learning engine, Huang does not use the learning engine in multiple sandbox environments, nor does Huang suggest executing different agent scenarios in each environment for comparative analysis.
Examiner disagrees. Gupta contains simulation modules 210 inside malware detonation modules 145-a in Fig. 1, as stated in [Col. 6, line 66-Col. 7, line 3], and in [Col. 9, lines 55-57] a simulation module 210 contains a malware execution module 415 (Fig. 4). The simulation may involve two devices and a server, including a target device that the malware intends to attack (a first device). This corresponds to an infected backup being provided to multiple sandbox environments, as at least a second device and a server can each execute a simulation of the first device. Furthermore, Applicant's claims do not recite multiple sandbox environments being run simultaneously, nor does Applicant state where in the Specification the invention suggests or states simultaneous sandboxes. Gupta also discloses analyzing differences between the outputs generated from each of the working environments: in [Col. 10, lines 21-25], after simulations are performed, a database is used to discover malware attributes from multiple simulated environments, and in [Col. 10, lines 19-25] a malware attributes identification module 505 (Fig. 5) uses the attributes to analyze devices and determine how the malware attributes were discovered through different simulated environments. Finally, the limitation of a learning engine in each recovered production system is taught by Huang as a learning engine 110 (Fig. 1) that is part of an adversarial malware detector 100; in combination with other aspects of Gupta's system, including the analysis system of a simulated environment creating signatures of malware in [Col. 4, lines 16-21], this corresponds to the environment-embedded learning engine of the Applicant. Furthermore, executing different agent scenarios in each environment for comparative analysis is disclosed in [Col. 10, lines 21-25] of Gupta, where the scenarios performed in a simulated environment differ from one another, and the database includes malware attributes from other environments as well, allowing comparison between those environments. Examiner therefore rejects claims 1-10 under 35 U.S.C. 103 over Gupta in view of Huang for the reasons stated above. Claims 11-20 recite limitations similar to claims 1-10 and are rejected for similar reasons.
Claim Rejections - 35 USC § 112(b)
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 7, 9-10, 17, and 20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
The phrase ‘so that the malware continues to execute its programmed behaviors’ in claim 7, lines 3-4, is considered exemplary claim language. As stated in MPEP 2173.05(g), “Functional Limitations”, the use of claim language reciting an intended result does not “provide a clear cut indication of scope because it imposed no structural limits”. In the case of claim 7, having the malware operate without detecting that it has been isolated or observed is an intended result, and it is unclear whether the intended result is an aspect of the invention itself or not.
The term “false data” in claim 9, line 2, is a relative term which renders the claim indefinite. The term “false data” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. The term ‘false data’ is not described anywhere in the Specification, and it is not known how malware comes to view false data that mimics real data in properties such as naming, metadata, and structure as authentic. Furthermore, the term ‘real data’ is indefinite and is not defined in paragraph [0038] of the Specification; what makes data appear ‘real’ to malware remains unclear, as does how ‘real data’ is distinguished from the ‘false data’ that the malware encounters in a recovered production system.
Claim 10 is rejected for similar reasons as claim 9, as the term ‘false data’ also appears in line 2, and it is not known what constitutes ‘real’ with regard to false data being ‘structured with naming and metadata patterns expected by the malware’, i.e., how the malware is tricked into treating false data as real.
Claim 17 contains claim limitations similar to those of claim 7 described above with regard to intended results, and claim 17 is therefore rejected for similar reasons as claim 7.
Claim 20 is rejected for similar reasons as claim 9, as the term ‘authentic’ appears in line 2, and it is not known what constitutes authenticity with regard to ‘naming, metadata, and structure’ such that decoy data tricks the malware.
Claim Rejections - 35 USC § 112(a)
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
In amended claim 1, line 4, the limitation ‘recovering the infected backup to a plurality of sandboxed working environments provided by a forensic infrastructure, wherein each of the working environments includes a recovered production system that is generated from the same infected backup and includes applications, data, and a learning engine’ is not sufficiently described in the specification. Paragraph [0031] describes that an infected snapshot may be recovered to a working environment as a recovered production system. As claimed, however, the infected backup is recovered to working environments that already include a recovered production system, which is not described in the specification as originally filed.
Furthermore, in amended claim 1, line 4, the phrase ‘…a plurality of sandboxed working environments…’ lacks support in the Specification. Although paragraph [0015] mentions multiple working environments, such as sandboxes, the Specification lacks sufficient disclosure of the technical steps or structure needed to ensure reproducible identical starting states and safe containment/emulation. Without such enabling disclosure, a person of ordinary skill in the art would face undue experimentation, and the specification fails to show possession.
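For purposes of illustration only, the kind of step-by-step disclosure that might evidence possession of reproducible, identical sandbox baselines could resemble the following Python sketch. All names and structures below are the examiner's hypothetical illustration and are not drawn from Applicant's Specification.

import copy
import uuid

def recover_to_sandboxes(infected_snapshot: dict, count: int) -> list[dict]:
    """Restore the same infected backup into `count` isolated environments."""
    environments = []
    for _ in range(count):
        environments.append({
            "id": str(uuid.uuid4()),
            # A deep copy gives each sandbox an identical, independent
            # baseline of the recovered production system.
            "recovered_system": copy.deepcopy(infected_snapshot),
            "network": "isolated",  # containment: no route out of the sandbox
        })
    return environments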
Furthermore, in lines 8-9, the limitation ‘executing, by a different agent within each working environment, a different scenario on the corresponding recovered production system, each scenario comprising a sequence of predefined actions on the corresponding production system to provoke behavior from the malware’ raises issues, as the specification does not describe how this limitation is performed. In paragraph [0042], scenarios are performed by agents 310 and 318 of Fig. 3, which appear to be in working environments, but no process is described as to how this is performed. This also raises the issue of new matter, as these limitations were not previously described in the Specification of the Applicant or in the claims.
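By way of a hypothetical illustration only (not Applicant's disclosure), a ‘scenario’ as a sequence of predefined actions executed by an agent might be sketched as follows; every identifier is the examiner's own.

SCENARIOS = {
    "file_activity": ["open_documents", "rename_files", "bulk_copy"],
    "network_probe": ["resolve_dns", "open_socket", "send_beacon"],
}

def run_scenario(env: dict, actions: list[str]) -> list[str]:
    """Apply each predefined action to the recovered system, in order."""
    log = []
    for action in actions:
        # In a full system, each action would manipulate the recovered
        # production system to provoke observable behavior from the malware.
        log.append(f"{env['id']}: executed {action}")
    return log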
Next, in line 11, the phrase ‘collecting, by the forensic engine, outputs generated by the learning engine in each of the working environments, the outputs comprising information about file access, network activity, and process behavior observed during execution of the scenario’ appears in amended claim 1, but the Specification does not provide enough information as to how the limitation is performed. Although paragraph [0032] describes that insights are learned by or gleaned from an output of the learning engine, and paragraph [0045] states that insights can be gained from outputs 312 and 320, it is never made clear that the outputs themselves are collected by the invention. Furthermore, the “learning engine” is configured to generate outputs used to derive malware operational characteristics, yet the specification fails to provide adequate descriptive support showing that Applicant was in possession of the claimed ‘learning engine’ across the full scope of the claim. The specification provides high-level functional statements that a learning engine may ‘monitor’ or ‘learn’ (see Spec [0032]), but does not disclose corresponding structure, algorithms, modules, or example implementations that perform the recited learning functions. Under Ariad, a mere statement of function without structural or algorithmic disclosure is insufficient to satisfy the written description requirement. Accordingly, claim 1 is rejected under 35 U.S.C. 112(a) for lack of written description.
Furthermore, in lines 11-12, the phrase ‘outputs comprising information about file access, network activity, and process behavior observed during execution of the scenario’ is not sufficiently described in the Specification. Similar to the phrase ‘collecting, by the forensic engine, outputs generated by the learning engine in each of the working environments’ in line 11 of claim 1, the lack of information regarding outputs comprising telemetry, file access patterns, and other such properties raises issues, as the Specification, including paragraph [0033], does not describe these specific properties of the outputs.
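As a hypothetical illustration of the missing detail (all field names are the examiner's own and do not appear in the Specification), the claimed outputs might be recorded as follows.

from dataclasses import dataclass, field

@dataclass
class SandboxOutput:
    environment_id: str
    file_access: list[str] = field(default_factory=list)        # paths touched
    network_activity: list[str] = field(default_factory=list)   # hosts/ports contacted
    process_behavior: list[str] = field(default_factory=list)   # processes spawned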
Next, in lines 13-15, the phrase ‘analyzing differences between the outputs generated from each of the working environments to derive insights regarding operational characteristics of the malware’ is not sufficiently described in the Specification. The specification does not indicate that the inventors had possession of the particular software or instructions that would implement the analyzing. In addition, there is no description of analyzing differences between the outputs generated from each of the working environments to derive insights: the Specification states only that “insights may also be generated 408 from the collective insights (e.g., differences...” in paragraph [0047], and this is the only section of the Specification that mentions such functional capability. The specification lacks structural or algorithmic support demonstrating possession of the claimed analysis and the particular ‘insights’ across the claimed scope. Mere assertions that outputs are analyzed are insufficient.
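Continuing the hypothetical SandboxOutput record above, one purely illustrative form of ‘analyzing differences between the outputs’ would be a set difference over per-environment observations; this is the examiner's sketch, not Applicant's method.

def derive_insights(outputs: list[SandboxOutput]) -> dict[str, set[str]]:
    """Set difference over per-environment observations."""
    common = set.intersection(*(set(o.network_activity) for o in outputs))
    insights = {}
    for o in outputs:
        # Behavior unique to one scenario suggests a trigger-dependent
        # operational characteristic of the malware.
        insights[o.environment_id] = set(o.network_activity) - common
    return insights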
Finally, in lines 16-17, the phrase ‘implementing at least one of the insights in a production system to detect, or prevent the malware’ is not sufficiently supported in the Specification. Paragraph [0045] describes that insights may allow malware to be more quickly detected and prevented. Claim 1 requires a ‘learning engine’ that analyzes sandbox outputs to derive insights sufficient to detect or prevent malware, and requires implementing such insights in production. The specification provides high-level, repeated descriptions but lacks detailed procedures, representative parameters, example detection rules, or a description of how insights are converted into remediation actions.
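For illustration only, converting a derived insight into a production detection rule might be as simple as the following hypothetical sketch; the rule format, field names, and example address are the examiner's own.

def insight_to_rule(insight: dict) -> dict:
    """Turn an observed malware characteristic into a simple block rule."""
    return {
        "match": {"destination": insight["beacon_host"]},  # observed endpoint
        "action": "block_and_alert",
    }

# Example: an insight derived in the sandbox becomes a production rule.
rule = insight_to_rule({"beacon_host": "198.51.100.7:443"})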
Dependent claims 2-10 inherit the deficiencies of parent claim 1 and are therefore also rejected under 35 U.S.C. 112(a).
In claim 7, lines 1-2, the phrase ‘working environments is configured with an emulated communication link that allows the malware to communicate with a simulated malware host system’ is not supported by enough information in the Specification as to how the limitation is performed.
Furthermore, the phrase ‘by providing expected communication protocols and responses that match those of the original production environment, so that the malware continues to execute its programmed behavior’ in lines 3-4 of claim 7 is not supported anywhere in the Specification. While paragraph [0019] states that malware may be unaware that it has been detected, it does not provide support for the malware also being isolated, and it does not outline a process by which the malware continues to execute its programmed behavior through expected communication protocols and responses that match those of the original production environment.
This is simply a recitation of a desired result without a description of how the result is achieved. Nowhere is it described (1) how the claimed communication is implemented, or (2) how the malware is made to ‘continue to execute its programmed behavior’.
As stated in MPEP 2161.01(I), "The description requirement of the patent statute requires a description of an invention, not an indication of a result that one might achieve if one made that invention." It is not enough that one skilled in the art could write a program to achieve the claimed function, because the specification must explain how the inventor intends to achieve the claimed function to satisfy the written description requirement. See, e.g., Vasudevan Software, Inc. v. MicroStrategy, Inc., 782 F.3d 671, 681-683, 114 USPQ2d 1349, 1356, 1357 (Fed. Cir. 2015).
Paragraphs [0018] and [0035] simply state that malware protection operations in working environments are configured to prevent malware from discovering it has been detected, without a procedure or algorithm as to how this is achieved.
This also raises the issue of new matter, as these limitations were not previously described in the Specification of the Applicant or in the claims.
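By way of a hypothetical illustration of what an ‘emulated communication link’ procedure could look like (the examiner's sketch, not Applicant's disclosure), a loopback-only responder might answer a sandboxed program's beacon with a canned, protocol-shaped reply.

import socket

def run_simulated_host(port: int = 8080, reply: bytes = b"200 OK\r\n") -> None:
    """Answer one connection with a canned, protocol-shaped response."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("127.0.0.1", port))  # loopback only: traffic never leaves the sandbox
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            conn.recv(4096)       # consume the beacon
            conn.sendall(reply)   # respond as the expected host would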
In claim 9, lines 2-3, the phrase ‘false data having a structure, file naming, and metadata that correspond to […] real data, such that the malware interacts with the false data as if it were the original data’ has insufficient support in the specification. Paragraphs [0034]-[0035] describe false data that appears real to malware, but no explanation is given as to how this is achieved, and no process is described that would enable someone to make and/or use the invention: ‘metadata’ does not appear anywhere in the Specification, and how a recovered system comprising false data causes the malware to interact with the false data as ‘real data’ remains unclear. This also raises the issue of new matter, as these limitations were not previously described in the Specification of the Applicant or in the claims.
Claim 10 is rejected for similar reasons as claim 9, as the phrase ‘false data structured with naming and metadata patterns expected by the malware based on the recovered production system’ appears in lines 2-3, and claim 10 is also rejected under 35 U.S.C. 112(a) because the process is not described, nor is it explained how it is performed in the invention, and the claim contains issues similar to those of claim 9 above. This also raises the issue of new matter, as these limitations were not previously described in the Specification of the Applicant or in the claims.
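As a purely hypothetical illustration of generating ‘false data’ whose naming and metadata mimic a recovered production system (all values and names are the examiner's own, not Applicant's disclosure), consider the following sketch.

import os
import random
import time

def plant_decoys(target_dir: str, sample_names: list[str], count: int = 10) -> None:
    """Create files whose names and timestamps mimic observed real data."""
    os.makedirs(target_dir, exist_ok=True)
    for i in range(count):
        # Reuse naming patterns observed in the recovered production system.
        name = f"{random.choice(sample_names)}_{i:03d}.docx"
        path = os.path.join(target_dir, name)
        with open(path, "wb") as f:
            f.write(os.urandom(1024))
        # Back-date timestamps so file metadata matches expected patterns.
        past = time.time() - random.randint(86_400, 8_640_000)
        os.utime(path, (past, past))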
Claim 11 is rejected for similar reasons as claim 1, as it recites limitations similar to those found in claim 1, and is also rejected under 35 U.S.C. 112(a).
Dependent claims 12-20 inherit the deficiencies of parent claim 11 and are therefore also rejected under 35 U.S.C. 112(a).
Claim 17, lines 1-3, is rejected for similar reasons as claim 7, as it recites limitations similar to those found in claim 7, and is also rejected under 35 U.S.C. 112(a). This also raises the issue of new matter, as these limitations were not previously described in the Specification of the Applicant or in the claims.
Claim 20, line 3, is rejected for similar reasons as claim 9, as it recites limitations similar to those found in claim 9, and is also rejected under 35 U.S.C. 112(a). This also raises the issue of new matter, as these limitations were not previously described in the Specification of the Applicant or in the claims.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-10 are rejected under 35 U.S.C. 103 as being unpatentable over Gupta (US 10885191 B1) in view of Huang et al. (US 20200210575 A1), hereinafter Huang.
Regarding claim 1, Gupta discloses ‘a method, comprising: receiving an infected backup of a production system at a forensic engine, the infected backup including a malware’ ([Col. 3, lines 39-47] A snapshot includes artefacts that malware looks for and can include a variety of information, including, per [Col. 9, lines 1-19], the files in a file system, BIOS version, and storage information; in conjunction, this corresponds to the infected backup of the Applicant. [Col. 7, lines 12-16] Simulation module 210 in Fig. 2 simulates the first device in a controlled environment, and the artefacts are used to simulate the first device, with the simulation module corresponding to the forensic engine of the Applicant.);
‘recovering the infected backup to a plurality of sandboxed working environments provided by a forensic infrastructure, wherein each of the working environments includes a recovered production system that is generated from the same infected backup and includes applications, data, and a[n] engine’ ([Col. 3, lines 47-59] Fig. 1, software agent 150 may send a snapshot to a predetermined computer system remote to the targeted machine, with a sandbox simulating the environment of the targeted machine, with the malware included. The software agent sends the snapshot to the predetermined computer system after detecting malware, as stated by [Col. 3, lines 47-50] of Gupta. [Col. 6, line 66-Col. 7, line 3] Fig. 2, malware detonation module 145-a can include a simulation module 210, and as shown in Fig. 1, each of the devices and servers contains a malware detonation module, corresponding to a plurality of sandboxed working environments. As stated in [Col. 9, lines 55-57], the simulation module 210 contains a malware execution module 415 that allows malware to execute within the simulated environment. [Col. 12, lines 27-29] Fig. 1 and Fig. 7, malware detonation modules 145-1-3 can perform the method 700, which can be interpreted as at least a second device 170 and server 110 simulating the first device 105. [Col. 9, lines 3-19] The artefacts of a snapshot can include a variety of information, including the files of a file system used by the first device, which can include applications and data, equivalent to the recovered production system; in conjunction with [Col. 3, lines 42-47], in which a sandbox or other simulation environment has access to the data from the snapshot after malware is detected, this corresponds to a recovered production system generated from the same infected backup. [Col. 4, lines 16-21] The analysis system of the simulated environment creates signatures of malware based on analysis, which corresponds to an engine.);
‘executing, by a different agent within each working environment, a different scenario on the corresponding recovered production system, each scenario comprising a sequence of predefined actions on the corresponding recovered production system to provoke behavior from the malware’ ([Col. 4, lines 9-13] The sandbox learns attributes regarding how the malware runs and accesses different resources on a computer, including registries and processes; the malware running corresponds to operational characteristics, and the attributes of the malware being learned correspond to insights of the Applicant. [Col. 10, lines 21-25] A database includes malware attributes from the simulated environment and from other simulated environments, corresponding to outputs of each of the working environments individually and collectively, with each scenario different from the others. [Col. 12, lines 25-27] "FIG. 7 is a flow diagram illustrating one embodiment of a computer-implemented method 700 for using environment context information to detonate malware", which shows an order that must be followed by the software agent in an environment in order to engage or provoke behavior from the malware, corresponding to a sequence of predefined actions.);
‘collecting outputs generated by the engine in each of the working environments, the outputs comprising information about file access patterns, and process activity logs’ ([Col. 10, lines 21-25] A database includes malware attributes from the simulated environment and from other simulated environments, corresponding to outputs of each of the working environments individually and collectively, with each scenario distinct from the others. [Col. 10, lines 2-8] Malware attributes include a destination of the malware within a simulated environment, corresponding to telemetry logs; a sequence of tasks performed, corresponding to file access patterns; and tasks performed by the malware, corresponding to process activity logs of the Applicant, respectively.);
‘analyzing differences between the outputs generated from each of the working environments to derive insights regarding operational characteristics of the malware’ ([Col. 4, lines 9-13] The sandbox learns attributes regarding how the malware runs and accesses different resources on a computer, including registries and processes, which corresponds to deriving insights regarding operational characteristics of the malware. [Col. 10, lines 21-25] A database includes malware attributes from the simulated environment and from other simulated environments, corresponding to outputs generated from each of the working environments individually and collectively, with each scenario different from the others. This also corresponds to analyzing differences between the collected information from the plurality of working environments, as the malware attribute identification module 505 of Fig. 5 can use the attributes to analyze devices and determine how the malware attributes were discovered in different ways in other simulated environments, as stated in [Col. 10, lines 19-25]. In [Col. 6, lines 45-63] of Gupta, the database 120 is described as containing the malware signatures 160, which represent one or more attributes of the malware.);
‘and implementing at least one of the insights in a production system to detect, or prevent the malware’ ([Col. 12, lines 25-27] Fig. 7, method 700 is configured to detonate malware in a simulated environment and use the attributes to perform block 730, performing a security action on the first device after the method 700 is performed. Performing a security action after detonating the malware corresponds to preventing malware.).
Gupta does not appear to disclose, but Huang teaches, the limitations of ‘production system including… learning engine’ ([0022] Adversarial malware detector 100 contains a learning engine 110 as part of the detector. When the learning engine of Huang is combined with the other aspects of the production system in Gupta, the combination teaches the limitation of a production system containing a learning engine, as claimed by the Applicant.);
‘outputs comprising… network activity’ ([0020] Features of a program, which can potentially be malware according to Huang, can include URL calls, corresponding to the network activity of the Applicant.).
Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Gupta and Huang before them, to include Huang’s ‘production system including… learning engine’ and ‘outputs comprising… network activity’ in Gupta’s system performing ‘a method, comprising: receiving an infected backup of a production system at a forensic engine, the infected backup including a malware’. One would have been motivated to make such a combination to enhance security by utilizing a machine learning engine that learns features and outputs a likelihood that a program is benign, as taught by Huang [0022], and to understand the malware’s network behavior, its patterns, and how it avoids detection, as taught by Huang [0016].
Gupta does not appear to disclose, but Huang teaches the limitation of “collect, from the learning engine, scenario-specific information indicative of malware behavior” ([0021] Fig. 1, machine learning engine 110 classifies a program 102 as either benign 124 or malware 126 based on a feature representation 112 that takes into account features of a program that were extracted by feature extractor 108.)
Therefore, one of ordinary skill in the art would have been capable of applying this known method of “collect, from the learning engine, scenario-specific information indicative of malware behavior” in a method for receiving an infected backup and recovering the backup to a plurality of sandboxed working environments, and the results would have been predictable to one of ordinary skill in the art. One of ordinary skill in the art would have been motivated to observe features of a program, such as behaviors, actions, function and API calls, data accesses, and URL calls, in order to determine whether the program is classified as malware (Huang [0020]).
Regarding claim 2, Gupta discloses the method of claim 1 as recited above. Gupta also discloses the limitation of ‘wherein at least one of the scenarios is a predetermined set of actions performed by an agent’ ([Col. 12, lines 33-37] Fig. 7, block 705, a first device may be monitored using a software agent, with the steps in Fig. 7 corresponding to a predetermined set of actions of the Applicant.).
Regarding claim 3, Gupta discloses the method of claim 1 as recited above. Gupta also discloses the limitation of ‘wherein at least one of the scenarios is a rule-based scenario or a rule-based artificial intelligence scenario or a machine learning model scenario performed by an agent’ ([Col. 12, lines 25-27] Fig. 7 shows an order that must be followed by the software agent in order to remove and learn about the malware, which corresponds to a rule-based scenario of the Applicant.).
Regarding claim 4, Gupta discloses the method of claim 1 as recited above. Gupta also discloses the limitation of ‘wherein the each of the working environments are executed in a sandbox’ ([Col. 3, lines 56-59] The sandbox simulates the environment of a first device, or targeted device, to detect and present information about the malware and learn characteristics of the malware, as stated in [Col. 7, lines 22-24].).
Regarding claim 5, Gupta discloses the method of claim 1 as recited above. Gupta also discloses the limitation of ‘wherein at least some of the working environments allow communication between the malware and a malware host system’ ([Col. 7, lines 21-28] Snapshots of the targeted device, executed on a remote device, allow the malware to run in a sandbox environment of the remote device, which corresponds to a malware host system of the Applicant.).
Regarding claim 6, Gupta discloses the method of claim 1 as recited above. Gupta also discloses the limitation of ‘wherein the infected backup is a most recent point-in-time of the production system’ ([Col. 7, line 21] The snapshot obtained and run on the remote device is the most recent point-in-time copy of the production system, as stated by the Applicant and the prior art.).
Regarding claim 7, Gupta discloses the method of claim 1 as recited above. Gupta also discloses the limitation of ‘wherein at least one of the working environments is configured with an emulated communication link that allows the malware to communicate with a simulated malware host system, such that the malware continues operating without detecting that it has been isolated or observed’ ([Col. 8, lines 27-35] Fig. 4, acquisition module 405, inside simulation module 210, itself inside malware detonation module 145, can obtain artefacts from a first device for a second device. The second device can use a controlled environment to simulate the first device, corresponding to a simulated malware host system of the Applicant. In [Col. 9, lines 55-65], malware execution module 415 allows malware to execute when it detects artefacts of the simulated first device, where the malware believes it is on the first device and is effectively communicating with a simulated malware host system; this corresponds to the emulated communication link and to at least one of the working environments of the Applicant.).
Regarding claim 8, Gupta discloses the method of claim 1 as recited above. Gupta also discloses the limitation of ‘further comprising detecting the malware in the production system or in an existing backup of the production system’ ([Col. 3, lines 47-50] A software agent detecting malware on a target machine corresponds to detecting malware in a production system of the Applicant.).
Regarding claim 9, Gupta discloses the method of claim 1 as recited above. Gupta also discloses the limitation of ‘wherein the data in the recovered production system comprises decoy data that mimics real data in structure, naming, and metadata, and appears authentic to the malware’ ([Col. 10, line 59-Col. 11, line 8] Artefacts can comprise files and directories of a file system, corresponding to structure and naming of real data, as files can contain names as well. Metadata of the file system is also included, corresponding to metadata of the Applicant. Artefacts correspond to decoy data of the Applicant.).
Regarding claim 10, Gupta discloses the method of claim 1 as recited above. Gupta also discloses the limitation of ‘further comprising providing each of the scenarios with false data structured with naming and metadata patterns expected by the malware based on the recovered production system’ ([Col. 3, lines 59-62] Malware detects the artefacts to induce one or more processes of the malware, believing to be executed on a targeted machine, that being a first device 105 of Fig. 1. The artefacts correspond to the false data that appears to be real to the malware.).
Regarding claim 11, Gupta in view of Huang teach limitations similar to those of independent claim 1 above. Gupta also discloses a non-transitory storage medium having stored therein instructions that are executable by one or more hardware processors to perform operations ([Col. 2, lines 28-35] A non-transitory storage medium includes instructions to be executed by one or more processors to perform the method of the prior art of Gupta.);
monitoring each working environment to collect scenario-specific information indicative of malware behavior ([Col. 4, lines 9-22] The sandbox runs a simulated environment of the first device and learns attributes of the malware by monitoring it. The attributes of the malware correspond to the collected scenario-specific information indicative of malware behavior.);
Regarding claim 12, Gupta in view of Huang teach the non-transitory storage medium of claim 11 as recited above. Gupta also discloses similar limitations present in claim 2 above.
Regarding claim 13, Gupta in view of Huang teach the non-transitory storage medium of claim 11 as recited above. Gupta also discloses similar limitations present in claim 3 above.
Regarding claim 14, Gupta in view of Huang teach the non-transitory storage medium of claim 11 as recited above. Gupta also discloses similar limitations present in claim 4 above.
Regarding claim 15, Gupta in view of Huang teach the non-transitory storage medium of claim 11 as recited above. Gupta also discloses similar limitations present in claim 5 above.
Regarding claim 16, Gupta in view of Huang teach the non-transitory storage medium of claim 11 as recited above. Gupta also discloses similar limitations present in claim 6 above.
Regarding claim 17, Gupta in view of Huang teach the non-transitory storage medium of claim 11 as recited above. Gupta also discloses similar limitations present in claim 7 above.
Regarding claim 18, Gupta in view of Huang teach the non-transitory storage medium of claim 11 as recited above. Gupta also discloses similar limitations present in claim 8 above.
Regarding claim 19, Gupta in view of Huang teach the non-transitory storage medium of claim 11 as recited above. Gupta also discloses the limitation of ‘further comprising generating the infected backup from the production system when the malware is detected’ ([Col. 3, lines 47-50] Generation of an infected snapshot occurs after the detection of malware in a production system, with the infected snapshot corresponding to the infected backup of the Applicant.).
Regarding claim 20, Gupta in view of Huang teach the non-transitory storage medium of claim 11 as recited above. Gupta also discloses similar limitations present in claim 9 above.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to TOMMY MARTINEZ whose telephone number is (703)756-5651. The examiner can normally be reached Monday thru Friday 8AM-4PM ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jorge L. Ortiz-Criado can be reached at (571) 272-7624 on Monday thru Friday, 7AM-7PM ET. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/T.M./Examiner, Art Unit 2496
/JORGE L ORTIZ CRIADO/Supervisory Patent Examiner, Art Unit 2496