DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 10/17/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes a claim limitation that does not use the word “means,” but is nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitation is: “A computing unit configured to verify ...” in claim 20.
Because this claim limitation is being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it is being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this limitation interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation to avoid it being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation recites sufficient structure to perform the claimed function so as to avoid it being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 12, 14-15, 17-18, and 20-21 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Aloisio et al., US-11652839-B1 (hereinafter “Aloisio ‘839”).
Per claim 12 (independent):
Aloisio ‘839 discloses: A method for verifying information technology security of a computing unit system which includes at least one computing unit (FIG. 1, [Col. 4], ll. 23-33, to perform a security assessment of an aviation system (verifying information technology security of a computing unit system) ... Aviation system 3 may include a plurality of systems (at least one computing unit), including sub-systems and components of such systems, of aircraft 1), the method comprising the following steps:
providing an information technology model of at least a part of the computing unit system;
carrying out at least one information security test attack on the model
(FIG. 1, [Col. 6], ll. 28-65, A system maintainer, administrator, security engineer, or user may install, create, import, and/or generate an attack tree model for aviation system 3 (providing an information technology model of at least a part of the computing unit system). Once the attack tree model (the model) is created, the user may schedule and/or run test procedures on aviation system 3 (carrying out at least one information security test attack on the model) ... Analysis computing system 2 may be configured to model risks using attack tree models, which may be developed to support systematic investigation of attack modalities (for carrying out at least one information security test attack) ... The assessments 29 that may be performed by analysis computing system 2 using attack tree models (the model) may include the use of static analysis tools, system state tests and monitoring, platform configuration tests, external probes, and/or function hooking and active monitors – examples of carrying out at least one information security test attack);
receiving data from the model that characterize a response of the model to the information security test attack (FIG. 2, [Col. 9], ll. 27-63, Attack tree module 10 (of the analysis computing system 2 in FIGS. 1 and 2; using the attack tree model(s) 18 of the analysis computing system 2 in FIG. 2) utilizes the information provided by test agents 12 based on the monitoring and assessment of systems, sub-systems, and/or components of aviation system 3. Using the information provided by import/export module 14 (through which receiving data from the model) and test agents 12, attack tree module 10 is capable of performing risk modeling and analysis operations (after receiving data from the model) to determine whether events are occurring, have occurred, potentially may occur, identify any potential vulnerabilities, risks, or malicious code (e.g., malware) (the information security test attack) associated with execution of processes in aviation system 3 ... The attack tree model design interface may enable the user to generate an attack tree model (that characterize a response of the model to the information security test attack), such as attack tree model(s) 18, for aviation system 3);
determining at least one parameter in accordance with the received data;
evaluating the at least one parameter and assessing the information security related security of the computing unit system in accordance with the evaluation
(FIG. 2, [Col. 9], ll. 27-63, Attack tree module 10 (of the analysis computing system 2; using the attack tree model(s) 18 of the analysis computing system 2 in FIG. 2) utilizes the information provided by test agents 12 based on the monitoring and assessment of systems, sub-systems, and/or components of aviation system 3 (the computing unit system). Using the information provided by import/export module 14 (in accordance with the received data from the model, such as the attack tree model(s) 18, which is illustrated in FIG. 3) and test agents 12 (determining at least one parameter; as will be described below with reference to FIG. 10, the test agents 12 determine the at least one parameter, from among parameters obtained via various analysis and monitoring tools, that corresponds to a selected attack tree model, that is, in accordance with the received data), attack tree module 10 is capable of performing risk modeling and analysis operations (evaluating the at least one parameter) to determine whether events are occurring, have occurred, potentially may occur, identify any potential vulnerabilities, risks, or malicious code (e.g., malware) associated with execution of processes in aviation system 3 (assessing the information security related security of the computing unit system in accordance with the evaluation); FIG. 10, [Col. 24], ll. 1-8, Test agents 12, as illustrated in FIG. 10, may include one or more static analysis tools 230, one or more system state monitors 232, one or more active monitors (e.g., function and/or API hooks), one or more platform configuration test modules 236, and one or more external probes 238. Test agents 12 are part of analysis computing system 2 – examples of determining at least one parameter).
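Examiner’s note (illustrative only): the following Python sketch is a minimal, hypothetical illustration of the claimed flow as mapped above: providing a model, carrying out a test attack on it, receiving data that characterize the model’s response, determining a parameter, and evaluating it. All names are the examiner’s own illustration and do not reproduce Aloisio ‘839’s implementation.

from dataclasses import dataclass, field

@dataclass
class AttackTreeNode:
    # A tree node corresponding to an attack event (cf. tree node 56,
    # "attack on stored data integrity," in Aloisio '839 FIG. 3).
    event: str
    children: list = field(default_factory=list)

def walk(node):
    yield node
    for child in node.children:
        yield from walk(child)

def run_test_attack(node):
    # Carry out one information security test attack on the model node and
    # return data characterizing the response (stubbed; a real assessment
    # would use static analysis tools, state monitors, or external probes).
    return {"event": node.event, "vulnerable": "integrity" in node.event}

def assess(model):
    responses = [run_test_attack(n) for n in walk(model)]          # received data
    vulnerability_count = sum(r["vulnerable"] for r in responses)  # the parameter
    return "at risk" if vulnerability_count > 0 else "no finding"  # the evaluation

model = AttackTreeNode("root compromise",
                       [AttackTreeNode("attack on stored data integrity")])
print(assess(model))  # prints "at risk"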
Per claim 14 (dependent on claim 12):
Aloisio ‘839 discloses the elements detailed in the rejection of claim 12 above, incorporated herein by reference.
Aloisio ‘839 discloses: The method according to claim 12, wherein the carrying out of the at least one information security test attack includes:
carrying out at least one actual information security attack on the model; and/or
implementing at least one attack effect of an actual information security attack in the model
(FIG. 1, [Col. 6], ll. 28-65, Once the attack tree model (the model) is created, the user may schedule and/or run test procedures on aviation system 3 (carrying out of the at least one information security test attack) ... Analysis computing system 2 may be configured to model risks using attack tree models, which may be developed to support systematic investigation of attack modalities (for carrying out at least one information security test attack); FIG. 2, [Col. 15], ll. 37-52, Attack tree module 10 may use group definition data 26 to instruct particular test agents 12 to gather data from particular assets ... Application 5 may perform automated evaluations and computations on attack tree models (carrying out at least one actual information security attack on the model), testing on-line to see whether particular vulnerabilities are present or known-weak configurations or libraries are in use, then computing metrics and costs based on component metrics – implementing at least one attack effect).
Per claim 15 (dependent on claim 14):
Aloisio ‘839 discloses the elements detailed in the rejection of claim 14 above, incorporated herein by reference.
Aloisio ‘839 discloses: The method according to claim 14, wherein the carrying out of the at least one information security test attack includes:
(i) extracting first attack information from a first database, in which information about a plurality of actual information security attacks is stored, and carrying out the at least one actual information security attack on the model in accordance with the extracted first attack information; and/or
(ii) extracting second attack effect information from a second database, in which information about a plurality of attack effects is stored, and implementing the at least one attack effect in the model in accordance with the extracted second attack effect information
(FIG. 1, [Col. 6], ll. 28-65, Once the attack tree model (the model) is created, the user may schedule and/or run test procedures on aviation system 3 (carrying out of the at least one information security test attack); FIG. 2, [Col. 9], ll. 6-27, application 5 may receive data from local knowledge base 16 and central knowledge base 28 (a first database) using import/export module 14 (extracting first attack information from a first database) ... Central knowledge base 28 may include data associated with common vulnerabilities to aviation systems and/or known attacks (information about a plurality of actual information security attacks is stored) that may be initiated against such systems. Much of the data included in central knowledge base 28 may include vendor- or community-provided data that is updated over time as more information becomes available; [Col. 15], ll. 37-52, Attack tree module 10 may use group definition data 26 to instruct particular test agents 12 to gather data from particular assets ... Application 5 may perform automated evaluations and computations on attack tree models (carrying out at least one actual information security attack on the model), testing on-line to see whether particular vulnerabilities are present or known-weak configurations or libraries are in use, then computing metrics and costs based on component metrics).
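Examiner’s note (illustrative only): a minimal Python sketch of the database-driven pattern mapped above, assuming a hypothetical table of known attacks; it does not reproduce central knowledge base 28 or import/export module 14.

import sqlite3

# Hypothetical first database of known attacks (stand-in for a knowledge base).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE known_attacks (name TEXT, target TEXT)")
db.executemany("INSERT INTO known_attacks VALUES (?, ?)",
               [("weak-TLS-configuration", "network"),
                ("known-weak-library", "software")])

# Extract the first attack information from the database ...
attacks = db.execute("SELECT name, target FROM known_attacks").fetchall()

# ... and carry out the test attack on the model in accordance with it (stubbed).
for name, target in attacks:
    print(f"running test attack '{name}' against model component '{target}'")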
Per claim 17 (dependent on claim 12):
Aloisio ‘839 discloses the elements detailed in the rejection of claim 12 above, incorporated herein by reference.
Aloisio ‘839 discloses: The method according to claim 12, further comprising:
carrying out a specified action on the computing unit system in accordance with the evaluation
(FIG. 2, [Col. 9], ll. 27-63, Attack tree module 10 may utilize graphical user interface module 8 to provide graphical representations, such as graphical representations of metrics of vulnerabilities and risks (in accordance with the evaluation), within a graphical user interface that is output to a user (e.g., analyst or technician). Based on the output provided by GUI module 8, a user may determine what corrective or preventive actions (carrying out a specified action) to take. In some examples, such actions may take place in a development process (e.g., modifying code or configuration information to mitigate or eliminate such vulnerabilities or risks), by updating software, making configuration changes, removing systems, sub-systems, and/or components of aviation system 3 (on the computing unit system), and so on).
Per claim 18 (dependent on claim 17):
Aloisio ‘839 discloses the elements detailed in the rejection of claim 17 above, incorporated herein by reference.
Aloisio ‘839 discloses: The method according to claim 17, wherein the carrying out of the specified action includes:
carrying out an update of one or more components of the computing unit system; and/or
modifying a configuration and/or settings of one or more components of the computing unit system; and/or
replacing one or more components of the computing unit system; and/or
adding one or more new components to the computing unit system
(FIG. 2, [Col. 9], ll. 27-63, Attack tree module 10 may utilize graphical user interface module 8 to provide graphical representations, such as graphical representations of metrics of vulnerabilities and risks, within a graphical user interface that is output to a user (e.g., analyst or technician). Based on the output provided by GUI module 8, a user may determine what corrective or preventive actions (the carrying out of the specified action) to take. In some examples, such actions may take place in a development process (e.g., modifying code or configuration information to mitigate or eliminate such vulnerabilities or risks), by updating software, making configuration changes, removing systems, sub-systems, and/or components of aviation system 3, and so on).
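Examiner’s note (illustrative only): a minimal Python sketch, with hypothetical finding names, of mapping an evaluation result to the corrective actions recited in claims 17-18 (updating, reconfiguring, replacing, or adding components).

def corrective_action(finding):
    # Map an evaluation finding to a specified action on the computing unit system.
    actions = {
        "outdated-software": "carry out an update of the affected component",
        "weak-configuration": "modify the configuration/settings of the component",
        "compromised-component": "replace the component",
        "missing-control": "add a new component (e.g., a monitor) to the system",
    }
    return actions.get(finding, "no action required")

print(corrective_action("weak-configuration"))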
Per claim 20 (independent):
The limitations of claim 20 correspond to features of claim 12, and the claim is rejected for the reasons detailed with respect to claim 12 above.
Per claim 21 (independent):
The limitations of claim 21 correspond to features of claim 12, and the claim is rejected for the reasons detailed with respect to claim 12 above.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 13 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Aloisio ‘839 in view of Biondi et al., US-20240362335-A1 (hereinafter “Biondi ‘335”).
Per claim 13 (dependent on claim 12):
Aloisio ‘839 discloses the elements detailed in the rejection of claim 12 above, incorporated herein by reference.
Aloisio ‘839 discloses: The method according to claim 12, wherein the providing of the information technology model includes:
events of an attack on stored data integrity
(FIG. 3, [Col. 18], l. 49 – [Col. 19], l. 9, an example attack tree model 50 (the providing of the information technology model) ... rectangular boxes correspond to tree nodes of a particular attack tree model of attack tree model(s) 18 ... a tree node 56 corresponds to the event of an attack on stored data integrity).
Aloisio ‘839 discloses that different types of “information technology models” can be readily incorporated by adding additional tree nodes to an attack tree model. Biondi ‘335 discloses: replicating a hardware of the computing unit system or at least the part of the computing unit system; and/or
replicating a current software status of the computing unit system or at least the part of the computing unit system; and/or
creating a virtual machine that simulates the computing unit system or at least the part of the computing unit system
(FIG. 1, [0025], a malware evaluation system in a networked environment; [0026], When a computer instruction sequence of interest (a current software status) is identified in smartphone 124 (the computing unit system), the antimalware module 128 sends the instruction sequence (i.e., replicating a current software status) to a malware evaluation system 102 for further investigation; FIG. 3, [0038], a method of group testing instruction sequence samples for malicious behavior ... At 302 ... by capturing instruction sequences (replicating a current software status of the computing unit system) that have suspicious characteristics or that are unknown in end user devices ... a virtual machine sandbox is launched for each of the groups (creating a virtual machine that simulates the computing unit system) at 306; [0039], At 308, each of the plurality of computer instruction sequences of interest are executed in their associated virtual machines sandboxes; [0040], At 310, such automated processes and/or malware researcher determine whether each of the groups has at least one instruction sequence of interest that is likely malicious).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified Aloisio ‘839 with the determination of whether each of the groups has at least one instruction sequence of interest, captured from end user devices, that is likely malicious by executing the sequences in a VM sandbox, as taught by Biondi ‘335, because it would help detect malware and allow other appropriate action to be taken in a safer environment such as a VM sandbox [0040], [0020]. Additionally, Biondi ‘335 is analogous to the claimed invention because it teaches a method of group testing instruction sequence samples for malicious behavior [0038].
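Examiner’s note (illustrative only): a minimal Python sketch of the group-testing pattern relied on from Biondi ‘335; the sandbox launch is stubbed with a naive marker check and all sample names are hypothetical.

def divide_into_groups(samples, group_size):
    # Divide captured instruction sequence samples into groups.
    return [samples[i:i + group_size] for i in range(0, len(samples), group_size)]

def run_in_sandbox(group):
    # Launch a virtual machine sandbox for the group and report whether any
    # sequence in it behaves maliciously (stubbed here).
    return any("suspicious" in sequence for sequence in group)

captured = ["benign-seq-1", "suspicious-seq-2", "benign-seq-3", "benign-seq-4"]
for group in divide_into_groups(captured, group_size=2):
    print(group, "likely malicious" if run_in_sandbox(group) else "clean")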
Per claim 19 (dependent on claim 12):
Aloisio ‘839 discloses the elements detailed in the rejection of claim 12 above, incorporated herein by reference.
Aloisio ‘839 does not disclose but Biondi ‘335 discloses: The method according to claim 12, wherein the evaluating of the at least one parameter includes a comparison of the at least one parameter with at least one threshold value (FIG. 1, [0027], When it is time to evaluate the gathered instruction sequence samples 118, such as when a threshold time (at least one threshold) since the last evaluations has passed or a certain number of instruction sequence samples (at least one threshold) have been received (a comparison of the at least one parameter with at least one threshold value), the instruction sequence samples are divided into groups).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified Aloisio ‘839 with the determination of whether each of the groups has at least one instruction sequence of interest, captured from end user devices, that is likely malicious, by dividing instruction sequence samples into groups to be monitored based on the time elapsed or the number of sequence samples, as taught by Biondi ‘335, because the system would improve computational efficiency and speed [0031].
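Examiner’s note (illustrative only): a minimal Python sketch of the threshold comparison relied on from Biondi ‘335; both threshold values are assumed for illustration only.

import time

TIME_THRESHOLD_S = 3600   # assumed: evaluate once an hour has passed
COUNT_THRESHOLD = 100     # assumed: or once 100 samples have been received

def should_evaluate(last_evaluation, sample_count, now):
    # Compare the parameters (elapsed time, sample count) with the thresholds.
    return (now - last_evaluation) >= TIME_THRESHOLD_S or sample_count >= COUNT_THRESHOLD

print(should_evaluate(last_evaluation=0.0, sample_count=5, now=time.time()))  # True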
Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Aloisio ‘839 in view of Schwartau, US-20190312906-A1 (hereinafter “Schwartau ‘906”).
Per claim 16 (dependent on claim 12):
Aloisio ‘839 discloses the elements detailed in the rejection of claim 12 above, incorporated herein by reference.
Aloisio ‘839 discloses: The method according to claim 12, wherein the at least one parameter includes one or more parameters (FIG. 2, [Col. 9], ll. 27-63, attack tree module 10 is capable of performing risk modeling and analysis operations (evaluating the at least one parameter) to determine whether events are occurring, have occurred, potentially may occur, identify any potential vulnerabilities, risks, or malicious code (e.g., malware) associated with execution of processes in aviation system 3; FIG. 10, [Col. 24], ll. 1-8, Test agents 12, as illustrated in FIG. 10, may include one or more static analysis tools 230, one or more system state monitors 232, one or more active monitors (e.g., function and/or API hooks), one or more platform configuration test modules 236, and one or more external probes 238. Test agents 12 are part of analysis computing system 2 – examples of determining at least one parameter among one or more parameters).
Aloisio ‘839 indicates the inclusion of one or more parameters but does not specifically teach the following parameters. Schwartau ‘906, however, teaches: one or more of the following parameters (an illustrative computation of these parameters follows this rejection):
an attack success rate, which describes a ratio of a number of successful test attacks to a total number of test attacks;
a resistance value, which describes a ratio of a number of unsuccessful test attacks to a total number of test attacks;
a recovery time, which describes a time interval between a first point in time at which the carrying out of a particular test attack is started and a second point in time at which a state of the model is recovered which corresponds to an initial state prior to the carrying out of the particular test attack
([0136], compare two or more security components. For example, both security components can be provided traffic, which includes the test traffic (test attacks) imitating an attack. The respective security components can be evaluated for an amount of time required to detect the attack (e.g., for delay times required for the security component), a success rate (an attack success rate) in detecting the attack (e.g., for the trust factor for the security component), and/or the like. Such information can be used to configure a system including the two or more security components).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified Aloisio ‘839 with the evaluation of a success rate in detecting attacks for security components in order to configure a system, as taught by Schwartau ‘906, because the system would enable effective configuration of security components, thereby improving system security and operational efficiency [0136]. Additionally, Schwartau ‘906 is analogous to the claimed invention because it teaches that embodiments of the system described therein can be used to measure the effectiveness of a security component [0135].
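Examiner’s note (illustrative only): the three parameters recited in claim 16 are simple ratios and a time interval; the following Python sketch computes them from hypothetical test-attack counts and timestamps and does not reproduce Schwartau ‘906’s implementation.

def attack_success_rate(successful, total):
    # Ratio of the number of successful test attacks to the total number.
    return successful / total

def resistance_value(unsuccessful, total):
    # Ratio of the number of unsuccessful test attacks to the total number.
    return unsuccessful / total

def recovery_time(attack_start, recovered_at):
    # Interval between the start of a test attack and the time at which the
    # model returns to its initial, pre-attack state.
    return recovered_at - attack_start

total, successful = 20, 4
print(attack_success_rate(successful, total))               # 0.2
print(resistance_value(total - successful, total))          # 0.8
print(recovery_time(attack_start=10.0, recovered_at=14.5))  # 4.5 time units

Note that when every test attack is counted as either successful or unsuccessful, the attack success rate and the resistance value sum to one.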
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SANGSEOK PARK whose telephone number is (571)272-4332. The examiner can normally be reached Monday-Friday 7:30-5:30 and Alternate Fridays 9:00 am-5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, PHILIP CHEA can be reached at (571)272-3951. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SANGSEOK PARK/Primary Examiner, Art Unit 2499