DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This Office Action is in response to the communication filed on 12/24/2022.
Claims 1-20 are pending.
Examiner's Note
Paragraph [0032] of the specification states: “A computer readable storage medium, as that term is used in the present disclosure, is not to be construed as storage in the form of transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide, light pulses passing through a fiber optic cable, electrical signals communicated through a wire, and/or other transmission media.” In light of this disclosure, the term “a computer readable storage medium” as recited in claim 16 has been interpreted as covering only non-transitory computer readable storage media.
Claim Objections
Claims 11 and 16 are objected to because of the following informalities:
The limitation “a network interface that connects the local device to one or more remote web sites; and” as recited in claim 11 appears to be redundant or a typographical error and should be deleted.
“A computer program product for assessing security information and event management (SIEM) environments comprising stored in a computer readable storage medium, comprising computer program code that, when executed by the computer program product, performs actions comprising:” as recited in claim 16 should read “A computer program product for assessing security information and event management (SIEM) environments comprising a computer readable storage medium, the computer readable storage medium comprising computer program code that, when executed by the computer program product, performs actions comprising:”.
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 9-10 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA applications, the applicant) regards as the invention.
The terms “bad”, “good”, and “excellent” as recited in claim 9 are relative terms which render the claim indefinite. The terms are not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. In this case, the meaning of the terms “bad”, “good”, and “excellent” varies depending on the context, the situation, and/or the individuals involved. Thus, claim 9 is indefinite. Claim 10 is rejected for inheriting the deficiency of the claim from which it depends.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3-4, 9, 11, 13-14, 16, and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Bruskin (US 8,782,784) in view of Hutelmyer et al. (US 2022/0311795) further in view of Street (US 2020/0327221).
Claim 1, Bruskin teaches:
A method for assessing effectiveness of security information and event management (SIEM) environments comprising:
receiving, from the TDI component, TDI performance scores for each used rule by the SPEAR tool; (e.g., col. 4-5, “SIEM framework 50 includes eight components: business alignment 52, asset scoping 54, metrics & personnel 56, asset management 58, infrastructure management 60, SIEM operations 62, governance & compliance 64, and key performance indicators 66. It should be understood that a decision to move beyond the first phase of the SIEM implementation program is based on an overall score aggregated from the scores associated with each of the eight components. In some arrangements, the overall score is an average of the scores associated with the eight components. In other arrangements, the overall score is a weighted average; in still other arrangements, the overall score is the minimum score associated with the eight components. It should also be understood that the score associated with each component is an aggregate of scores associated with five categories: vision and strategy, people, infrastructure, management, and execution…The score of the component, in some arrangements, is an average of the scores of each category. Details concerning each of the eight components will be discussed below with respect to FIGS. 4(a)-4(h)…It should be understood that the score assigned to each of the categories of the subcomponents is, in some arrangements, based on a standard maturity model such as the Capability Maturity Model.RTM.. Further details of the Capability Maturity Model.RTM. are described below with respect to FIG. 5” col. 16, “Data in use requirements subcomponent 142 includes the five categories described above. The vision and strategy category is scored a 1 when there is recognition that SIEM content can be sensitive, a 2 when a conscious effort has been made to determine whether logs need to be protected while in use, a 3 when a requirement states that raw information should or should not be protected, a 4 when data in use strategy exists and is verbally communicated, and a 5 when data in use strategy is in place. The people category is scored a 1 when someone knows about the data in use policy and how it applies to SIEM, a 2 when data in use is communicated that it meets enterprise requirements, a 3 when there is enterprise awareness of sensitive data resting, a 4 when there are controls or checks on classification of data sets prior to retrieval, and a 5 when the enterprise continues to improve data security through log separation. The infrastructure category is scored a 1 when data is protected by limited access to the machine, data in, and data out, a 2 when data is protected in some form from end users accessing it, a 3 when data meets or exceeds in motion requirements through a single effort, a 4 when data is protected while in use, routinely reviewed, and checked to ensure it meets compliance, and a 5 when data is protected via key rotation, thoroughly documented, and backup system in place. The management category is scored a 1 when management is aware of data in use policy within the enterprise, a 2 when management has applied data in use requirements to SIEM and log management properties, a 3 when management enforces and follows data in use policy, a 4 when management is able to repeat how the data is classified within SIEM as to how the data is protected in use, and a 5 when management documents and publishes data in use. The execution category is scored 0-5 based on the quality of the execution of the tasks”)
receiving, from the TDI component, TDI quality scores for the each used rule by the SPEAR tool; (e.g., col. 4-5, “SIEM framework 50 includes eight components: business alignment 52, asset scoping 54, metrics & personnel 56, asset management 58, infrastructure management 60, SIEM operations 62, governance & compliance 64, and key performance indicators 66. It should be understood that a decision to move beyond the first phase of the SIEM implementation program is based on an overall score aggregated from the scores associated with each of the eight components. In some arrangements, the overall score is an average of the scores associated with the eight components. In other arrangements, the overall score is a weighted average; in still other arrangements, the overall score is the minimum score associated with the eight components. It should also be understood that the score associated with each component is an aggregate of scores associated with five categories: vision and strategy, people, infrastructure, management, and execution…The score of the component, in some arrangements, is an average of the scores of each category. Details concerning each of the eight components will be discussed below with respect to FIGS. 4(a)-4(h)…It should be understood that the score assigned to each of the categories of the subcomponents is, in some arrangements, based on a standard maturity model such as the Capability Maturity Model.RTM.. Further details of the Capability Maturity Model.RTM. are described below with respect to FIG. 5” col. 14, “Asset collection standards subcomponent 134 includes the five categories described above. The vision and strategy category is scored a 1 when a vision exists to set log collection standards, a 2 when a vision has been verbally discussed, a 3 when a strategy exists on how to collect all information within the enterprise, a 4 when there is a documented vision on standards for log collection, and a 5 when asset collection standards are reviewed yearly and determined best of breed approach. The people category is scored a 1 when someone knows about the consistent plan on collecting information, a 2 when some people know and understand the importance of central log management, a 3 when people are able to communicate to end users about the definition of log collection and integration methods, a 4 when there is regular communication for collection standards and types, and a 5 when users always refer back to the capabilities and collection types the enterprise has defined. The infrastructure category is scored a 1 when a how-to document exists on how logs are collected, a 2 when a published document exists within the internal organization on protocol and log management integrations, a 3 when people are automatically forced to configure their devices according to collection standards, a 4 when end users are ensured of following collection standards, and a 5 when all key stakeholders sign off and approve. The management category is scored a 1 when management is aware of need to have consistency in collection integrations, a 2 when management enforces users to integrate to ensure compliance with enterprise objectives, a 3 when management is involved in key decisions through some document, a 4 when management communicates to business the standards that the SIEM team can collect logs and defines them to fellow management, and a 5 when management and key stakeholders look for optimization and review annually. 
The execution category is scored 0-5 based on the quality of the execution of the tasks”)
determining, by the SPEAR tool, an availability score, a performance score, and a quality score from the rule status information, the log source status information, the TDI performance scores, and the TDI quality scores; and (e.g., col. 4-5, “SIEM framework 50 includes eight components: business alignment 52, asset scoping 54, metrics & personnel 56, asset management 58, infrastructure management 60, SIEM operations 62, governance & compliance 64, and key performance indicators 66. It should be understood that a decision to move beyond the first phase of the SIEM implementation program is based on an overall score aggregated from the scores associated with each of the eight components. In some arrangements, the overall score is an average of the scores associated with the eight components. In other arrangements, the overall score is a weighted average; in still other arrangements, the overall score is the minimum score associated with the eight components. It should also be understood that the score associated with each component is an aggregate of scores associated with five categories: vision and strategy, people, infrastructure, management, and execution…The score of the component, in some arrangements, is an average of the scores of each category. Details concerning each of the eight components will be discussed below with respect to FIGS. 4(a)-4(h)…It should be understood that the score assigned to each of the categories of the subcomponents is, in some arrangements, based on a standard maturity model such as the Capability Maturity Model.RTM.. Further details of the Capability Maturity Model.RTM. are described below with respect to FIG. 5” col. 14, “Asset collection standards subcomponent 134 includes the five categories described above…Data classification subcomponent 136 includes the five categories described above. The vision and strategy category is scored a 1 when there is recognition that log could contain sensitive information, a 2 when a conscious effort has been made to determine whether logs should be viewed by certain people, a 3 when a data classification policy exists within the organization, a 4 when management approves the vision and strategy, and a 5 when data classification policy is applicable and applied to the SIEM. The people category is scored a 1 organizational awareness of sensitive data exists, a 2 when data classification and sensitivity exists within a security team, a 3 when someone knows about the data classification policy and how it applies to SIEM, a 4 when there are controls or checks on classification of data sets, and a 5 when the enterprise continues to improve data classification through log separation. The infrastructure category is scored a 1 when some form of data classification exists within the enterprise, a 2 when some form of data classification exists within SIEM, a 3 when data classification determines transport protocol, a 4 when a technical audit is in place to ensure that collection of sensitive data is protected, and a 5 when the enterprise is capable of reporting on sensitive log information. The management category is scored a 1 when management is aware of data classification policy within the enterprise, a 2 when management has applied data classification requirements to SIEM and log management properties, a 3 when management enforces and follows data classification policy, a 4 when management is able to repeat how the data is classified within SIEM, and a 5 when management documents and publishes data classification. 
The execution category is scored 0-5 based on the quality of the execution of the tasks” col. 16-17, “Data in use requirements subcomponent 142 includes the five categories described above…Security log audit subcomponent 144 includes the five categories described above. The vision and strategy category is scored a 1 when there is a concept of auditing to ensure log elements are present, a 2 when a vision exists to ensure correct log data, a 3 when the vision has been documented, a 4 when log auditing is managed and measureable via the vision, and a 5 when routine optimization is present. The people category is scored a 1 when there is a concept of people auditing to ensure log elements are present, a 2 when the SIEM team understands the importance of ensuring correct log data, a 3 when a process is in place to ensure awareness of log elements in the log stream, a 4 when log element auditing is communicated at all levels, and a 5 when continual optimization is present. The infrastructure category is scored a 1 when there is capability to review raw log data inside the system, a 2 when capability exists to map requirement elements to what is in SIEM tool, a 3 when capability exists to report on corrupted log data incoming into the SIEM, a 4 when automation exists to ensure log elements are present inside the log stream, and a 5 when optimization has been considered to ensure log element auditing. The management category is scored a 1 when management is aware of the need to ensure log element auditing, a 2 when management provides resources to audit effectiveness of logging, a 3 when management receives reports on logging posture, a 4 when there is trending capability to ensure that logging posture is acceptable, and a 5 when continual improvement from a management perspective is present. The execution category is scored 0-5 based on the quality of the execution of the tasks”)
determining, by the SPEAR tool, SPEAR from the availability score, the performance score, and the quality score. (e.g., col. 4-5, “SIEM framework 50 includes eight components: business alignment 52, asset scoping 54, metrics & personnel 56, asset management 58, infrastructure management 60, SIEM operations 62, governance & compliance 64, and key performance indicators 66. It should be understood that a decision to move beyond the first phase of the SIEM implementation program is based on an overall score aggregated from the scores associated with each of the eight components. In some arrangements, the overall score is an average of the scores associated with the eight components. In other arrangements, the overall score is a weighted average; in still other arrangements, the overall score is the minimum score associated with the eight components. It should also be understood that the score associated with each component is an aggregate of scores associated with five categories: vision and strategy, people, infrastructure, management, and execution…The score of the component, in some arrangements, is an average of the scores of each category. Details concerning each of the eight components will be discussed below with respect to FIGS. 4(a)-4(h)…It should be understood that the score assigned to each of the categories of the subcomponents is, in some arrangements, based on a standard maturity model”)
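For clarity of the record, the aggregation Bruskin describes (an overall score taken as the average, a weighted average, or the minimum of the eight component scores, where each component score is the average of five category scores of 0-5) may be summarized in the following illustrative sketch; all identifiers are hypothetical and do not appear in Bruskin:

    # Illustrative sketch of Bruskin's scoring aggregation; all names are hypothetical.
    # Each component score averages five category scores (0-5): vision & strategy,
    # people, infrastructure, management, and execution.
    def component_score(category_scores: list[float]) -> float:
        """Average the five category scores (0-5) for one component."""
        return sum(category_scores) / len(category_scores)

    def overall_score(component_scores: list[float], mode: str = "average",
                      weights: list[float] | None = None) -> float:
        """Aggregate the eight component scores as Bruskin describes: a plain
        average, a weighted average, or the minimum of the components."""
        if mode == "average":
            return sum(component_scores) / len(component_scores)
        if mode == "weighted" and weights is not None:
            return sum(w * s for w, s in zip(weights, component_scores)) / sum(weights)
        if mode == "minimum":
            return min(component_scores)
        raise ValueError("unknown aggregation mode")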
Bruskin teaches a threat detection insight (TDI) component, rule status information, and a SIEM production effectiveness assessment report (SPEAR) tool (see above) but does not appear to explicitly teach the following, which Hutelmyer teaches:
receiving, from a threat detection insight (TDI) component, a rule status information by a SIEM production effectiveness assessment report (SPEAR) tool wherein the rule status information includes a number of used rules and a number of unused rules. (e.g., [0043], “The stream of network logs can be transmitted from the network devices 108A-N to the network analysis system 104 (D)” [0044], “Once the network analysis system 104 receives the network logs, the system 104 can access rules (e.g., detection signatures)…Which rules are used for detection can be determined based on settings that are configured by the users…The users can set rules as active or inactive, and as test or “live.” The system 104 can access rules specific to the stream of network logs. The system 104 can also access rules that can be applied to one or more security information management (STEM) operations performed by the network analysis system 104” [0045], “Using the rules, the network analysis system 104 can analyze the network logs (F). Analyzing the network logs can include comparing a currently selected log event to a complete set of active rules and looking for logical matches between the two. Analyzing the network logs can include determining whether any of the rules are triggered by activity in the logs. Triggered rules can be identified by the network analysis system 104. Notification of such triggers can be transmitted to the alerting system 106 (F)”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings described by Hutelmyer into the invention of Bruskin, and the motivation for such an implementation would be for the purpose of ensuring a system responds to ever-changing network security events or threats (Hutelmyer [0005]-[0006]).
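As context for the cited passages, the rule handling Hutelmyer describes (rules configured by users as active or inactive, with active rules compared against incoming log events and triggered rules reported) may be sketched as follows; all identifiers are hypothetical and do not appear in Hutelmyer:

    # Hypothetical sketch of the log analysis Hutelmyer describes: only rules
    # set "active" are applied, and logical matches against a log event are
    # collected for alerting. All names are illustrative.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Rule:
        name: str
        active: bool                     # users may set rules active or inactive
        matches: Callable[[dict], bool]  # predicate over one log event

    def analyze_logs(logs: list[dict], rules: list[Rule]) -> list[str]:
        """Compare each log event to the complete set of active rules and
        report which rules were triggered."""
        alerts = []
        for event in logs:
            for rule in rules:
                if rule.active and rule.matches(event):
                    alerts.append(f"rule {rule.name} triggered by event {event.get('id')}")
        return alerts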
Bruskin-Hutelmyer teaches the TDI component and the SPEAR tool (see above) but does not appear to explicitly teach the following, which Street teaches:
receiving, from a TDI component, a log source status information by a SPEAR tool wherein the log source status includes a total number of log sources. (e.g., [0059], “log sources and/or components in a network may be identified…SIEM may receive data log from the log sources” [0064], “the IR team may review the log sources in a Log Inventory Tracking Sheet (LITS)” claim 1, “identifying a plurality of log sources in a network; receiving log data from each of the plurality of log sources; for each log source, generating a log quality index value”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings described by Street into the invention of Bruskin-Hutelmyer, and the motivation for such an implementation would be for the purpose of determining whether a log source is providing value to a security investigation, and quantifying a value of an individual log source sent to a SIEM to enable security analysts to move from intuitive, subjective assessments of logs towards one which quantitatively measures a log quality which provides a useful communication tool between security engineers and security analysts in determining which logs should be captured, relied upon and then maintained for incident response and feedback (Street [0005], [0010]).
Claim 3, Bruskin-Hutelmyer-Street teaches:
wherein the SPEAR is a value. (e.g., Bruskin col. 4, l. 40 to col. 5, l. 26)
Claim 4, Bruskin-Hutelmyer-Street teaches:
wherein the value is between 0 and 100. (e.g., Bruskin col. 4, l. 40 to col. 5, l. 26)
Claim 9, Bruskin-Hutelmyer-Street teaches:
wherein the SPEAR is broken into subranges of the value and wherein a first subrange of 0 to x is bad, a second subrange from x+1 to y is needs improvement, a third subrange from y+1 to z is good, and a fourth subrange from z+1 to 100 is excellent. (e.g., Bruskin fig. 5, col. 5 ll. 27-43)
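For reference, the claimed subranges amount to a simple threshold lookup over the SPEAR value; in the sketch below, X, Y, and Z stand for the claim's unspecified boundaries x, y, and z, and the values shown are placeholders only:

    # Illustrative reading of the claim 9 subranges. X, Y, and Z stand for the
    # claim's unspecified boundaries x, y, and z; the values are placeholders.
    X, Y, Z = 25, 50, 75

    def spear_rating(value: int) -> str:
        """Map a SPEAR value in [0, 100] to the claimed qualitative subranges."""
        if value <= X:
            return "bad"                 # first subrange: 0 to x
        if value <= Y:
            return "needs improvement"   # second subrange: x+1 to y
        if value <= Z:
            return "good"                # third subrange: y+1 to z
        return "excellent"               # fourth subrange: z+1 to 100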
Claim 11, this claim is directed to a system containing similar limitations as recited in claim 1 and is rejected using the same rationale to combine the references. The one or more processors, the memory coupled to at least one of the processors, and the set of computer program instructions stored in the memory and executed by at least one of the processors to perform the actions are taught by Bruskin (col. 25, ll. 4-24).
Claim 13, this claim is directed to a system containing similar limitations as recited in claim 3 and is rejected using the same rationale to combine the references.
Claim 14, this claim is directed to a system containing similar limitations as recited in claim 4 and is rejected using the same rationale to combine the references.
Claim 16, this claim is directed to a computer program product containing similar limitations as recited in claim 1 and is rejected using the same rationale to combine the references. The computer readable storage medium comprising computer program code that, when executed by the computer program product, performs the actions is taught by Bruskin (col. 25, ll. 4-24).
Claim 18, this claim is directed to a computer program product containing similar limitations as recited in claim 3 and is rejected using the same rationale to combine the references.
Claim 19, this claim is directed to a computer program product containing similar limitations as recited in claim 4 and is rejected using the same rationale to combine the references.
Claims 2, 12, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Bruskin (US 8,782,784) in view of Hutelmyer et al. (US 2022/0311795) in view of Street (US 2020/0327221) further in view of Kosaka et al. (US 2020/0218798).
Claim 2, Bruskin-Hutelmyer-Street teaches wherein the rule status information includes a number of active rules and a number of passive rules (e.g., Hutelmyer [0044]) but does not appear to explicitly teach the following, which Kosaka teaches:
a number of disabled rules and wherein a number of used rules is calculated by adding a number of active rules to a number of passive rules and wherein a number of unused rules is the number of disabled rules and wherein a total number of rules is calculated by adding the number of used rules to the number of unused rules. (e.g., [0096])
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings described by Kosaka into the invention of Bruskin-Hutelmyer-Street, and the motivation for such an implementation would be for the purpose of automatically deploying application security policy and monitoring security risks amongst application containers (Kosaka [0001]-[0003]).
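The rule-count arithmetic recited in claim 2 reduces to three sums, sketched below for reference; the identifiers are hypothetical:

    # Claim 2's rule-count arithmetic; all names are hypothetical.
    def rule_counts(active: int, passive: int, disabled: int) -> tuple[int, int, int]:
        used = active + passive   # used rules = active rules + passive rules
        unused = disabled         # unused rules = disabled rules
        total = used + unused     # total rules = used rules + unused rules
        return used, unused, total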
Claim 12, this claim is directed to a system containing similar limitations as recited in claim 2 and is rejected using the same rationale to combine the references.
Claim 17, this claim is directed to a computer program product containing similar limitations as recited in claim 2 and is rejected using the same rationale to combine the references.
Claims 5, 15, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Bruskin (US 8,782,784) in view of Hutelmyer et al. (US 2022/0311795) in view of Street (US 2020/0327221) further in view of Grondin et al. (US 2015/0326601).
Claim 5, Bruskin-Hutelmyer-Street teaches the value, the availability score, the performance score, and the quality score (see above) but does not appear to explicitly teach the following, which Grondin teaches:
wherein a value is a product of an availability score, a performance score, and a quality score. (e.g., [0074], [0147], [0186])
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings described by Grondin into the invention of Bruskin-Hutelmyer-Street, and the motivation for such an implementation would be for the purpose of protecting sensitive data (Grondin [0003]).
Claim 15, this claim is directed to a system containing similar limitations as recited in claim 5 and is rejected using the same rationale to combine the references.
Claim 20, this claim is directed to a computer program product containing similar limitations as recited in claim 5 and is rejected using the same rationale to combine the references.
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Bruskin (US 8,782,784) in view of Hutelmyer et al. (US 2022/0311795) in view of Street (US 2020/0327221) further in view of Corl, Jr. et al. (US 2002/0143724).
Claim 6, Bruskin-Hutelmyer-Street teaches the availability score, the number of used rules, and the total number of rules (see above) but does not appear to explicitly teach the following, which Corl teaches:
deriving an availability score from a number of used rules divided by a total number of rules. (e.g., [0049])
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings described by Corl into the invention of Bruskin-Hutelmyer-Street, and the motivation for such an implementation would be for the purpose of maximizing the rate at which a rules database is applied to network packets or frames (Corl [0014]).
Claims 7-8 are rejected under 35 U.S.C. 103 as being unpatentable over Bruskin (US 8,782,784) in view of Hutelmyer et al. (US 2022/0311795) in view of Street (US 2020/0327221) in view of Corl, Jr. et al. (US 2002/0143724) further in view of Dargude et al. (US 11,301,568).
Claim 7, Bruskin-Hutelmyer-Street-Corl teaches the quality score, the TDI quality scores, and the total number of used rules (see above) but does not appear to explicitly teach the following, which Dargude teaches:
deriving a quality score from a sum of TDI quality scores divided by a total number of used rules. (e.g., col. 2, ll. 9-23; col. 10, ll. 1-4 and 25-30)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings described by Dargude into the invention of Bruskin-Hutelmyer-Street-Corl, and the motivation for such an implementation would be for the purpose of protecting against risks effectively (Dargude col. 1).
Claim 8, Bruskin-Hutelmyer-Street-Corl teaches the performance score, the TDI performance scores, and the number of used rules (see above) but does not appear to explicitly teach the following, which Dargude teaches:
deriving a performance score from a sum of TDI performance scores divided by a number of used rules. (e.g., col. 2, ll. 9-23; col. 10, ll. 1-4 and 25-30)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings described by Dargude into the invention of Bruskin-Hutelmyer-Street-Corl, and the motivation for such an implementation would be for the purpose of protecting against risks effectively (Dargude col. 1).
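Read together, claims 5-8 recite a closed-form computation: an availability score equal to the number of used rules divided by the total number of rules (claim 6), a quality score equal to the sum of the TDI quality scores divided by the number of used rules (claim 7), a performance score equal to the sum of the TDI performance scores divided by the number of used rules (claim 8), and a SPEAR value equal to the product of the three scores (claim 5). A minimal sketch with hypothetical identifiers:

    # Combined reading of claims 5-8; all names are hypothetical.
    def spear_value(used_rules: int, total_rules: int,
                    tdi_quality: list[float], tdi_performance: list[float]) -> float:
        availability = used_rules / total_rules          # claim 6
        quality = sum(tdi_quality) / used_rules          # claim 7
        performance = sum(tdi_performance) / used_rules  # claim 8
        return availability * performance * quality      # claim 5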
Allowable Subject Matter
Claim 10 would be allowable if rewritten (1) in independent form including all of the limitations of the base claim and any intervening claims and (2) to overcome the rejection under 35 U.S.C. 112(b) set forth in this Office action.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: US 12,547,727 teaches techniques for generating security framework information for SIEM systems.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AMIE C LIN whose telephone number is (571)272-7752. The examiner can normally be reached M-F 9:00AM -5:00PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, GELAGAY SHEWAYE can be reached at (571)272-4219. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AMIE C. LIN/Primary Examiner, Art Unit 2436