Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
This communication is in response to the amendment filed on 12/11/2025. The Examiner acknowledges amended claims 1-18. No claims have been cancelled or added. Claims 1-18 are pending and rejected. Claims 1, 7, and 13 are independent.
The rejection(s) of claims 13-18 under 35 U.S.C. § 101 as reading on a transitory form of signal transmission are withdrawn in view of Applicant's amendments to include “non-transitory”.
The rejection(s) of claims 1, 3-7, 9-13, and 15-18 under 35 U.S.C. § 101 are maintained as being directed to an abstract idea without significantly more.
The rejections of the claims under 35 U.S.C. § 103 are maintained as indicated below.
Response to Arguments
Applicant's arguments filed 12/11/2025 have been fully considered but they are not persuasive. Claim 1 continues to be rejected under 35 U.S.C. § 101 as being directed to an abstract idea without significantly more, and is still rejected by the combination of Salajegheh et al. U.S. Publication 20160277435 (hereinafter “Salajegheh”) in view of Muddu et al. U.S. Patent No. 9516053 (hereinafter “Muddu”).
Regarding claim 1, applicant argues (see Remarks, page 6, bottom four paragraphs) that:
Without conceding the propriety of the rejections, Applicant has amended independent claims 1, 7, and 13 to incorporate aspects of claim 2, which was not rejected under 35 U.S.C. § 101.
Additionally, the Office Action rejected claims 13-18 under 35 U.S.C. § 101 because they recite "processor-readable medium" and the broadest reasonable interpretation may include a transitory form of signal transmission.
Applicant has amended claims 13-18 to recite "non-transitory," as recommended in page 8 of the Office Action. Accordingly, Applicant submits that amended claims 13-18 are directed to statutory subject matter.
In view of the foregoing, Applicant requests that the rejections of claims 1, 3-7, and 9-18 under 35 U.S.C. § 101 be withdrawn.
Examiner respectfully disagrees. Examiner submits that claims 1, 3-7, 9-13, and 15-18 remain rejected under 35 U.S.C. § 101 as being directed to an abstract idea without significantly more. The amended claim 1 incorporates only the preamble of claim 2, not the entire claim. Claim 1 would need to incorporate all of the limitations of claim 2 to provide more detail regarding how the execution data is generated. The type or source of data is immaterial if the active step remains merely accessing the data. The accessing step is not necessarily the one performing the instrumenting; under the broadest reasonable interpretation, the accessing step can be interpreted as merely accessing a data source storing data that an instrumentation tool generated and stored. Claim 1 should incorporate all the limitations of claim 2 so that the claim does not recite generally applying the abstract idea without placing any limits on how the trained ML model functions. The limitations of claim 1 recite only the outcome of “processing the data” and “providing an indication of whether the at least one OSS component exhibits normal behavior or exhibits potential threat behavior” and do not include any details about how the “processing” and “providing” are accomplished. See MPEP 2106.05(f). For this reason, the rejection under 35 U.S.C. § 101 is maintained.
Regarding claim 1, applicant argues (see Remarks, page 7, second paragraph through page 9, bottom paragraph) that:
Applicant submits that Salajegheh and Muddu fail to teach or suggest every feature of amended independent claim 1, as discussed below.
Salajegheh relates to "device-specific classifiers in a privacy-preserving behavioral monitoring and analysis system for crowd-sourcing of device behaviors." (Salajegheh, Abstract.) Salajegheh does not mention open source at all. Accordingly, Salajegheh fails to teach or suggest "the at least one instrumented OSS component is instrumented by an instrumentation tool" or "providing an indication of whether the at least one instrumented OSS component exhibits normal behavior or exhibits potential threat behavior," as recited in amended independent claim 1.
Muddu relates to "techniques and mechanisms to detect security related anomalies and threats in a computer network environment." (Muddu, Abstract.) Muddu mentions open source or open-source in the following excerpts:
[citations omitted]
As shown by the excerpts above, each mention of open source or open-source in Muddu refers to use of open source software to implement Muddu's security system. Significantly, nothing in Muddu teaches or suggests instrumenting open source components, and nothing in Muddu suggests providing an indication of whether instrumented OSS components exhibit normal behavior or exhibit potential threat behavior. Accordingly, Applicant submits that Muddu also fails to teach or suggest "the at least one instrumented OSS component is instrumented by an instrumentation tool" or "providing an indication of whether the at least one instrumented OSS component exhibits normal behavior or exhibits potential threat behavior," as recited in amended independent claim 1.
Furthermore, Applicant submits that it would not be obvious to provide instrumented OSS components. Aspects of open source software are well recognized:
Open-source software (OSS) is computer software that is released under a license in which the copyright holder grants users the rights to use, study, change, and distribute the software and its source code to anyone and for any purpose.[1][2] Open-source software may be developed in a collaborative, public manner. Open-source software is a prominent example of open collaboration, meaning any capable user is able to participate online in development, making the number of possible contributors indefinite. The ability to examine the code facilitates public trust in the software.[3]
See, for example, Exhibit A, which is provided as evidence under 37 CFR § 1.132. As discussed in Exhibit A, OSS components are developed in a collaborative, public manner and any user is able to participate in developing and examining the code. The public nature of OSS components facilitates public trust in OSS components. Because of such public nature and public trust, Applicant submits there is no motivation to instrument an OSS component and determine whether an instrumented OSS component exhibits normal behavior or exhibits potential threat behavior. Indeed, the cited references do not teach or suggest instrumenting an OSS component and determining whether an instrumented OSS component exhibits normal behavior or exhibits potential threat behavior.
Accordingly, Applicant submits that Salajegheh and Muddu fail to teach or suggest every feature of amended independent claim 1, whether considered individually or in combination. Thus, Applicant respectfully submits that amended independent claim 1 is not obvious over Salajegheh and Muddu and is patentable.
Examiner respectfully disagrees. Examiner submits that Applicant has ignored the teachings of the primary reference Salajegheh and has erroneously focused on the secondary reference Muddu. Salajegheh at para. 67 teaches that “[t]he behavior observer module 302 may be configured to instrument or coordinate various APIs, registers, counters or other components.” Salajegheh thus teaches instrumenting components using an instrumentation tool, namely the behavior observer module 302 (para. 67). Salajegheh further teaches providing an indication of whether instrumented components exhibit normal behavior or exhibit potential threat behavior (see rejection below and para. 37). Salajegheh does not mention that the instrumented components are open source software (OSS) components; Muddu is relied on to disclose accessing data regarding execution of open-source software.
It is well within the skill of one of ordinary skill in the art to instrument open-source software components (for example, the open-source software components taught in Muddu), just like any other non-OSS component. There is no additional technical difficulty in instrumenting OSS components as compared to non-OSS components. Open-source software is a legal construct: a legal framework for protecting shared software for the benefit of the public. The only distinction between otherwise similar OSS components and non-OSS components is the licensing agreement, and a different licensing agreement does not affect the technical process of instrumenting the component. Instrumenting open-source software components is therefore not a technical advancement or achievement beyond the level of ordinary skill in the art.
Even if, as Applicant argues, “nothing in Muddu teaches or suggests instrumenting open source components, and nothing in Muddu suggests providing an indication of whether instrumented OSS components exhibit normal behavior or exhibit potential threat behavior,” this argument is unpersuasive because Muddu is not relied on to teach instrumenting components, and Muddu is not relied on to teach providing an indication as required by claim 1. Instead, Muddu is relied on to teach that which is lacking in the primary reference, namely components that are open-source software.
Applicant asserts that the Wikipedia article (“Exhibit A”) has been filed as evidence under 37 CFR § 1.132, but no declaration or affidavit has been filed. The Wikipedia article is therefore considered as other evidence under 37 CFR § 1.132. However, Exhibit A, even when combined with the attorney arguments and considered in view of the cited references, is insufficient to overcome the rejection.
Even if Exhibit A discloses that “OSS components are developed in a collaborative, public manner and any user is able to participate in developing and examining the code. The public nature of OSS components facilitates public trust in OSS components,” as argued by Applicant, it does not necessarily follow that “there is no motivation to instrument an OSS component and determine whether an instrumented OSS component exhibits normal behavior or exhibits potential threat behavior,” as argued by Applicant on page 9, second paragraph from the bottom. On the contrary, one of ordinary skill in the art would be motivated to instrument an OSS component precisely because of concern about potential threats to a network or computing system that may be affected by the component. Having some public trust in OSS components does not mean trusting every piece of self-purported open-source software to behave correctly, especially since “any capable user is able to participate online in development,” as quoted by Applicant in the middle of page 9. For example, a malicious third party could insert a back door into open-source software to gain access to sensitive information if no one catches it. Blindly trusting every piece of software that purports to be open-source and trustworthy would be naïve and would leave a network or other computing system exposed to malicious third parties. For this reason, one of ordinary skill in the art would instrument open-source software components to monitor for malicious behavior.
Applicant's arguments filed 12/11/2025 have been fully considered and are unpersuasive. Therefore, the rejection is maintained.
Examiner has considered Applicant's remarks to the extent that they may be applicable to the remaining independent claims (e.g., independent claims 7 and 13) and finds them unpersuasive for the same reasons.
Regarding applicant’s arguments with respect to dependent claims 2-6, 8-12, and 14-18, the dependent claims inherit the limitations of the independent claims from which the dependent claims depend and are rejected for the same reasons. Furthermore, the dependent claims do not recite additional limitations that are allowable, as indicated in the office action below.
Accordingly, Applicant's argument is not persuasive with respect to the ineligible subject matter and not persuasive with respect to the rejection under the cited art, and the rejection is maintained. Applicant's arguments/amendments have been fully considered, but are not persuasive. Note that this action is made FINAL. See MPEP § 706.07(a).
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 3-7, 9-13, and 15-18 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claim 1
Claim 1 recites a method, which is one of the four statutory categories (i.e., a process). The method involves accessing data regarding execution of at least one open source software (OSS) component of an application, in which the OSS component is instrumented by an instrumentation tool; processing the data by a trained machine learning (ML) model, the trained ML model providing an indication of normal or threatening behavior exhibited by the OSS component; and communicating the indication.
Under the broadest reasonable interpretation, the terms of the claim are presumed to have their plain meaning consistent with the specification as it would be interpreted by one of ordinary skill in the art. See MPEP 2111. The claim does not provide any details about how the trained machine learning model operates or how the processing is performed, and the plain meaning of “processing” encompasses mental observations or evaluations, e.g., a computer programmer’s mental identification of a potentially threatening behavior exhibited by at least one OSS component. The claim does not limit how the processing is performed, and there is nothing about potentially threatening behavior itself that would limit how it can be analyzed. The claim does not include any additional details that explain the processing of the data or the providing of the indication. Under the broadest reasonable interpretation, the processing and providing steps fall within the mental process grouping of abstract ideas because they cover concepts performed in the human mind, including observation, evaluation, judgment, and opinion. See MPEP 2106.04(a)(2), subsection III. The claim under its broadest reasonable interpretation recites a mental process and is directed to an abstract idea of processing data regarding execution of at least one open source software component of an application.
Claim 1 does not recite additional elements that integrate the judicial exception into a practical application. Accessing data is insignificant extra-solution activity, and the newly added limitation merely specifies a property of the data being accessed, i.e., that the data regards an instrumented OSS component. Communicating the indication is insignificant post-solution activity. See MPEP 2106.04(d).
The judicial exception of “processing the data by a trained machine learning (ML) model” and “the trained ML model providing an indication of whether the at least one OSS component exhibits normal behavior or exhibits potential threat behavior” is performed “by a trained machine learning (ML) model.” The trained ML model is used to generally apply the abstract idea without placing any limits on how the trained ML model functions. Rather, these limitations only recite the outcome of “processing the data” and “providing an indication of whether the at least one OSS component exhibits normal behavior or exhibits potential threat behavior” and do not include any details about how the “processing” and “providing” are accomplished. See MPEP 2106.05(f).
The recitation of “by a trained machine learning (ML) model” in the limitations also merely indicates a field of use or technological environment in which the judicial exception is performed. Although the additional element “by a trained machine learning (ML) model” limits the identified judicial exceptions “processing the data by a trained machine learning (ML) model” and “the trained ML model providing an indication of whether the at least one OSS component exhibits normal behavior or exhibits potential threat behavior”, this type of limitation merely confines the use of the abstract idea to a particular technological environment (computing hardware and software to support machine learning) and thus fails to add an inventive concept to the claims. See MPEP 2106.05(h).
Claim 1 does not recite an additional element or a combination of additional elements in the claim to apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that it is more than a drafting effort designed to monopolize the exception. Claim 1 does not recite any additional elements that may limit the use of the data processing to a practical application. Thus, the claim is directed to the recited abstract idea (judicial exception).
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim does not recite any additional elements that provide an inventive concept. The other additional elements of claim 1, such as the trained machine learning model, in combination with the remaining limitations of claim 1, do not result in significantly more; these are simply generically recited computing elements for implementing the abstract idea. The claim recites accessing data, but this is insignificant pre-solution activity, i.e., mere data gathering. The final step of communicating the indication does not add a meaningful limitation to the process of processing the data; this is merely insignificant post-solution activity.
The recited accessing and communicating steps of claim 1, which, as clarified in the specification at paras. 56-57, 59, and 66, may be performed over a network, are well-known, routine, and conventional. See MPEP 2106.05(d), subsection II.i (“Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network); but see DDR Holdings, LLC v. Hotels.com, L.P., 773 F.3d 1245, 1258, 113 USPQ2d 1097, 1106 (Fed. Cir. 2014) ("Unlike the claims in Ultramercial, the claims at issue here specify how interactions with the Internet are manipulated to yield a desired result‐‐a result that overrides the routine and conventional sequence of events ordinarily triggered by the click of a hyperlink.")”).
The claim does not recite any additional elements which individually or in combination amount to significantly more. Furthermore, as argued above with respect to step 2A, employing generic computer functions to execute an abstract idea does not add significantly more. The claim is not patent eligible.
The dependent claims depending from claim 1 do not recite any additional steps that individually or in combination with the inherited limitations of claim 1 amount to significantly more.
Claim 3 recites clarification of the data regarding execution which is simply providing additional details of the data gathering and is insignificant extra-solution activity. There are no additional elements recited in claim 3 that individually or in combination with other elements of claim 3 recite a practical application or significantly more.
Claim 4 recites that processing the data by the trained ML model comprises inputting various types of data. This simply recites additional details of using the generic computer element. The generically recited computer elements do not add a meaningful limitation to the abstract idea because they amount to simply implementing the abstract idea on a computer system executing the trained ML model. There are no additional elements recited in claim 4 that individually or in combination with other elements of claim 4 recite a practical application or significantly more.
Claim 5 recites that the trained ML model comprises a neural network trained by supervised learning. These limitations clarify the trained ML model and merely indicate additional detail regarding a field of use or technological environment in which the judicial exception is performed. This type of limitation merely confines the use of the abstract idea to a particular technological environment (neural networks) and thus fails to add an inventive concept to the claims. See MPEP 2106.05(h).
Claim 6 recites performing continual learning for the trained ML model using new input training data. This simply recites additional details of using the generic computer element. The generically recited computer elements do not add a meaningful limitation to the abstract idea because they amount to simply implementing the abstract idea on a computer system executing the trained ML model. There are no additional elements recited in claim 6 that individually or in combination with other elements of claim 6 recite a practical application or significantly more.
Independent claim 7 recites a system with features analogous to those of claim 1 and does not integrate a judicial exception into a practical application or recite significantly more, for the same reasons as claim 1. The recited system, processor, memory, and instructions are merely generic computer elements. The generically recited computer elements do not add a meaningful limitation to the abstract idea because they amount to simply implementing the abstract idea by executing operations on a host computer, and they merely attempt to link the mental process to a particular technological environment.
Claim 9 recites limitations analogous to the limitations of claim 3, and does not recite eligible subject matter for reasons similar to the rejection of claim 3.
Claim 10 recites limitations analogous to the limitations of claim 4, and does not recite eligible subject matter for reasons similar to the rejection of claim 4.
Claim 11 recites limitations analogous to the limitations of claim 5, and does not recite eligible subject matter for reasons similar to the rejection of claim 5.
Claim 12 recites limitations analogous to the limitations of claim 6, and does not recite eligible subject matter for reasons similar to the rejection of claim 6.
Independent claim 13 recites a processor-readable medium with features analogous to those of claim 1 and does not integrate a judicial exception into a practical application or recite significantly more, for the same reasons as claim 1. The recited processor-readable medium, instructions, processor, and system are merely generic computer elements. The generically recited computer elements do not add a meaningful limitation to the abstract idea because they amount to simply implementing the abstract idea on a processor-readable medium executed by a processor of a computer system, and they merely attempt to link the mental process to a particular technological environment.
Claim 15 recites limitations analogous to the limitations of claim 3, and does not recite eligible subject matter for reasons similar to the rejection of claim 3.
Claim 16 recites limitations analogous to the limitations of claim 4, and does not recite eligible subject matter for reasons similar to the rejection of claim 4.
Claim 17 recites limitations analogous to the limitations of claim 5, and does not recite eligible subject matter for reasons similar to the rejection of claim 5.
Claim 18 recites limitations analogous to the limitations of claim 6, and does not recite eligible subject matter for reasons similar to the rejection of claim 6.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-4, 6-10, 12-16, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Salajegheh et al., U.S. Publication 20160277435 (hereinafter “Salajegheh”), in view of Muddu et al., U.S. Patent No. 9516053 (hereinafter “Muddu”).
As per claim 1, Salajegheh discloses A method comprising:
[0034] The various aspects provide methods for facilitating crowd-sourcing of smart device behavior information in a user privacy-preserving manner for the use of error reporting, malfunction monitoring, and device performance analysis.
accessing data [monitor smart device behavior by collecting behavior information, para. 37] regarding execution of at least one instrumented component of an application, wherein the at least one instrumented component is instrumented by an instrumentation tool; [instrumented component… use this information to determine whether a particular smart device behavior, condition, sub-system, software application, or process is anomalous, benign, or not benign, para. 37; the instrumentation tool is disclosed by the behavior observer module (para. 67), which is configured to perform instrumentation of components]
Salajegheh [0037] The various aspects include a comprehensive behavioral monitoring and analysis, and anonymization system for intelligently and efficiently identifying, conditions, factors, and/or smart device behaviors that may degrade a smart device's performance and/or power utilization levels over time or when attacked. In an aspect, an observer process, daemon, module, or sub-system (herein collectively referred to as a “module”) of a smart device may instrument or coordinate various application programming interfaces (APIs), registers, counters or other smart device components (herein collectively “instrumented components”) at various levels of the smart device system. The observer module may continuously (or near continuously) monitor smart device behaviors by collecting behavior information from the instrumented component. The smart device may also include an analyzer module, and the observer module may communicate (e.g., via a memory write operation, function call, etc.) the collected behavior information to the analyzer module. The analyzer module may receive and use the behavior information to generate feature or behavior vectors, generate spatial and/or temporal correlations based on the feature/behavior vectors, and use this information to determine whether a particular smart device behavior, condition, sub-system, software application, or process is anomalous, benign, or not benign (i.e., malicious or performance-degrading). The smart device may then use the results of this analysis to heal, cure, isolate, or otherwise fix or respond to identified problems.
[0067] The behavior observer module 302 may be configured to instrument or coordinate various APIs, registers, counters or other components (herein collectively “instrumented components”) at various levels of the smart device system, and continuously (or near continuously) monitor smart device behaviors over a period of time and in real-time by collecting behavior information from the instrumented components as well as those of other. For example, the behavior observer module 302 may monitor library API calls, system call APIs, driver API calls, and other instrumented components by reading information from log files (e.g., API logs, etc.) stored in a memory of the smart device 102.
processing [determine behavior suspicious, para. 88] the data by a trained machine learning (ML) model, [classifier model may be a robust data model that is generated as a function of a large training dataset, para. 48] the trained ML model providing an indication [notify, para. 88] of whether the at least one instrumented component exhibits normal behavior or exhibits potential threat behavior; and
[observer module may continuously (or near continuously) monitor smart device behaviors by collecting behavior information from the instrumented component, Para. 37; when the classifier module 308 determines that a behavior, software application, or process is suspicious, the classifier module 308 may notify the behavior observer module 302, which may adjust the granularity of its observations, para. 88]
[0048] In an aspect, the network server 140 may be configured to generate a classifier model. The full classifier model may be a robust data model that is generated as a function of a large training dataset, which may include thousands of features and billions of entries. In an aspect, the network server 116 may be configured to generate the full classifier model to include all or most of the features, data points, and/or factors that could contribute to the degradation of any of a number of different makes, models, and configurations of smart devices 102. In various aspects, the network server may be configured to generate the full classifier model to describe or express a large corpus of behavior information as a finite state machine, decision nodes, decision trees, or in any information structure that can be modified, culled, augmented, or otherwise used to quickly and efficiently generate leaner classifier models.
Salajegheh [0088] When the classifier module 308 determines that a behavior, software application, or process is suspicious, the classifier module 308 may notify the behavior observer module 302, which may adjust the granularity of its observations (i.e., the level of detail at which smart device behaviors are observed) and/or change the behaviors that are observed based on information received from the classifier module 308 (e.g., results of the real-time analysis operations), generate or collect new or additional behavior information, and send the new/additional information to the behavior analyzer module 304 and/or classifier module 308 for further analysis/classification.
communicating the indication.
[classifier module 308 may notify the behavior observer module 302, para. 88]
However, Salajegheh does not expressly disclose open source software (OSS), as recited in the following limitations:
accessing data regarding execution of at least one instrumented open source software (OSS) component of an application, wherein the at least one instrumented OSS component is instrumented by an instrumentation tool;
processing the data by a trained machine learning (ML) model, the trained ML model providing an indication of whether the at least one instrumented OSS component exhibits normal behavior or exhibits potential threat behavior;
Muddu discloses accessing data regarding execution of open-source software
12:65-13:1 (150) Above the virtualization layer 104, a software framework layer 106 implements the software services executing on the virtualization layer 104. Examples of such software services include open-source software such as Apache Hadoop™,
8:19-36 In general, “machine data” as used herein includes timestamped event data, as discussed further below. Examples of components that may generate machine data from which events can be derived include: web servers, application servers, databases, firewalls, routers, operating systems, and software applications that execute on computer systems, mobile devices, sensors, Internet of Things (IoT) devices, etc. The data generated by such data sources can include, for example, server log files, activity log files, configuration files, messages, network packet data, performance measurements, sensor measurements, etc., which are indicative of performance or operation of a computing system in an information technology environment.
11:12-23 (141) In this description the term “event data” refers to machine data related to activity on a network with respect to an entity of focus, such as one or more users, one or more network nodes, one or more network segments, one or more applications, etc.). In certain embodiments, incoming event data from various data sources is evaluated in two separate data paths: (i) a real-time processing path and (ii) a batch processing path. Preferably, the evaluation of event data in these two data paths occurs concurrently. The real-time processing path is configured to continuously monitor and analyze the incoming event data (e.g., in the form of an unbounded data stream) to uncover anomalies and threats
13:35-39 (153) FIG. 3 shows a high-level conceptual view of the processing within security platform 102 in FIG. 2. A receive data block 202 represents a logical component in which event data and other data are received from one or more data sources.
20:6-10 (187) Events occurring in a computer network may belong to different event categories (e.g., a firewall event, a threat information, a login event) and may be generated by different machines (e.g., a Cisco™ router, a Hadoop™ Distributed File System (HDFS) server, or a cloud-based server such as Amazon Web Services™ (AWS) CloudTrail™).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified Salajegheh with the technique for accessing data regarding execution of open source software of Muddu to include open source software (OSS):
accessing data regarding execution of at least one instrumented open source software (OSS) component of an application, wherein the at least one instrumented OSS component is instrumented by an instrumentation tool;
processing the data by a trained machine learning (ML) model, the trained ML model providing an indication of whether the at least one instrumented OSS component exhibits normal behavior or exhibits potential threat behavior;
One of ordinary skill in the art would have made this modification to improve the ability of the system to access data regarding execution of open-source software. Since some applications, such as Web servers, are based on execution of open-source software, it would be advantageous to access the execution data of such open-source software to determine whether there are any execution abnormalities that indicate potential threats, in order to facilitate implementation of security measures.
As per claim 2, the rejection of claim 1 is incorporated herein.
Salajegheh discloses (at para. 67) the method further comprising generating, by the instrumentation tool, the data [collecting behavior information from the instrumented components] regarding execution of the at least one instrumented component.
[0067] The behavior observer module 302 may be configured to instrument or coordinate various APIs, registers, counters or other components (herein collectively “instrumented components”) at various levels of the smart device system, and continuously (or near continuously) monitor smart device behaviors over a period of time and in real-time by collecting behavior information from the instrumented components as well as those of other. For example, the behavior observer module 302 may monitor library API calls, system call APIs, driver API calls, and other instrumented components by reading information from log files (e.g., API logs, etc.) stored in a memory of the smart device 102.
However, Salajegheh does not expressly disclose the method further comprising generating, by the instrumentation tool, the data regarding execution of the at least one OSS component.
Muddu discloses accessing data regarding execution of open-source software
(see rejection of claim 1)
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified Salajegheh with the technique for accessing data regarding execution of open source software of Muddu to include the method further comprising generating, by the instrumentation tool, the data regarding execution of the at least one OSS component.
One of ordinary skill in the art would have made this modification to improve the ability of the system to access data regarding execution of open-source software. Since some applications, such as Web servers, are based on execution of open-source software, it would be advantageous to access the execution data of such open-source software to determine whether there are any execution abnormalities that indicate potential threats, in order to facilitate implementation of security measures.
As per claim 3, the rejection of claim 1 is incorporated herein.
Salajegheh discloses wherein the data regarding execution of the at least one instrumented component comprises at least one of: which routines are called, memory settings, execution order, [monitor library API calls, system call APIs, driver API calls, and other instrumented components, para. 67; monitoring (i.e., via the behavior observer module 302) a number of features, factors, data points, entries, APIs, states, conditions, behaviors, applications, processes, operations, para. 84] or exceptions raised.
Salajegheh [0067] The behavior observer module 302 may be configured to instrument or coordinate various APIs, registers, counters or other components (herein collectively “instrumented components”) at various levels of the smart device system, and continuously (or near continuously) monitor smart device behaviors over a period of time and in real-time by collecting behavior information from the instrumented components as well as those of other. For example, the behavior observer module 302 may monitor library API calls, system call APIs, driver API calls, and other instrumented components by reading information from log files (e.g., API logs, etc.) stored in a memory of the smart device 102.
Salajegheh [0084] Each classifier model may also include decision criteria for monitoring (i.e., via the behavior observer module 302) a number of features, factors, data points, entries, APIs, states, conditions, behaviors, applications, processes, operations, components, etc. (collectively referred to as “features”) in the smart device 102. Classifier models may be preinstalled on the
However, Salajegheh does not expressly disclose wherein the data regarding execution of the at least one instrumented OSS component comprises at least one of: which routines are called, memory settings, execution order, or exceptions raised.
Muddu discloses accessing data regarding execution of open-source software
(see rejection of claim 1)
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified Salajegheh with the technique for accessing data regarding execution of open source software of Muddu to include wherein the data regarding execution of the at least one instrumented OSS component comprises at least one of: which routines are called, memory settings, execution order, or exceptions raised.
One of ordinary skill in the art would have made this modification to improve the ability of the system to access data regarding execution of open-source software. Since some applications, such as Web servers, are based on execution of open-source software, it would be advantageous to access the execution data of such open-source software to determine whether there are any execution abnormalities that indicate potential threats, in order to facilitate implementation of security measures.
As per claim 4, the rejection of claim 3 is incorporated herein.
Salajegheh discloses
wherein processing the data by the trained ML model comprises inputting, to the trained ML model, at least one of: which routines are called, memory settings, execution order, [monitor library API calls, system call APIs, driver API calls, and other instrumented components, para. 67; monitoring (i.e., via the behavior observer module 302) a number of features, factors, data points, entries, APIs, states, conditions, behaviors, applications, processes, operations, para. 84] or exceptions raised.
[see citations for claim 3]
As per claim 6, the rejection of claim 1 is incorporated herein.
However, Salajegheh does not expressly disclose further comprising performing continual learning for the trained ML model using new input training data.
Muddu discloses retraining a model based on additional event feature sets
48:53-56 (312) At step 2004, the model training process thread continuously retrains the model state as the group-specific data stream provides additional event feature sets.
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified Salajegheh with the technique for retraining a model based on additional event feature sets of Muddu to include further comprising performing continual learning for the trained ML model using new input training data.
One of ordinary skill in the art would have made this modification to improve the ability of the system to improve the performance of the learned model by retraining the model based on additional event feature sets. The system of the primary reference can be modified to retrain the model based on additional event feature sets in order to improve the model performance.
As per claim 7, the claim(s) is/are directed to a system with limitations which correspond to limitations of claim 1, and is/are rejected for the reasons detailed with respect to claim 1. Claim 7 also recites A system comprising: at least one processor; and one or more memory storing instructions which, when executed by the at least one processor, cause the system at least to:
Salajegheh discloses A system comprising: at least one processor; and one or more memory storing instructions which, when executed by the at least one processor, cause the system at least to:
[0130] In one or more exemplary aspects, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored as one or more processor-executable instructions or code on a non-transitory computer-readable storage medium or non-transitory processor-readable storage medium. The operations of a method or algorithm disclosed herein may be embodied in a processor-executable software module which may reside on a non-transitory computer-readable or processor-readable storage medium. Non-transitory computer-readable or processor-readable storage media may be any storage media that may be accessed by a computer or a processor. By way of example but not limitation, such non-transitory computer-readable or processor-readable media may include RAM, ROM, EEPROM, FLASH memory, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to store desired program code in the form of instructions or data structures and that may be accessed by a computer. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above are also included within the scope of non-transitory computer-readable and processor-readable media. Additionally, the operations of a method or algorithm may reside as one or any combination or set of codes and/or instructions on a non-transitory processor-readable medium and/or computer-readable medium, which may be incorporated into a computer program product.
Salajegheh [0017] FIGS. 1A-C are communication system block diagrams illustrating network components of an example smart device system and associated networking framework that are suitable for use with the various aspects.
[0015] Aspects may include a communication system of multiple devices configured with processor-executable instructions to perform operations of one or more of the aspect methods described above.
[0066] Each of the modules 302-310 may be a thread, process, daemon, module, sub-system, or component that is implemented in software, hardware, or a combination thereof. In various aspects, the modules 302-310 may be implemented within parts of the operating system (e.g., within the kernel, in the kernel space, in the user space, etc.), within separate programs or applications, in specialized hardware buffers or processors, or any combination thereof. In an aspect, one or more of the modules 302-310 may be implemented as software instructions executing on one or more processors of the smart device 102.
As per claim 8, the claim(s) is/are directed to a system with limitations which correspond to limitations of claim 2, and is/are rejected for the reasons detailed with respect to claim 2.
As per claim 9, the claim(s) is/are directed to a system with limitations which correspond to limitations of claim 3, and is/are rejected for the reasons detailed with respect to claim 3.
As per claim 10, the claim(s) is/are directed to a system with limitations which correspond to limitations of claim 4, and is/are rejected for the reasons detailed with respect to claim 4.
As per claim 12, the claim(s) is/are directed to a system with limitations which correspond to limitations of claim 6, and is/are rejected for the reasons detailed with respect to claim 6.
As per claim 13, the claim(s) is/are directed to a processor-readable medium with limitations which correspond to limitations of claim 1, and is/are rejected for the reasons detailed with respect to claim 1. Claim 13 also recites A non-transitory processor-readable medium storing instructions which, when executed by at least one processor of a system, causes the system at least to perform:
Salajegheh discloses A processor-readable medium storing instructions which, when executed by at least one processor of a system, causes the system at least to perform:
[See citations for claim 7]
As per claim 14, the claim(s) is/are directed to a processor-readable medium with limitations which correspond to limitations of claim 2, and is/are rejected for the reasons detailed with respect to claim 2.
As per claim 15, the claim(s) is/are directed to a processor-readable medium with limitations which correspond to limitations of claim 3, and is/are rejected for the reasons detailed with respect to claim 3.
As per claim 16, the claim(s) is/are directed to a processor-readable medium with limitations which correspond to limitations of claim 4, and is/are rejected for the reasons detailed with respect to claim 4.
As per claim 18, the claim(s) is/are directed to a processor-readable medium with limitations which correspond to limitations of claim 6, and is/are rejected for the reasons detailed with respect to claim 6.
Claims 5, 11, and 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Salajegheh in view of Muddu, further in view of Kels et al. U.S. Publication 20220201023 (hereinafter “Kels”).
As per claim 5, the rejection of claim 1 is incorporated herein.
However, the combination of Salajegheh and Muddu does not expressly disclose wherein the trained ML model comprises a neural network trained by supervised learning.
Kels discloses a neural network trained by supervised learning
[0048] At block 312, embodiments may utilize the aggregated information to train a machine-learning model to predict a probability that a device, which is still alive and connected to a network but has stopped reporting information, is dysfunctional. As described herein, anomaly detection logic may comprise one or more machine-learning models trained to detect anomalous behavior or device dysfunction. It is contemplated that any suitable supervised machine-learning model or algorithm such as, but not limited to, a neural network, logistic regression, decision tree (which may comprise a decision tree ensemble or random forest), or a Naïve Bayes classifier, may be utilized by anomaly detection logic to determine likelihood of device dysfunction. Training data utilized in supervised learning to train the model(s) or logic may include labeled training data, which may be derived from previous, historical incidents or simulated incidents. As a result of having a trained machine-learning model, embodiments are able to learn the normal reporting patterns of devices within a network which enables the trained machine-learning model to efficiently and accurately predict when a device is dysfunctional or exhibiting abnormal behavior.
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Salajegheh and Muddu with the teaching of a neural network trained by supervised learning of Kels to include wherein the trained ML model comprises a neural network trained by supervised learning.
One of ordinary skill in the art would have made this modification to improve the ability of the system to utilize a neural network trained by supervised learning. The system of the primary reference can be modified to utilize a neural network trained by supervised learning, in order to provide faster and more stable training based on the labeled data, and to facilitate adaptability and scalability as allowed by neural networks.
As per claim 11, the claim(s) is/are directed to a system with limitations which correspond to limitations of claim 5, and is/are rejected for the reasons detailed with respect to claim 5.
As per claim 17, the claim(s) is/are directed to a processor-readable medium with limitations which correspond to limitations of claim 5, and is/are rejected for the reasons detailed with respect to claim 5.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HOWARD H LOUIE whose telephone number is (571)272-0036. The examiner can normally be reached on Monday-Friday 9 AM-5 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jung W. Kim can be reached on 571-272-3804. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HOWARD H. LOUIE/Examiner, Art Unit 2494
/ROBERT B LEUNG/Primary Examiner, Art Unit 2494
1 Emphasis is additional throughout.