Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
2. Claims 1-32 were pending. Claims 1-3, 5-7, 9, 11, 19-21, 23-25, 27, and 29-32 have been amended, and claims 33-41 have been added. Accordingly, claims 1-41 are pending after entry of the present amendments and newly added claims.
3. The present application, 18/426,058, is a divisional of Application No. 17/304,096.
4. This Office action is in response to the remarks filed 11/26/2025.
5. Claims 1, 11 and 19 are independent claims.
6. This Office action is made FINAL.
Information Disclosure Statement
7. The information disclosure statement (IDS) submitted on 11/26/2025 has been considered by the examiner.
Claim Rejections – 35 USC § 101
8. 35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
9. Claims 1-41 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1: This part of the eligibility analysis evaluates whether the claim falls within any statutory category. See MPEP 2106.03.
Claim 1 is directed to a non-transitory computer readable storage media, which is a manufacture, and thus a statutory category of invention (Step 1: YES).
Claim 11 is directed to a system (a product/apparatus claim), and thus falls within one of the statutory categories of invention. (Step 1: YES).
Claim 19 recites a series of steps or acts, and thus is a process. A process is a statutory category of invention. (Step 1: YES).
Step 2A, Prong One: This part of the eligibility analysis evaluates whether the claim recites a judicial exception. As explained in MPEP 2106.04, subsection II, a claim “recites” a judicial exception when the judicial exception is “set forth” or “described” in the claim.
Claim 1 recites, in part, the step: “provide a thread to monitor one or more processes within the message bus”.
Claim 11 recites, in part: “an embedded service configured to provide a thread to monitor one or more processes within the message bus”.
Claim 19 recites, in part, the step: “providing a thread, by the embedded service, to monitor one or more processes within the message bus”.
“Monitoring” is considered a mental process, and thus an abstract idea. These steps therefore fall within the mental process grouping of abstract ideas because they cover concepts that can be performed in the human mind, including observation, evaluation, judgment, and opinion. See MPEP 2106.04(a)(2), subsection III.
Claims 1, 11, and 19 thus recite an abstract idea in the form of a mental process. (Step 2A, Prong One: YES).
Step 2A, Prong Two: This part of the eligibility analysis evaluates whether the claim as a whole integrates the recited judicial exception into a practical application of the exception or whether the claim is “directed to” the judicial exception. This evaluation is performed by (1) identifying whether there are any additional elements recited in the claim beyond the judicial exception, and (2) evaluating those additional elements individually and in combination to determine whether the claim as a whole integrates the exception into a practical application. See MPEP 2106.04(d).
Claim 1 recites the additional elements of “receive events from a distributed file server”, “store metadata and event data associated with the distributed file server” and “based on receipt of a call from the monitoring service, provide, by the embedded service, processor utilization, memory utilization, or both, of the message bus”.
Claim 11 recites the additional elements of: “receive event data via a message bus based on events in the distributed file server; a metadata process configured to scan at least one snapshot of the distributed file server to obtain metadata regarding at least one of the files” and “receive, by the embedded service based on receipt of a call from the monitoring service, processor utilization, memory utilization, or both, of the message bus and the events pipeline”.
Claim 19 recites the additional elements of: “receiving, by a message bus comprising an embedded service, events from a distributed file server”, “storing, by an analytics data engine comprising a monitoring service in communication with the embedded service, metadata and event data associated with the distributed file server” and “based on receipt of a call from the monitoring service, providing, by the embedded service, processor utilization, memory utilization, or both of the message bus”.
“Receiving event data” is considered insignificant extra-solution activity (pre-solution activity of receiving and gathering data).
“Scanning” is merely collecting/analyzing data or visualizing information without a specific technological improvement or inventive application, and is considered insignificant extra-solution activity.
“Storing data” is generally considered insignificant extra-solution activity when it involves only well-understood, routine, and conventional computer functions.
“Providing processor utilization, memory utilization, or both of the message bus” is considered insignificant extra-solution activity (post-solution activity of outputting a result).
Claims 1, 11 and 19 recite the additional elements of: “At least one non-transitory computer readable storage media, a message bus comprising an embedded service, a distributed file server, an analytics data engine/system”.
These additional elements are recited at a high level of generality and amount to no more than mere instructions to apply the abstract idea on a computer, which, per MPEP 2106.05(f), does not provide integration into a practical application.
Claims 1, 11 and 19 do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The additional elements amount to no more than mere instructions to apply the exception using a generic computer component and insignificant extra-solution activity. Even when viewed in combination, these additional elements do not integrate the recited judicial exception into a practical application (Step 2A, Prong Two: NO), and the claims are directed to the judicial exception. (Step 2A: YES).
Step 2B: This part of the eligibility analysis evaluates whether the claim as a whole amounts to significantly more than the recited exception, i.e., whether any additional element, or combination of additional elements, adds an inventive concept to the claim. See MPEP 2106.05.
As explained with respect to Step 2A, Prong Two, the additional elements were found to be mere instructions to apply the exception using a generic computer component and insignificant extra-solution activity.
However, a conclusion that an additional element is insignificant extra-solution activity in Step 2A, Prong Two should be re-evaluated in Step 2B. See MPEP 2106.05, subsection I.A. At Step 2B, the evaluation of the insignificant extra-solution activity consideration takes into account whether or not the extra-solution activity is well understood, routine, and conventional in the field. See MPEP 2106.05(g).
Here, the additional elements are mere data gathering and presentation of results, recited at a high level of generality and, as discussed in the disclosure, well-understood. Therefore, these limitations remain insignificant extra-solution activity even upon reconsideration and do not amount to significantly more.
Even when considered in combination, these additional elements represent insignificant extra-solution activity, which does not provide an inventive concept. (Step 2B: NO). The claims are not eligible.
The dependent claims merely incorporate additional elements that narrow the abstract idea without yielding an improvement to any technical field or to the computer itself, and without limitations beyond merely linking the idea to a particular technological environment:
Claims 2, 3, 5-7, 9, 23, 25, 27, 29, 31 and 32 (mental process grouping of abstract ideas);
Claims 4, 16, 18, 22, 34, 35, 37, 38, 40 and 41 (mere instructions to implement an abstract idea); and
Claims 8, 10, 12-15, 17, 21, 24, 26, 28, 33, 36 and 39 (insignificant extra-solution activity).
Examiner Note
10. The Examiner cites particular columns and line numbers in the references as applied to the claims below for the convenience of the Applicant(s). Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the Applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner.
Claim Rejections - 35 USC § 102
11. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
12. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) The claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention;
13. Claims 1-5, 7, 9, 10, 19-23, 25, 27-29, 32-35 and 39-41 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Muddu et al. (US 20170063907 A1), hereinafter Muddu.
14. Regarding claims 1-5, 7, 9, 10, 29 and 33-35, these claims recite at least one non-transitory computer readable storage media encoded with instructions which, when executed, cause a system to perform the methods of claims 19-23, 25, 27-28, 32 and 39-41, respectively, and are rejected under the same rationale.
15. Regarding claim 19, Muddu teaches A method comprising:
receiving, by a message bus comprising an embedded service, events from a distributed file server ([0189], “in many instances, machine data can be more than mere logs—it can include configurations, data from APIs, message queues, change events, the output of diagnostic commands, call detail records, sensor data from industrial systems, and so forth.”, Fig 8, [0193], [0195], “Embodiments of the data connectors 802 (a message bus) can provide support for accessing/receiving indexed data, unindexed data (e.g., data directly from a machine at which an event occurs), data from a third-party provider (e.g., threat feeds such as Norse™, or messages from AWS™ CloudTrail™), or data from a distributed file system (e.g., HDFS™).”, [0202], “The data connectors 802 (a message bus) can implement various techniques (an embedded service) to obtain machine data from the data sources…the data connectors 802 can adopt a pull mechanism, a push mechanism, or a hybrid mechanism (an embedded service). For those data sources (e.g., a query-based system, such as Splunk®) that use a pull mechanism, the data connectors 802 actively collect the data by issuing suitable instructions to the data sources to grab data from those data sources into the security platform… the data connectors 802 can receive from the data source a notification of a new event, acknowledges the notification, and at a suitable time communicate with the data source to receive the event.”, [0205], “the data connectors 802 (a message bus) obtain/receive the data… the format detector 804 can embed regular expression rules and/or statistical rules in performing the format detection.”, [0220-0221], “The messaging system (e.g., Apache Kafka™) (a message bus)”, [0278], “the messaging platform 1518 can be Apache Kafka, an open-source message broker utilizing a publish-subscribe messaging protocol. 
For example, the messaging platform 1518 can deliver (e.g., via self-triggered interrupt messages or message queues) the event feature sets from the unbounded stream 1502 to model-related process threads (e.g., one or more of model training process threads, model deliberation process threads, and model preparation process threads) running in the distributed computation system 1520.”, [0287], “the messaging platform 1518 (a message bus) can be Apache Kafka, an open-source message broker utilizing a publish-subscribe messaging protocol.”, [0327], “exporting the result of identity resolution (e.g., into Redis™), exporting the time-series data (e.g., into OpenTSDB™), or pushing the anomalies raised by the batch event processing engine into a messaging system (e.g., Kafka™).”);
In line with Applicant's Pre-Grant Publication:
[0074] the analytics VM 170 may include three containers – (1) a message bus (e.g., Kafka server), (2) an analytics data engine (e.g., Elastic Search), and (3) an API server, which may host various processes. During operation, the analytics VM 170 may perform multiple functions related to information collection, including a metadata collection process to receive metadata associated with the file system, a configuration information collection process to receive configuration and user information from the VFS 160, and an event data collection process to receive event data from the VFS 160.
[0140] In order to facilitate monitoring without unduly disrupting service operation, services running on the analytics system (e.g., analytics VM 270) may have an embedded remote procedure call (RPC) service. The embedded RPC service may, for example, provide a separate thread for the service that is monitoring the health of the main process thread. In some examples, the separate monitoring thread may collect particular health information – e.g., number of connections, number of requests being serviced, CPU utilization, and memory utilization. The monitoring service 288 may call the embedded RPC service in the processes to obtain monitoring information in some examples. This may minimize and/or reduce disruption to the operation of the services. Accordingly, the monitoring service 288 may make API calls to some services to obtain monitoring information, and may make calls to embedded RPC services for other components.
[0155] The message broker 314 may, for example, be implemented using a broker which may be hosted on a software bus, e.g., a Kafka server. The message broker may store and/or process messages according to topics. Each topic may be associated with a number of partitions, with a higher number of partitions corresponding to a faster possible rate of data processing.
storing, by an analytics data engine comprising a monitoring service in communication with the embedded service, metadata and event data associated with the distributed file server (Fig 4, [0169], “analysis module 330”, [0171-0172], “The event data that underlies those notifications or that gives rise to the detection made by the analysis module 330 (an analytics data engine) are persistently stored in a database 378.”, [0176], Fig 8, [0195], “storing/indexing event data associated with the distributed file system (e.g., HDFS™)”);
providing a thread, by the embedded service, to monitor one or more processes within the message bus ([0141], “monitoring events that occur on the cloud-based servers”, [0147], “The real-time processing path is configured to continuously monitor and analyze the incoming event data (e.g., in the form of an unbounded data stream) to uncover anomalies and threats.”, [0320], “the model deliberation process thread can decommission itself at step 2116. In some embodiments, a separate process thread can perform steps 2114 and 2116 by externally monitoring the health status of the model deliberation process thread.”, [0184], [0188], “monitor user actions and interactions, and to derive other insights like user behavior baseline, anomalies and threats.”, [0438]); and
based on receipt of a call from the monitoring service, providing, by the embedded service, processor utilization, memory utilization, or both of the message bus ([0272], “memory consumption and processing power requirement”, [0430], “The computer system can further have a memory storage size limit. Once the size of the data structures representing the edges of the composite relationship graphs stored in the memory exceeds the memory storage size limit, the computer system transfers the data structures currently in the memory of the computer system to the persistent storage (monitoring, memory utilization).”, [0521], “The “memory capacity” of the PST model can be controlled by the maximum length of historic symbols, which is the probabilistic suffix tree's depth, and is the length of the Markov chain.”).
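For context only, the embedded-service monitoring arrangement described in Applicant's paragraph [0140] (a separate thread that collects health information such as CPU and memory utilization, returned upon a call from a monitoring service) can be sketched as follows. All names and values below are hypothetical illustrations by the editor and are not drawn from the claims, Applicant's disclosure, or Muddu:

```python
import threading
import time

class EmbeddedMonitor:
    """Illustrative sketch only: a separate thread tracks health metrics
    of a main process, and a monitoring service retrieves them via a
    single call, minimizing disruption to the main process thread."""

    def __init__(self):
        # Health information of the kind described in [0140].
        self.metrics = {"connections": 0, "requests": 0,
                        "cpu_utilization": 0.0, "memory_utilization": 0.0}
        self._lock = threading.Lock()
        self._running = False

    def _collect(self):
        # Separate monitoring thread: periodically sample health metrics.
        while self._running:
            with self._lock:
                # Placeholder values; a real service would read OS counters.
                self.metrics["cpu_utilization"] = 0.05
                self.metrics["memory_utilization"] = 0.10
            time.sleep(0.01)

    def start(self):
        self._running = True
        threading.Thread(target=self._collect, daemon=True).start()

    def get_stats(self):
        # Entry point invoked by the monitoring service's call (RPC-like).
        with self._lock:
            return dict(self.metrics)

monitor = EmbeddedMonitor()
monitor.start()
time.sleep(0.05)             # allow the monitor thread to sample
stats = monitor.get_stats()  # the monitoring service "calls" the embedded service
print(stats["cpu_utilization"], stats["memory_utilization"])
```

The sketch's key property, matching the claim language, is that monitoring runs in a thread separate from the monitored work and reports utilization only upon receipt of a call.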
15. Regarding claim 20, Muddu teaches the invention as claimed in claim 19 above and further teaches wherein the embedded service further monitors whether an events pipeline used to receive the event data is operating ([0141], “monitoring events that occur on the cloud-based servers”, [0147], “The real-time processing path is configured to continuously monitor and analyze the incoming event data (e.g., in the form of an unbounded data stream) to uncover anomalies and threats.”, [0184], [0188], “monitor user actions and interactions, and to derive other insights like user behavior baseline, anomalies and threats.”, [0438], [0190], “the data intake and preparation stage includes an ETL engine/pipeline”, [0292], “The ML-based CEP engine 1500 can provide (e.g., stream via a data pipeline) the selected and formatted event feature sets to a model-related process thread of the model type 1602.”, [0357], “event data 2302 is received by a security platform from a plurality of entities associated with the computer network via an ETL pipeline.”).
16. Regarding claim 21, Muddu teaches the invention as claimed in claim 19 above and further teaches wherein the embedded service further provides an application programming interface (API) call to the message bus ([0173], “The access layer 364 includes the APIs for accessing the various databases and the user interfaces in the UI 350. For example, block 366A represents the API for accessing the HBase or HDFS (Hadoop File Service) databases. Block 366B represents the various APIs compatible for accessing servers implementing sockets.io or node.js servers. SQL API 366C represents the API for accessing the SQL data store 378, which stores data pertaining to the detected threats and anomalies.”, [0189], “machine data can be more than mere logs—it can include configurations, data from APIs, message queues, change events, the output of diagnostic commands, call detail records, sensor data from industrial systems, and so forth.”, [0722], “The example event can be a “cloudtrail” event 8105, which is an event representative of application programming interface (API) calls for a web service.”).
17. Regarding claim 22, Muddu teaches the invention as claimed in claim 21 above and further teaches wherein the message bus comprises a Kafka server ([0220-0221], [0287], “Apache Kafka”).
18. Regarding claim 23, Muddu teaches the invention as claimed in claim 19 above and further teaches wherein the embedded service further monitors a software stack from an infra layer to an application layer ([0155]).
19. Regarding claim 25, Muddu teaches the invention as claimed in claim 19 above and further teaches wherein the embedded service further monitors the processor utilization, the memory utilization, or both of the analytics data engine, including at least one process operating within the analytics data engine ([0272], “memory consumption and processing power requirement”, [0430]).
20. Regarding claim 27, Muddu teaches the invention as claimed in claim 19 above and further teaches wherein the thread further monitors resource usage of at least one process ([0202], “it is preferable to reduce data movement so as to conserve network resources”, [0232], “human resource management system (HRMS)”, [0272], [0282], “resource management systems”, [0668]).
21. Regarding claim 28, Muddu teaches the invention as claimed in claim 27 above and further teaches the method further comprising raising an alert based on a comparison of the resource usage with a threshold for longer than a threshold time ([0171], [0523], [0542]).
22. Regarding claim 32, Muddu teaches the invention as claimed in claim 19 above and further teaches wherein the embedded service is further configured to monitor the processor utilization, the memory utilization, or both, of an application programming interface (API) server, including monitoring at least one process operating within the API server ([0159], [0173], [0189]).
23. Regarding claim 39, Muddu teaches the invention as claimed in claim 19 above and further teaches receiving, by an application programming interface (API) server, a query for an analytics report associated with the distributed file server; and providing, by the API server, metrics regarding the distributed file server based at least in part on the metadata and event data ([0159], “ETL”, [0202-0203], [0325]).
24. Regarding claim 40, Muddu teaches the invention as claimed in claim 19 above and further teaches wherein the thread provided by the embedded service is separate from a thread that receives event data or metadata ([0395], “model-specific process threads”, [0301]).
25. Regarding claim 41, Muddu teaches the invention as claimed in claim 19 above and further teaches wherein one or more of the processes in the message bus comprise a metadata collection process, a configuration information collection process, an event data collection process, or combinations thereof ([0164], “data receivers 310, which implement various APIs and connectors to receive (or retrieve, depending on the mechanism) the event data for the security platform 300.”).
Claim Rejections - 35 USC § 103
26. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
27. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
28. Claims 6, 8, 11-18, 24, 26, 30, 31 and 36-38 are rejected under 35 U.S.C. 103 as being unpatentable over Muddu et al. (US 20170063907 A1), hereinafter Muddu, in view of Arikatla et al. (US 20180157752 A1), hereinafter Arikatla.
29. Regarding claim 11, Muddu teaches A system comprising:
a distributed file server hosting files across multiple computing nodes ([0283], [0412], “the Hadoop distributed file system (HDFS)”); and
an analytics system (Fig 1, [0137], “a data processing and analytics system (and, as a particular example, a security platform) that employs a variety of techniques and mechanisms for anomalous activity detection in a networked environment in ways that are more insightful and scalable than the conventional techniques.”, Fig 4, [0170], “analysis module 330”),
the analytics system comprising: an events pipeline configured to receive event data via a message bus based on events in the distributed file server ([0141], “monitoring events that occur on the cloud-based servers”, [0147], “The real-time processing path is configured to continuously monitor and analyze the incoming event data (e.g., in the form of an unbounded data stream) to uncover anomalies and threats.”, Fig 4, [0170], [0176], [0184], [0188], “monitor user actions and interactions, and to derive other insights like user behavior baseline, anomalies and threats.”, [0438], [0190], “the data intake and preparation stage includes an ETL engine/pipeline”, [0189], “in many instances, machine data can be more than mere logs—it can include configurations, data from APIs, message queues, change events, the output of diagnostic commands, call detail records, sensor data from industrial systems, and so forth.”, Fig 8, [0195], “Embodiments of the data connectors 802 can provide support for accessing/receiving indexed data, unindexed data (e.g., data directly from a machine at which an event occurs), data from a third-party provider (e.g., threat feeds such as Norse™, or messages from AWS™ CloudTrail™), or data from a distributed file system (e.g., HDFS™).”, [0191], “Events occurring in a computer network may belong to different event categories (e.g., a firewall event, a threat information, a login event) and may be generated by different machines (e.g., a Cisco™ router, a Hadoop™ Distributed File System (HDFS) server, or a cloud-based server such as Amazon Web Services™ (AWS) CloudTrail™).”, [0287], “the messaging platform 1518 can deliver (e.g., via self-triggered interrupt messages or message queues (a message bus)) the event feature sets from the unbounded stream 1502 to model-related process threads (e.g., one or more of model training process threads, model deliberation process threads, and model preparation process threads) running in the distributed computation
system 1520.”, [0292], “The ML-based CEP engine 1500 can provide (e.g., stream via a data pipeline) the selected and formatted event feature sets to a model-related process thread of the model type 1602.”, [0357], “event data 2302 is received by a security platform from a plurality of entities associated with the computer network via an ETL pipeline.”, [0704]);
a metadata process configured to obtain metadata regarding at least one of the files (Fig 3, [0159], “An ETL block 204 is the data preparation component in which data received from the receive data block 202 is pre-processed, for example, by adding data and/or metadata to the event data (a process interchangeably called decoration, enrichment or annotation herein), or otherwise prepared, to allow more effective consumption by downstream data consumers (e.g., machine learning models).”, Fig 4, [0169], “analysis module 330”, [0171-0172], “The event data that underlies those notifications or that gives rise to the detection made by the analysis module 330 (an analytics data engine) are persistently stored in a database 378.”, [0176], Fig 8, [0195], “storing/indexing event data associated with the distributed file system (e.g., HDFS™)”);
an embedded service configured to provide a thread to monitor one or more processes within the message bus ([0141], “monitoring events that occur on the cloud-based servers”, [0147], “The real-time processing path is configured to continuously monitor and analyze the incoming event data (e.g., in the form of an unbounded data stream) to uncover anomalies and threats.”, [0320], “the model deliberation process thread can decommission itself at step 2116. In some embodiments, a separate process thread can perform steps 2114 and 2116 by externally monitoring the health status of the model deliberation process thread.”, [0184], [0188], “monitor user actions and interactions, and to derive other insights like user behavior baseline, anomalies and threats.”, [0438]);
a monitoring service configured to communicate with the embedded service and receive, by the embedded service based on receipt of a call from the monitoring service, processor utilization, memory utilization, or both, of the message bus and the events pipeline ([0272], “memory consumption and processing power requirement”, [0430], “The computer system can further have a memory storage size limit. Once the size of the data structures representing the edges of the composite relationship graphs stored in the memory exceeds the memory storage size limit, the computer system transfers the data structures currently in the memory of the computer system to the persistent storage (monitoring, memory utilization).”, [0521], “The “memory capacity” of the PST model can be controlled by the maximum length of historic symbols, which is the probabilistic suffix tree's depth, and is the length of the Markov chain.”).
Muddu did not specifically teach a metadata process configured to scan at least one snapshot of the distributed file server to obtain metadata regarding at least one of the files.
However, Arikatla teaches a metadata process configured to scan at least one snapshot of the distributed file server to obtain metadata regarding at least one of the files ([0070], “Metadata for attributes of storage items that is captured in the snapshot of the consistency group for the FSVMs may be stored in a database.”, [0143-0152], [0147] “Light weight snapshot includes share data delta, metadata and file-server configuration leading to less replication traffic across sites”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Arikatla into Muddu's system, because both systems relate to distributed and cloud computing, and the combination would provide transparent referral for distributed file servers.
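For context only, the element mapped to Arikatla ([0070]), a metadata process that scans a point-in-time snapshot of a file server to obtain per-file metadata, can be sketched as follows. The directory layout, function name, and field names are hypothetical illustrations by the editor and are not drawn from either reference:

```python
import tempfile
from pathlib import Path

def scan_snapshot(snapshot_root):
    """Illustrative sketch only: walk a point-in-time snapshot directory
    and collect per-file metadata (relative path, size, modification time)."""
    records = []
    for path in Path(snapshot_root).rglob("*"):
        if path.is_file():
            st = path.stat()
            records.append({"path": str(path.relative_to(snapshot_root)),
                            "size": st.st_size,
                            "mtime": st.st_mtime})
    return records

# Usage with a throwaway directory standing in for a file-server snapshot.
root = tempfile.mkdtemp()
(Path(root) / "share").mkdir()
(Path(root) / "share" / "a.txt").write_text("hello")
meta = scan_snapshot(root)
print(len(meta), meta[0]["path"], meta[0]["size"])
```

Scanning a frozen snapshot, rather than the live file system, is what allows metadata collection without blocking ongoing file-server operations.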
30. Regarding claim 12, Muddu and Arikatla teach the invention as claimed in claim 11 above and Muddu further teaches wherein the events pipeline is configured to receive event data based on events at the multiple computing nodes ([0147], Fig 4, [0163], [0179], [0186], [0194], event data from network/group/a distributed computer cluster of nodes).
31. Regarding claim 13, Muddu and Arikatla teach the invention as claimed in claim 11 above and Muddu further teaches wherein the events pipeline is configured to provide event data to a partition of an events processor ([0223], [0293], [0309], “partition a keyspace among a distributed set of nodes”, [0412], “The nodes and edges of the composite relationship can be partitioned based on the timestamps (from the event data) of the corresponding network activities. Each data file can be designated for storing nodes and edges for a particular time period.”).
32. Regarding claim 14, Muddu and Arikatla teach the invention as claimed in claim 11 above and Arikatla further teaches wherein the analytics system is configured to provide an analytics report for the distributed file server based on the metadata from the at least one snapshot and the event data ([0171-0174], [0193]).
33. Regarding claim 15, Muddu and Arikatla teach the invention as claimed in claim 11 above and Muddu further teaches the system further comprising an analytics data store configured to store the event data, the metadata, or a combination thereof, wherein the event data, the metadata, or a combination thereof, may each be stored with an index indicator (Fig 4, [0169], “analysis module 330”, [0171-0172], “The event data that underlies those notifications or that gives rise to the detection made by the analysis module 330 (an analytics data engine)are persistently stored in a database 378.”, [0176], Fig 8, [0195], “storing/indexing event data associated with the distributed file system (e.g., HDFS™)”).
34. Regarding claim 16, Muddu and Arikatla teach the invention as claimed in claim 15 above and Muddu further teaches wherein the event data, the metadata, or a combination thereof, may each be stored with an index indicator (Fig 4, [0169], “analysis module 330”, [0171-0172], “The event data that underlies those notifications or that gives rise to the detection made by the analysis module 330 (an analytics data engine) are persistently stored in a database 378.”, [0176], Fig 8, [0195], “storing/indexing event data associated with the distributed file system (e.g., HDFS™)”, [0207], “for a particular data source, the configuration file can identify, in the received data representing an event, which field represents a token that may correspond to a timestamp, an entity, an action, an IP address, an event identifier (ID), a process ID, a type of the event, a type of machine that generates the event, and so forth.”).
35. Regarding claim 17, Muddu and Arikatla teach the invention as claimed in claim 16 above and Muddu further teaches wherein the analytics data store may further be configured to generate one or more indices for the event data, the metadata, or a combination thereof based at least on the index indicator (Fig 4, [0169], “analysis module 330”, [0171-0172], “The event data that underlies those notifications or that gives rise to the detection made by the analysis module 330 (an analytics data engine) are persistently stored in a database 378.”, [0176], Fig 8, [0195], “storing/indexing event data associated with the distributed file system (e.g., HDFS™)”).
36. Regarding claim 18, Muddu and Arikatla teach the invention as claimed in claim 17 above and Muddu further teaches wherein each index of the one or more indices may comprise an anomaly index, a capacity index, an audit log index, a unique user identification (ID) index, or combinations thereof (Fig 7A, 7B, 24, 25, [0138], “anomaly or threat summary”, [0452-0453], [0457], [0481]).
37. Regarding claim 24, Muddu teaches the invention as claimed in claim 19 above. Muddu does not specifically teach wherein the embedded service further provides a call to a remote procedure call (RPC) service embedded in at least one process.
However, Arikatla teaches the embedded service further provides a call to a remote procedure call (RPC) service embedded in at least one process ([0157], “the VFS layer makes an RPC call to a file server service running in PRISM CENTRAL to identify a location (which may be an optimal location) for ‘dir7’.”, [0164]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Arikatla's system into Muddu's system because both systems are related to distributed and cloud computing, and the combination would provide a transparent referral for distributed file servers.
38. Regarding claim 26, Muddu teaches the invention as claimed in claim 19 above. Muddu does not specifically teach displaying an indicator when at least one service is down, when resource usage of at least one monitored process within the message bus is beyond a threshold, or both.
However, Arikatla teaches displaying an indicator when at least one service is down, when resource usage of at least one monitored process within the message bus is beyond a threshold, or both ([0056], [0094-0095], [0193], [0143-0152], [0171-0174]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Arikatla's system into Muddu's system because both systems are related to distributed and cloud computing, and the combination would provide a transparent referral for distributed file servers.
39. Regarding claim 30, Muddu and Arikatla teach the invention as claimed in claim 11 above and Muddu further teaches the system further comprising an analytics data engine, wherein the embedded service is further configured to monitor the analytics data engine, including monitoring processor utilization, memory utilization, or both, of the analytics data engine, wherein said monitoring includes monitoring at least one process operating within the analytics data engine ([0272], “memory consumption and processing power requirement”, [0430], “The computer system can further have a memory storage size limit. Once the size of the data structures representing the edges of the composite relationship graphs stored in the memory exceeds the memory storage size limit, the computer system transfers the data structures currently in the memory of the computer system to the persistent storage (monitoring, memory utilization).”, [0521], “The “memory capacity” of the PST model can be controlled by the maximum length of historic symbols, which is the probabilistic suffix tree's depth, and is the length of the Markov chain.”).
40. Regarding claim 31, Muddu and Arikatla teach the invention as claimed in claim 11 above and Muddu further teaches the system further comprising an application programming interface (API) server, wherein the embedded service is further configured to monitor processor utilization, memory utilization, or both, of the API server, wherein said monitoring includes monitoring at least one process operating within the API server ([0159], [0173], [0189]).
41. Regarding claim 36, Muddu and Arikatla teach the invention as claimed in claim 19 above and Muddu further teaches receiving, by an application programming interface (API) server, a query for an analytics report associated with the distributed file server; and providing, by the API server, metrics regarding the distributed file server based at least in part on the metadata and event data ([0159], “ETL”, [0202-0203], [0325]).
42. Regarding claim 37, Muddu and Arikatla teach the invention as claimed in claim 19 above and Muddu further teaches wherein the thread provided by the embedded service is separate from a thread that receives event data or metadata ([0395], “model-specific process threads”, [0301]).
43. Regarding claim 38, Muddu and Arikatla teach the invention as claimed in claim 19 above and Muddu further teaches wherein one or more of the processes in the message bus comprise a metadata collection process, a configuration information collection process, an event data collection process, or combinations thereof ([0164], “data receivers 310, which implement various APIs and connectors to receive (or retrieve, depending on the mechanism) the event data for the security platform 300.”).
44. Regarding claims 6 and 8, these claims recite at least one non-transitory computer readable storage medium encoded with instructions which, when executed, cause a system to perform the methods of claims 24 and 26, respectively, and are rejected under the same rationale.
Response to Amendments and Arguments
45. Applicant's arguments received on 11/26/2025 regarding the eligibility for patenting under 35 U.S.C. § 101 have been fully considered but they are not persuasive.
46. In the remarks, Applicant respectfully submits that Muddu in view of Arikatla fails to disclose the combination of recitations claimed. Each of claims 1-3, 5-7, 9, 11, 19-21, 23-25, 27, 29-32 has been amended and claims 33-41 have been added. Applicant respectfully submits that the combination of references fails to teach or suggest each and every element of amended independent claim 1.
47. Applicant's arguments received on 11/26/2025 have been fully considered but they are not persuasive. Referring to the previous Office action, Examiner has cited relevant portions of the references as a means to illustrate the systems as taught by the prior art. As a means of providing further clarification as to what is taught by the references used in the first Office action, Examiner has expanded the teachings for comprehensibility while maintaining the same grounds of rejection of the claims, except as noted above in the section labeled “Status of Claims.” This information is intended to assist in illuminating the teachings of the references while providing evidence that establishes further support for the rejections of the claims.
CONCLUSION
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HICHAM SKHOUN whose telephone number is (571)272-9466. The examiner can normally be reached Mon-Fri 10am-6:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amy Ng, can be reached at 571-270-1698. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HICHAM SKHOUN/Primary Examiner, Art Unit 2164