DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This Office Action is responsive to the response filed on September 29, 2025. In this Office Action:
Claims 1, 3-7, 9-13, and 15-18 are pending.
Claims 1, 3-7, 9-13, and 15-18 are rejected.
Summary of Previous Office Action
In the Non-Final Office Action mailed on March 27, 2025:
Claims 1-8 and 13-14 were objected to because of informalities.
Claims 1-18 were rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention.
Claims 1, 7, and 13 were rejected under 35 U.S.C. 103 as being unpatentable over Chen et al. (Pub. No. US 2021/0334186), hereinafter Chen; in view of Jueping (Pub. CN115037621A, published on 09/09/2022).
Claims 2-3, 5-6, 8-9, 11-12, 14-15, and 17-18 were rejected under 35 U.S.C. 103 as being unpatentable over Chen et al. (Pub. No. US 2021/0334186), hereinafter Chen; in view of Jueping (Pub. CN115037621A, published on 09/09/2022); and further in view of Myla et al. (Patent No. US 11,405,261), hereinafter Myla.
Claims 4, 10, and 16 were rejected under 35 U.S.C. 103 as being unpatentable over Chen et al. (Pub. No. US 2021/0334186), hereinafter Chen; in view of Jueping (Pub. CN115037621A, published on 09/09/2022); further in view of Myla et al. (Patent No. US 11,405,261), hereinafter Myla; and further in view of Mazzaferri et al. (Pub. No. US 2007/0174429), hereinafter Mazzaferri.
Response to Amendment
The amendments filed on September 29, 2025, have been entered.
Claims 1, 3, 7, 9, 13, and 15 have been amended.
Claims 2, 8, and 14 have been canceled.
The previously raised claim objections are withdrawn for claims 2-8 and 13-14 in light of the amendments and are maintained in part for claim 1, as presented in this Office Action (see the Claim Objections section below).
The previously raised 35 U.S.C. 112(b) rejections are withdrawn for claims 1-6, 8 (canceled), and 13-18 in light of the amendments and are maintained for claims 7 and 9-12, as presented in this Office Action (see the Claim Rejections - 35 USC § 112 section below).
Response to Arguments
Applicant's Arguments/Remarks regarding the Claim Rejections under 35 U.S.C. 103 (Pages 7-10), filed on September 29, 2025, have been fully considered by the Examiner, but are not persuasive.
Applicant’s argument:
The Applicant argues the following:
... Amended claim 1 now recites "requesting, by the one or more collectors, process information repeatedly from the device for a predetermined process information period of time;" and "receiving, by the stream processor, the process information repeatedly at a predetermined process information frequency from the one or more collectors."
Applicant respectfully submits that these amendments define a novel and non-obvious invention over the cited art. Amended claim 1 recites a specific, two-tiered monitoring and data collection method. First, a baseline monitoring of memory utilization is performed. Second, only if this baseline memory utilization crosses a threshold, a new sampling strategy is sent to the collectors. The amendments clarify that the overall method also involves a distinct and separate collection of process information, which is requested repeatedly for a "period of time" and received repeatedly at a "frequency."
The combination of Chen, Jueping, and Myla fails to teach this nuanced, event-driven approach. Chen discloses a system for identifying memory leaks. When a "severity score" is exceeded, Chen's system sends a "diagnosis command" to the agents to collect "additional memory usage data" (Office Action, pg. 3, citing Chen [0048-0054]). Chen's command is a reactive measure to gather more of the same type of data (memory usage) to diagnose a specific problem.
Applicant submits that Myla is an improper reference because it solves a completely different technical problem using a completely different technical solution. Myla's collector device simply receives and analyzes telemetry data for a general purpose: determining the device's overall health and status. The central problem addressed by Myla is bandwidth optimization for routine, continuous data exporting. Myla's "collector device" is a passive data sink; it simply receives telemetry data that is pushed to it by the "network device." The intelligent actor in Myla, the component that determines timing, packages the data, and initiates the transfer, is the network device itself, not the collector device.
The claimed stream processor, however, performs a much more specific and sophisticated task. It actively monitors an incoming data stream to detect when a specific metric (average memory utilization) crosses a predefined threshold. This threshold-crossing event is a critical trigger for a subsequent action. Myla's device is a passive data aggregator for general analysis; the claimed stream processor is an active, event-driven decision-making engine. Therefore, because Myla's "collector device" does not actively request data like the claimed collector and does not perform event-driven, threshold-based monitoring like the claimed stream processor, it does not teach or suggest the claimed invention.
Examiner’s response to Argument:
The Examiner respectfully disagrees.
Chen discloses requesting, by the one or more collectors, process information repeatedly from the device for a predetermined process information period of time (See Parag. [0038-0040]; the agents 304 (one or more collectors) may include a monitoring agent that collects process related performance counters associated with specific processes … The memory usage data 306 may refer to various types of information indicative of memory usage on the host nodes 302 (device). For example, the memory usage data 306 may include performance counters indicative of memory usage for specific processes … the agents 304 may obtain or capture snapshots of memory usage at points in time representative of allocation of memory blocks at the specific points in time; the agents 304 may capture a current status of memory allocation at five-minute intervals (a predetermined process information period of time). Examiner’s interpretation: In the context of network monitoring or application performance, obtaining snapshot data often involves making a request to retrieve that data).
Chen doesn’t explicitly disclose receiving, by the stream processor, the process information repeatedly at a predetermined process information frequency from the one or more collectors.
However, Myla discloses receiving, by the stream processor, the process information repeatedly at a predetermined process information frequency from the one or more collectors (See Col. 4 lines 60-67 and Col. 5 lines 1-9; network device (the one or more collectors) may determine a first time interval (a predetermined process information frequency) (e.g., every 2 seconds, every 5 seconds, every 10 seconds, every 30 seconds, and/or the like) to collect and send the delta values of the telemetry data to the collector device (stream processor). See Col. 2 lines 24-40; The collector device analyzes the telemetry data to determine a status of the network device, a health of the network device, and/or the like ... Examiner’s interpretation: The Examiner interpreted the network device, taught by Myla, as a collector as it collects and sends the telemetry data including information of the one or more resources of the network device at a particular time to the collector device. In addition, the Examiner interpreted the collector device, taught by Myla, as a stream processor as it receives and analyzes the telemetry data to determine a status of the network device and a health of the network device).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the memory leak management system (the stream processor), taught by Chen, to receive process information repeatedly at a predetermined process information frequency from the one or more collectors, as taught by Myla. This would be convenient to maintain and/or improve a performance of the network device (Myla, See Col. 2 lines 24-40).
In response to applicant’s argument that Myla is an improper reference because it solves a completely different technical problem using a completely different technical solution, the Examiner recognizes that obviousness may be established by combining or modifying the teachings of the prior art to produce the claimed invention where there is some teaching, suggestion, or motivation to do so found either in the references themselves or in the knowledge generally available to one of ordinary skill in the art. See In re Fine, 837 F.2d 1071, 5 USPQ2d 1596 (Fed. Cir. 1988), In re Jones, 958 F.2d 347, 21 USPQ2d 1941 (Fed. Cir. 1992), and KSR International Co. v. Teleflex, Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007). In this case, Chen’s art relates to agent(s) (interpreted as the claimed one or more collectors), which collect process related performance counters associated with specific processes and provide the performance counters to memory leak management system 106 (interpreted as the claimed stream processor) for further analysis (Chen, See Parag. [0038-0040]). Myla teaches a collector device (interpreted as the claimed stream processor) that generates and sends a request for telemetry data (e.g., one or more counters) to a network device (interpreted as the claimed one or more collectors) that collects and sends the delta values of the telemetry data to the collector device, where the collector device analyzes the received telemetry data (Myla, See Col. 1 lines 24-37 and Col. 4 lines 3-14). Both Chen and Myla relate to collecting performance data and sending it to a stream processor for analysis.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the memory leak management system (the stream processor), taught by Chen, to receive process information repeatedly at a predetermined process information frequency from the one or more collectors, as taught by Myla. This would be convenient to maintain and/or improve a performance of the network device (Myla, See Col. 2 lines 24-40).
Regarding Applicant’s argument that “Myla's "collector device" is a passive data sink; it simply receives telemetry data that is pushed to it by the "network device." The intelligent actor in Myla, the component that determines timing, packages the data, and initiates the transfer, is the network device itself, not the collector device ... Myla's device is a passive data aggregator for general analysis; the claimed stream processor is an active, event-driven decision-making engine ... Myla's "collector device" does not actively request data,” the Examiner respectfully disagrees. Myla's "collector device" sends a request for telemetry data (Myla, See Col. 4 lines 3-14; the collector device may generate and send the request for telemetry data ... the collector device may send the request for the telemetry data after the network device has been connected to the network, after one or more settings of the network device have changed, and/or the like, for the predetermined quantity of time (e.g., after ten seconds, 30 seconds, one minute, ten minutes, and/or the like)); in addition, Myla's "collector device" performs monitoring, where Myla's "collector device" analyzes the received telemetry data to make a determination about the status and health of the network device, and based on this determination, Myla's "collector device" sends instructions to the network device to maintain and/or improve performance (Myla, See Col. 1 lines 24-37). Therefore, Myla's "collector device" processes the received telemetry data to make a determination; thus, Myla's "collector device" is a decision-making engine and not a passive data sink that simply receives telemetry data that is pushed to it, as argued by the Applicant.
Claim Objections
Claim 1 is objected to because of the following informality:
“sending a new sampling strategy to the one or more collector ...” should read (Examiner’s suggestion) “sending a new sampling strategy to the one or more collectors ...”
Claim 7 is objected to because of the following informality:
“one or more instructions that, when executed by one ore more processors ... cause one or more processors to:” should read (Examiner’s suggestion) “one or more instructions that, when executed by one or more processors ... cause the one or more processors to:”
Claim 13 is objected to because of the following informality:
“request, by the one or more collectors, the process information repeatedly from the device for a predetermined process information period of time; and receive, by the stream processor, process information repeatedly at a predetermined process information frequency from the one or more collectors” should read (Examiner’s suggestion) “request, by the one or more collectors, [[the]] process information repeatedly from the device for a predetermined process information period of time; and receive, by the stream processor, the process information repeatedly at a predetermined process information frequency from the one or more collectors”
Claim 15 is objected to because of the following informality:
“The system of claim 15 ...” should read (Examiner’s suggestion) “The system of claim 13 ...”
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 7 and 9-12 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claim 7 recites:
(A) “one or more instructions that, when executed by one or more processors of a device, cause one or more processors to:” and
(B) “provide one or more collectors which periodically request memory utilization from a device”
It is unclear whether “a device” in (A) is the same as “a device” in (B). Therefore, the Examiner is unable to determine the metes and bounds of the claim language. For examination purposes, the Examiner interprets “a device” in (A) and “a device” in (B) to be different devices, where “a device” in (A) represents a system including the one or more instructions to be executed by one or more processors to perform the teachings of claim 7.
In addition, claim 7 recites “requesting, by the one or more collectors, process information repeatedly from the device for a predetermined process information period of time.” It is unclear to which of “a device” in (A) and (B) “the device” refers. Therefore, the Examiner is unable to determine the metes and bounds of the claim language.
For examination purposes, the Examiner interprets “the device” to refer to “a device” in (B) of Claim 7.
Claims 9-12 are rejected under 35 U.S.C. 112(b) as they depend from rejected claim 7.
In addition, claim 10 recites:
(C) “wherein the one or more instructions further cause the device to:” and
(D) “periodically requesting memory utilization from the device, by the one or more collectors”
It is unclear to which of “a device” in (A) and (B) of Claim 7, as presented above, each instance of “the device” in (C) and (D) of Claim 10 refers. Therefore, the Examiner is unable to determine the metes and bounds of the claim language.
For examination purposes, the Examiner interprets “the device” in (C) of Claim 10 to refer to “a device” in (A) of Claim 7; and “the device” in (D) of Claim 10 to refer to “a device” in (B) of Claim 7.
Examiner’s note:
To overcome the 35 U.S.C. 112(b) rejections for claims 7 and 9-12 with regards to “a device” in (A) and (B) of Claim 7, as presented above, the Examiner suggests:
Amend claim 7 to recite: “one or more instructions that, when executed by one or more processors ...”
Amend claim 10 to recite: “wherein the one or more instructions further cause the one or more processors to:”
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 3, 5-7, 9, 11-13, 15, and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Chen et al. (Pub. No. US 2021/0334186), hereinafter Chen; in view of Jueping (Pub. CN115037621A, published on 09/09/2022); and further in view of Myla et al. (Patent No. US 11,405,261), hereinafter Myla.
Claim 1. Chen discloses [a] method, comprising:
providing one or more collectors which periodically request memory utilization from a device (See Parag. [0038-0040]; agents 304 (one or more collectors) may refer to different types of agents that locally generate or sample a state of memory usage on the host node 302 (device) ... the agents 304 may include a monitoring agent that collects process related performance counters associated with specific processes … The memory usage data 306 may refer to various types of information indicative of memory usage on the host nodes 302; the memory usage data 306 may include performance counters indicative of memory usage for specific processes ... the agents 304 may obtain or capture snapshots of memory usage at points in time representative of allocation of memory blocks at the specific points in time; the agents 304 may capture a current status of memory allocation at five-minute intervals. See also Fig. 6. Examiner’s interpretation: In the context of network monitoring or application performance, obtaining snapshot data often involves making a request to retrieve that data. Therefore, the Examiner interpreted one or more collectors periodically request memory utilization as the agents (monitoring agent) obtain or capture snapshots of memory usage at points in time representative of allocation of memory blocks at the specific points in time; the agents may capture a current status of memory allocation at five-minute intervals);
receiving, by a stream processor, memory utilization from the one or more collectors (See Parag. [0038]; the agents 304 may refer to different types of agents that provide memory usage data 306 to the memory leak management system 106 (a stream processor) (e.g., aggregation system 202) for further analysis. See also Parag. [0015], Fig. 2, and Fig. 6);
monitoring, by the stream processor, if memory utilization evaluated over a predetermined time crosses a predetermined threshold (See Parag. [0045]; the data aggregator 204 can provide aggregated data 308 to a time interval manager 206 (within the memory leak management system 106) to identify time intervals (predetermined time) for the memory usage data 306. See Parag. [0048-0049]; … the data aggregator 204 applies an algorithm to the aggregated memory usage data to determine a dynamic severity score corresponding to a current status of the memory usage data. Over time, as the severity score increases, the data aggregator 204 may determine that the aggregated memory usage data results in a severity score over a threshold value (crosses a predetermined threshold) ... the time interval manager 206 can provide an indication of the time interval(s) 310 to the data aggregator 204 ... the data aggregator 204 can provide a subset of aggregated data 308 limited to memory usage data 306 for the identified time interval 310 to the diagnosis and mitigation system 210. See also Parag. [0092]; aggregating the memory usage data over one or more predetermined intervals (predetermined time) to determine a subset of host nodes from the plurality of host nodes predicted to have memory leaks based on memory usage data for the subset of host nodes satisfying one or more impact metrics. The one or more impact metrics may include one or more of a threshold increase in memory usage over a relevant time interval, a threshold increase in memory usage over a short duration of time ... See also Parag. [0043] [0046-0047] [0049] [0068] [0085-0088], Fig. 2, Fig. 3A-B, Fig. 5A-C, and Fig. 6);
sending data downstream to a data sink for persistence (See Parag. [0034]; The monitor and reporting system 216 (within the memory leak management system 106) may receive memory usage data and provide further analysis in connection with diagnosing and mitigating memory leaks on host nodes; the monitor and reporting system 216 can utilize third-party analysis systems to perform a more thorough analysis of diagnosis information and/or memory usage information to develop a detailed report including information associated with specific host nodes and/or processes that may be used in preventing or otherwise mitigating memory impact events across nodes of the cloud computing system. See Parag. [0035]; the memory leak management system 106 may include a data storage 218. The data storage 218 may include any information associated with respective host nodes and/or processes and services hosted by the respective host nodes. Examiner’s interpretation: The Examiner interpreted sending data downstream to a data sink for persistence as sending data downstream for a destination for storage);
sending a new sampling strategy to the one or more collector, if the memory utilization evaluated over the predetermined time crosses the predetermined threshold (See Parag. [0048-0054] and Fig. 3A-B; the data aggregator 204 provides the aggregated subset 312 (and the associated severity score) to the diagnosis and mitigation system 210 based on the severity score exceeding the threshold value … the diagnosis manager 212 can evaluate the aggregated subset 312 of memory usage data to determine a diagnosis command 314 including instructions for the host nodes associated with the aggregated subset 312 of memory usage data … the diagnosis command 314 may include an indication of a memory leak or other memory impact event as well as instructions indicating one or more diagnosis actions that the candidate node 322 can perform to enable the memory leak management system 106 to effectively monitor further memory usage data … In response to receiving the diagnosis command 314, agents 324 (the one or more collector) on the candidate node(s) 322 can collect additional memory usage data. In particular, each of multiple agents 324 can collect or sample different types of memory usage data; the agents 324 can sample memory usage data at predetermined intervals (e.g., every five minutes). See also Fig. 2. Examiner’s interpretation: The Examiner interpreted “sending a new sampling strategy to the one or more collector” as sending the diagnosis command to agent(s) to effectively monitor/collect additional memory usage data and perform new sampling using the additional memory usage data. In addition, the command is based on the severity score exceeding the threshold value over a time interval);
requesting, by the one or more collectors, process information repeatedly from the device for a predetermined process information period of time (See Parag. [0038-0040]; the agents 304 (one or more collectors) may include a monitoring agent that collects process related performance counters associated with specific processes … The memory usage data 306 may refer to various types of information indicative of memory usage on the host nodes 302 (device). For example, the memory usage data 306 may include performance counters indicative of memory usage for specific processes … the agents 304 may obtain or capture snapshots of memory usage at points in time representative of allocation of memory blocks at the specific points in time; the agents 304 may capture a current status of memory allocation at five-minute intervals (a predetermined process information period of time). Examiner’s interpretation: In the context of network monitoring or application performance, obtaining snapshot data often involves making a request to retrieve that data).
Chen doesn’t explicitly disclose that the memory utilization evaluated over a predetermined time is an average memory utilization; [and] receiving, by the stream processor, the process information repeatedly at a predetermined process information frequency from the one or more collectors.
However, Jueping discloses monitoring, by the stream processor, if an average memory utilization evaluated over a predetermined time crosses a predetermined threshold (See Page 5, Lines 8-17 and 30-36; Get the CPU usage and memory usage of the controller; Define the time period T1 (predetermined time), and calculate the average memory usage and CPU usage in the previous T1 time period at the current time point; set the T1 time to 5 minutes … a third threshold (predetermined threshold) is set for the average usage rate of the memory, that is, within a certain period of time … when the average usage rate of the memory is greater than or equal to the third threshold, it indicates that the current resource load of the controller is relatively large).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the data aggregator determining whether the aggregated memory usage data, over a predetermined interval, results in a severity score over a threshold value, taught by Chen, to monitor whether an average memory utilization evaluated over a predetermined time crosses a predetermined threshold, as taught by Jueping. This would be convenient to indicate that the current resource load of the controller is relatively large; thus, no new network device audit operations are added (Jueping, See Page 5, Lines 30-36).
Chen in view of Jueping doesn’t explicitly disclose receiving, by the stream processor, the process information repeatedly at a predetermined process information frequency from the one or more collectors.
However, Myla discloses receiving, by the stream processor, the process information repeatedly at a predetermined process information frequency from the one or more collectors (See Col. 4 lines 60-67 and Col. 5 lines 1-9; network device (the one or more collectors) may determine a first time interval (a predetermined process information frequency) (e.g., every 2 seconds, every 5 seconds, every 10 seconds, every 30 seconds, and/or the like) to collect and send the delta values of the telemetry data to the collector device (stream processor). See Col. 2 lines 24-40; The collector device analyzes the telemetry data to determine a status of the network device, a health of the network device, and/or the like ... Examiner’s interpretation: The Examiner interpreted the network device, taught by Myla, as a collector as it collects and sends the telemetry data including information of the one or more resources of the network device at a particular time to the collector device. In addition, the Examiner interpreted the collector device, taught by Myla, as a stream processor as it receives and analyzes the telemetry data to determine a status of the network device and a health of the network device).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the memory leak management system (the stream processor), taught by Chen in view of Jueping, to receive process information repeatedly at a predetermined process information frequency from the one or more collectors, as taught by Myla. This would be convenient to maintain and/or improve a performance of the network device (Myla, See Col. 2 lines 24-40).
Claim 3. Chen in view of Jueping and Myla discloses [t]he method of claim 1,
Chen discloses the method further comprising generating alerts, by the stream processor, and sending the generated alerts to the data sink (See Parag. [0081] and Fig. 4; The memory leak management system 106 (the stream processor) can utilize the monitored data in a variety of ways; the job monitor 422 (within the memory leak management system 106) can generate a usage report 442 including information about customer impact, such as a rollout correlation metric (e.g., correlation between rollout of various applications and instances of memory leaks (generating alerts). See Parag. [0082]; the job monitor 422 may provide a rollout stop signal to a health server 440 to indicate that a particular application rollout has a problem and that the rollout should be stopped to any and all host nodes (sending the generated alerts to the data sink)).
Claim 5. Chen in view of Jueping and Myla discloses [t]he method of claim 3,
Chen doesn’t explicitly disclose wherein the predetermined time is 5 minutes.
However, Jueping discloses wherein the predetermined time is 5 minutes (See Page 5, Lines 8-17 and 30-36; Define the time period T1 (predetermined time), and calculate the average memory usage and CPU usage in the previous T1 time period at the current time point; set the T1 time to 5 minutes …).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the identified time interval (predetermined interval), taught by Chen, to be a predetermined time of 5 minutes, as taught by Jueping. This would be convenient to determine if the average usage rate of the memory is greater than or equal to the third threshold to indicate if the current resource load of the controller is relatively large; thus, no new network device audit operations are added (Jueping, See Page 5, Lines 30-36).
Claim 6. Chen in view of Jueping and Myla discloses [t]he method of claim 5,
Chen further discloses wherein the predetermined process information period of time is approximately 5 minutes (See Parag. [0038-0040]; the agents 304 may obtain or capture snapshots of memory usage at points in time representative of allocation of memory blocks at the specific points in time; the agents 304 may capture a current status of memory allocation at five-minute intervals (the predetermined process information period of time)).
Chen in view of Jueping doesn’t explicitly disclose the predetermined process information frequency is approximately 2 seconds.
However, Myla discloses the predetermined process information frequency is approximately 2 seconds (See Col. 4 lines 60-67 and Col. 5 lines 1-9; network device may determine a first time interval (e.g., every 2 seconds) (the predetermined process information frequency is approximately 2 seconds) to collect and send the delta values of the telemetry data to the collector device).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the memory leak management system (the stream processor), taught by Chen in view of Jueping, to receive process information repeatedly at a predetermined process information frequency of approximately 2 seconds, as taught by Myla. This would be convenient to maintain and/or improve a performance of the network device (Myla, See Col. 2 lines 24-40).
Claim 7. Chen discloses [a] non-transitory computer-readable medium storing a set of instructions, the set of instructions comprising:
one or more instructions that, when executed by one or more processors of a device, cause one or more processors to (See Parag. [0089]; a non-transitory computer-readable medium can include instructions that, when executed by one or more processors, cause a computing device to perform the acts of for diagnosing and mitigating memory leaks on host node(s). See also Fig. 2):
provide one or more collectors which periodically request memory utilization from a device (See Parag. [0038-0040]; agents 304 (one or more collectors) may refer to different types of agents that locally generate or sample a state of memory usage on the host node 302 (device) ... the agents 304 may include a monitoring agent that collects process related performance counters associated with specific processes … The memory usage data 306 may refer to various types of information indicative of memory usage on the host nodes 302; the memory usage data 306 may include performance counters indicative of memory usage for specific processes ... the agents 304 may obtain or capture snapshots of memory usage at points in time representative of allocation of memory blocks at the specific points in time; the agents 304 may capture a current status of memory allocation at five-minute intervals. See also Fig. 6. Examiner’s interpretation: In the context of network monitoring or application performance, obtaining snapshot data often involves making a request to retrieve that data. Therefore, the Examiner interpreted one or more collectors periodically request memory utilization as the agents (monitoring agent) obtain or capture snapshots of memory usage at points in time representative of allocation of memory blocks at the specific points in time; the agents may capture a current status of memory allocation at five-minute intervals);
receive, by a stream processor, memory utilization from the one or more collectors (See Parag. [0038]; the agents 304 may refer to different types of agents that provide memory usage data 306 to the memory leak management system 106 (a stream processor) (e.g., aggregation system 202) for further analysis. See also Parag. [0015], Fig. 2, and Fig. 6);
monitor, by the stream processor, if memory utilization evaluated over a predetermined time crosses a predetermined threshold (See Parag. [0045]; the data aggregator 204 can provide aggregated data 308 to a time interval manager 206 (within the memory leak management system 106) to identify time intervals (predetermined time) for the memory usage data 306. See Parag. [0048-0049]; … the data aggregator 204 applies an algorithm to the aggregated memory usage data to determine a dynamic severity score corresponding to a current status of the memory usage data. Over time, as the severity score increases, the data aggregator 204 may determine that the aggregated memory usage data results in a severity score over a threshold value (crosses a predetermined threshold) ... the time interval manager 206 can provide an indication of the time interval(s) 310 to the data aggregator 204 ... the data aggregator 204 can provide a subset of aggregated data 308 limited to memory usage data 306 for the identified time interval 310 to the diagnosis and mitigation system 210. See also Parag. [0092]; aggregating the memory usage data over one or more predetermined intervals (predetermined time) to determine a subset of host nodes from the plurality of host nodes predicted to have memory leaks based on memory usage data for the subset of host nodes satisfying one or more impact metrics. The one or more impact metrics may include one or more of a threshold increase in memory usage over a relevant time interval, a threshold increase in memory usage over a short duration of time ... See also Parag. [0043] [0046-0047] [0049] [0068] [0085-0088], Fig. 2, Fig. 3A-B, Fig. 5A-C, and Fig. 6);
send data downstream to a data sink for persistence (See Parag. [0034]; The monitor and reporting system 216 (within the memory leak management system 106) may receive memory usage data and provide further analysis in connection with diagnosing and mitigating memory leaks on host nodes; the monitor and reporting system 216 can utilize third-party analysis systems to perform a more thorough analysis of diagnosis information and/or memory usage information to develop a detailed report including information associated with specific host nodes and/or processes that may be used in preventing or otherwise mitigating memory impact events across nodes of the cloud computing system. See Parag. [0035]; the memory leak management system 106 may include a data storage 218. The data storage 218 may include any information associated with respective host nodes and/or processes and services hosted by the respective host nodes. Examiner’s interpretation: The Examiner interpreted sending data downstream to a data sink for persistence as sending data downstream for a destination for storage);
send a new sampling strategy to the one or more collectors, if the memory utilization evaluated over the predetermined time crosses the predetermined threshold (See Parag. [0048-0054] and Fig. 3A-B; the data aggregator 204 provides the aggregated subset 312 (and the associated severity score) to the diagnosis and mitigation system 210 based on the severity score exceeding the threshold value … the diagnosis manager 212 can evaluate the aggregated subset 312 of memory usage data to determine a diagnosis command 314 including instructions for the host nodes associated with the aggregated subset 312 of memory usage data … the diagnosis command 314 may include an indication of a memory leak or other memory impact event as well as instructions indicating one or more diagnosis actions that the candidate node 322 can perform to enable the memory leak management system 106 to effectively monitor further memory usage data … In response to receiving the diagnosis command 314, agents 324 (the one or more collectors) on the candidate node(s) 322 can collect additional memory usage data. In particular, each of multiple agents 324 can collect or sample different types of memory usage data; the agents 324 can sample memory usage data at predetermined intervals (e.g., every five minutes). See also Fig. 2. Examiner’s interpretation: The Examiner interpreted “sending a new sampling strategy to the one or more collectors” as sending the diagnosis command to agent(s) to effectively monitor/collect additional memory usage data and perform new sampling using the additional memory usage data. In addition, the command is based on the severity score exceeding the threshold value over a time interval).
request, by the one or more collectors, process information repeatedly from the device for a predetermined process information period of time (See Parag. [0038-0040]; the agents 304 (one or more collectors) may include a monitoring agent that collects process related performance counters associated with specific processes … The memory usage data 306 may refer to various types of information indicative of memory usage on the host nodes 302 (device). For example, the memory usage data 306 may include performance counters indicative of memory usage for specific processes … the agents 304 may obtain or capture snapshots of memory usage at points in time representative of allocation of memory blocks at the specific points in time; the agents 304 may capture a current status of memory allocation at five-minute intervals (a predetermined process information period of time). Examiner’s interpretation: In the context of network monitoring or application performance, obtaining snapshot data often involves making a request to retrieve that data).
Chen doesn’t explicitly disclose the evaluated memory utilization over a predetermined time is an average memory utilization; [and] receive, by the stream processor, the process information repeatedly at a predetermined process information frequency from the one or more collectors.
However, Jueping discloses monitor, by the stream processor, if an average memory utilization evaluated over a predetermined time crosses a predetermined threshold (See Page 5, Lines 8-17 and 30-36; Get the CPU usage and memory usage of the controller; Define the time period T1 (predetermined time), and calculate the average memory usage and CPU usage in the previous T1 time period at the current time point; set the T1 time to 5 minutes … a third threshold (predetermined threshold) is set for the average usage rate of the memory, that is, within a certain period of time … when the average usage rate of the memory is greater than or equal to the third threshold, it indicates that the current resource load of the controller is relatively large).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the data aggregator determining if the aggregated memory usage data, over a predetermined interval, results in a severity score over a threshold value, taught by Chen, to monitor if an average memory utilization evaluated over a predetermined time crosses a predetermined threshold, as taught by Jueping. This would be convenient to indicate that the current resource load of the controller is relatively large; thus, no new network device audit operations are added (Jueping, See Page 5, Lines 30-36).
Chen in view of Jueping doesn’t explicitly disclose receive, by the stream processor, the process information repeatedly at a predetermined process information frequency from the one or more collectors.
However, Myla discloses receive, by the stream processor, the process information repeatedly at a predetermined process information frequency from the one or more collectors (See Col. 4 lines 60-67 and Col. 5 lines 1-9; network device (the one or more collectors) may determine a first time interval (a predetermined process information frequency) (e.g., every 2 seconds, every 5 seconds, every 10 seconds, every 30 seconds, and/or the like) to collect and send the delta values of the telemetry data to the collector device (stream processor). See Col. 2 lines 24-40; The collector device analyzes the telemetry data to determine a status of the network device, a health of the network device, and/or the like ... Examiner’s interpretation: The Examiner interpreted the network device, taught by Myla, as a collector as it collects and sends the telemetry data including information of the one or more resources of the network device at a particular time to the collector device. In addition, the Examiner interpreted the collector device, taught by Myla, as a stream processor as it receives and analyzes the telemetry data to determine a status of the network device and a health of the network device).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the memory leak management system (the stream processor), taught by Chen in view of Jueping, to receive process information repeatedly at a predetermined process information frequency from the one or more collectors, as taught by Myla. This would be convenient to maintain and/or improve a performance of the network device (Myla, See Col. 2 lines 24-40).
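For illustration only, the threshold-triggered limitation mapped above (Chen's diagnosis command read as "sending a new sampling strategy to the one or more collectors") could be sketched as follows. The `Collector` class, the `apply_strategy` method, and the strategy fields are hypothetical illustrations of the claimed behavior, not drawn from any cited reference.

```python
class Collector:
    """Hypothetical stand-in for an agent/collector on a host node."""

    def __init__(self):
        self.strategy = None  # last sampling strategy received

    def apply_strategy(self, strategy):
        self.strategy = strategy


def on_window_evaluated(avg_utilization, threshold, collectors):
    """Stream-processor side: if the average memory utilization evaluated
    over the predetermined time crosses the predetermined threshold, push a
    more aggressive sampling strategy (analogous to Chen's diagnosis
    command) to every collector. Returns the new strategy, or None if the
    threshold was not crossed."""
    if avg_utilization < threshold:
        return None  # below threshold: collectors keep their current strategy
    new_strategy = {
        "interval_seconds": 2,      # faster per-process sampling
        "duration_seconds": 300,    # collect for a bounded period
        "metrics": ["per_process_memory"],
    }
    for collector in collectors:
        collector.apply_strategy(new_strategy)
    return new_strategy
```

In this sketch, the strategy update is issued only on a threshold crossing, mirroring the mapping above in which the diagnosis command is sent based on the severity score exceeding the threshold value over a time interval.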
Claim 9 is taught by Chen in view of Jueping and Myla as described for claim 3.
Claim 11 is taught by Chen in view of Jueping and Myla as described for claim 5.
Claim 12 is taught by Chen in view of Jueping and Myla as described for claim 6.
Claim 13. Chen discloses [a] system comprising:
one or more processors configured to (See Parag. [0089]; one or more processors):
provide one or more collectors which periodically request memory utilization from a device (See Parag. [0038-0040]; agents 304 (one or more collectors) may refer to different types of agents that locally generate or sample a state of memory usage on the host node 302 (device) ... the agents 304 may include a monitoring agent that collects process related performance counters associated with specific processes … The memory usage data 306 may refer to various types of information indicative of memory usage on the host nodes 302; the memory usage data 306 may include performance counters indicative of memory usage for specific processes ... the agents 304 may obtain or capture snapshots of memory usage at points in time representative of allocation of memory blocks at the specific points in time; the agents 304 may capture a current status of memory allocation at five-minute intervals. See also Fig. 6. Examiner’s interpretation: In the context of network monitoring or application performance, obtaining snapshot data often involves making a request to retrieve that data. Therefore, the Examiner interpreted one or more collectors periodically request memory utilization as the agents (monitoring agent) obtain or capture snapshots of memory usage at points in time representative of allocation of memory blocks at the specific points in time; the agents may capture a current status of memory allocation at five-minute intervals);
receive, by a stream processor, memory utilization from the one or more collectors (See Parag. [0038]; the agents 304 may refer to different types of agents that provide memory usage data 306 to the memory leak management system 106 (a stream processor) (e.g., aggregation system 202) for further analysis. See also Parag. [0015], Fig. 2, and Fig. 6);
monitor, by the stream processor, if memory utilization evaluated over a predetermined time crosses a predetermined threshold (See Parag. [0045]; the data aggregator 204 can provide aggregated data 308 to a time interval manager 206 (within the memory leak management system 106) to identify time intervals (predetermined time) for the memory usage data 306. See Parag. [0048-0049]; … the data aggregator 204 applies an algorithm to the aggregated memory usage data to determine a dynamic severity score corresponding to a current status of the memory usage data. Over time, as the severity score increases, the data aggregator 204 may determine that the aggregated memory usage data results in a severity score over a threshold value (crosses a predetermined threshold) ... the time interval manager 206 can provide an indication of the time interval(s) 310 to the data aggregator 204 ... the data aggregator 204 can provide a subset of aggregated data 308 limited to memory usage data 306 for the identified time interval 310 to the diagnosis and mitigation system 210. See also Parag. [0092]; aggregating the memory usage data over one or more predetermined intervals (predetermined time) to determine a subset of host nodes from the plurality of host nodes predicted to have memory leaks based on memory usage data for the subset of host nodes satisfying one or more impact metrics. The one or more impact metrics may include one or more of a threshold increase in memory usage over a relevant time interval, a threshold increase in memory usage over a short duration of time ... See also Parag. [0043] [0046-0047] [0049] [0068] [0085-0088], Fig. 2, Fig. 3A-B, Fig. 5A-C, and Fig. 6);
send data downstream to a data sink for persistence (See Parag. [0034]; The monitor and reporting system 216 (within the memory leak management system 106) may receive memory usage data and provide further analysis in connection with diagnosing and mitigating memory leaks on host nodes; the monitor and reporting system 216 can utilize third-party analysis systems to perform a more thorough analysis of diagnosis information and/or memory usage information to develop a detailed report including information associated with specific host nodes and/or processes that may be used in preventing or otherwise mitigating memory impact events across nodes of the cloud computing system. See Parag. [0035]; the memory leak management system 106 may include a data storage 218. The data storage 218 may include any information associated with respective host nodes and/or processes and services hosted by the respective host nodes. Examiner’s interpretation: The Examiner interpreted sending data downstream to a data sink for persistence as sending data downstream for a destination for storage);
send a new sampling strategy to the one or more collectors, if the memory utilization evaluated over the predetermined time crosses the predetermined threshold (See Parag. [0048-0054] and Fig. 3A-B; the data aggregator 204 provides the aggregated subset 312 (and the associated severity score) to the diagnosis and mitigation system 210 based on the severity score exceeding the threshold value … the diagnosis manager 212 can evaluate the aggregated subset 312 of memory usage data to determine a diagnosis command 314 including instructions for the host nodes associated with the aggregated subset 312 of memory usage data … the diagnosis command 314 may include an indication of a memory leak or other memory impact event as well as instructions indicating one or more diagnosis actions that the candidate node 322 can perform to enable the memory leak management system 106 to effectively monitor further memory usage data … In response to receiving the diagnosis command 314, agents 324 (the one or more collectors) on the candidate node(s) 322 can collect additional memory usage data. In particular, each of multiple agents 324 can collect or sample different types of memory usage data; the agents 324 can sample memory usage data at predetermined intervals (e.g., every five minutes). See also Fig. 2. Examiner’s interpretation: The Examiner interpreted “sending a new sampling strategy to the one or more collectors” as sending the diagnosis command to agent(s) to effectively monitor/collect additional memory usage data and perform new sampling using the additional memory usage data. In addition, the command is based on the severity score exceeding the threshold value over a time interval);
request, by the one or more collectors, the process information repeatedly from the device for a predetermined process information period of time (See Parag. [0038-0040]; the agents 304 (one or more collectors) may include a monitoring agent that collects process related performance counters associated with specific processes … The memory usage data 306 may refer to various types of information indicative of memory usage on the host nodes 302 (device). For example, the memory usage data 306 may include performance counters indicative of memory usage for specific processes … the agents 304 may obtain or capture snapshots of memory usage at points in time representative of allocation of memory blocks at the specific points in time; the agents 304 may capture a current status of memory allocation at five-minute intervals (a predetermined process information period of time). Examiner’s interpretation: In the context of network monitoring or application performance, obtaining snapshot data often involves making a request to retrieve that data).
Chen doesn’t explicitly disclose the evaluated memory utilization over a predetermined time is an average memory utilization; [and] receive, by the stream processor, process information repeatedly at a predetermined process information frequency from the one or more collectors.
However, Jueping discloses monitor, by the stream processor, if an average memory utilization evaluated over a predetermined time crosses a predetermined threshold (See Page 5, Lines 8-17 and 30-36; Get the CPU usage and memory usage of the controller; Define the time period T1 (predetermined time), and calculate the average memory usage and CPU usage in the previous T1 time period at the current time point; set the T1 time to 5 minutes … a third threshold (predetermined threshold) is set for the average usage rate of the memory, that is, within a certain period of time … when the average usage rate of the memory is greater than or equal to the third threshold, it indicates that the current resource load of the controller is relatively large).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the data aggregator determining if the aggregated memory usage data, over a predetermined interval, results in a severity score over a threshold value, taught by Chen, to monitor if an average memory utilization evaluated over a predetermined time crosses a predetermined threshold, as taught by Jueping. This would be convenient to indicate that the current resource load of the controller is relatively large; thus, no new network device audit operations are added (Jueping, See Page 5, Lines 30-36).
Chen in view of Jueping doesn’t explicitly disclose receive, by the stream processor, process information repeatedly at a predetermined process information frequency from the one or more collectors.
However, Myla discloses receive, by the stream processor, process information repeatedly at a predetermined process information frequency from the one or more collectors (See Col. 4 lines 60-67 and Col. 5 lines 1-9; network device (the one or more collectors) may determine a first time interval (a predetermined process information frequency) (e.g., every 2 seconds, every 5 seconds, every 10 seconds, every 30 seconds, and/or the like) to collect and send the delta values of the telemetry data to the collector device (stream processor). See Col. 2 lines 24-40; The collector device analyzes the telemetry data to determine a status of the network device, a health of the network device, and/or the like ... Examiner’s interpretation: The Examiner interpreted the network device, taught by Myla, as a collector as it collects and sends the telemetry data including information of the one or more resources of the network device at a particular time to the collector device. In addition, the Examiner interpreted the collector device, taught by Myla, as a stream processor as it receives and analyzes the telemetry data to determine a status of the network device and a health of the network device).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the memory leak management system (the stream processor), taught by Chen in view of Jueping, to receive process information repeatedly at a predetermined process information frequency from the one or more collectors, as taught by Myla. This would be convenient to maintain and/or improve a performance of the network device (Myla, See Col. 2 lines 24-40).
Claim 15 is taught by Chen in view of Jueping and Myla as described for claim 3.
Claim 17 is taught by Chen in view of Jueping and Myla as described for claim 5.
Claim 18 is taught by Chen in view of Jueping and Myla as described for claim 6.
Claims 4, 10, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Chen et al. (Pub. No. US 2021/0334186), hereinafter Chen; in view of Jueping (Pub. CN115037621A, published on 09/09/2022); further in view of Myla et al. (Patent No. US 11,405,261), hereinafter Myla; and further in view of Mazzaferri et al. (Pub. No. US 2007/0174429), hereinafter Mazzaferri.
Claim 4. Chen in view of Jueping and Myla discloses [t]he method of claim 3,
Chen discloses the method further comprising sending the memory utilization to the stream processor (See Parag. [0038]; the agents 304 may refer to different types of agents that provide memory usage data 306 to the memory leak management system 106 (the stream processor) (e.g., aggregation system 202) for further analysis).
The combination doesn’t explicitly disclose periodically requesting memory utilization from the device, by the one or more collectors, approximately every 30 seconds.
However, Mazzaferri discloses periodically requesting memory utilization from the device, by the one or more collectors, approximately every 30 seconds (See Parag. [0321]; operational meters are used by a LMS (load management subsystem) to measure server performance at predetermined intervals, which may be configured by an administrator. A LMS on each remote machine 30 in the machine farm 38 evaluates various performance metrics for the remote machine 30 for each predetermined period of time. For example, every thirty seconds, an evaluation of server load may include a query (periodically requesting) to operational meters for server's CPU utilization and memory utilization. See also Parag. [1179]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the monitoring agent (the one or more collectors) that collects process related performance counters associated with specific processes, taught by the combination, to periodically request memory utilization from the device approximately every 30 seconds, as taught by Mazzaferri. This would be convenient for providing a load management capability and managing overall server and network load to minimize response time to client requests (Mazzaferri, Parag. [0316]).
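For illustration only, the periodic-polling limitation mapped above (Mazzaferri's query to operational meters every thirty seconds) could be sketched as follows. The function name, parameter names, and the injectable `sleep` are hypothetical illustrations, not drawn from any cited reference.

```python
import time


def poll_memory_utilization(read_utilization, handle_sample,
                            interval_seconds=30, iterations=None,
                            sleep=time.sleep):
    """Periodically request memory utilization from a device.

    read_utilization -- callable that queries the device for its current
                        memory utilization (the periodic request)
    handle_sample    -- callable that forwards each reading, e.g., to a
                        stream processor
    interval_seconds -- polling period (approximately every 30 seconds)
    iterations       -- optional cap on the number of polls (None = run forever)
    sleep            -- injectable delay function; defaults to time.sleep
    """
    polls = 0
    while iterations is None or polls < iterations:
        handle_sample(read_utilization())
        polls += 1
        sleep(interval_seconds)
```

Making the delay function injectable keeps the sketch testable without real 30-second waits; in deployment the default `time.sleep` would govern the polling period.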
Claim 10 is taught by Chen in view of Jueping, Myla, and Mazzaferri as described for claim 4.
Claim 16 is taught by Chen in view of Jueping, Myla, and Mazzaferri as described for claim 4.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Asadi et al. (Pub. No. US 2018/0121529) – Related art in the area of scaling for elastic query service system, (Parag. [0055]; metrics collector 405 may monitor resource usage (e.g., memory usage, central processing unit (CPU) usage, etc.) of master DB instance 160 and slave DB instances 165a-k. In some embodiments, metric collector 405 polls master DB instance 160 and slave DB instances 165a-k for their resource usage at defined intervals (e.g., once per second, once per thirty seconds, once per minute, once per five minutes, etc.) and stores the received resource usage information in metrics storage 420).
THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ABDELBASST TALIOUA whose telephone number is (571)272-4061. The examiner can normally be reached on Monday-Thursday 7:30 am - 5:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the Examiner’s supervisor, Oscar Louie can be reached on 571-270-1684. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Abdelbasst Talioua/Examiner, Art Unit 2445