Prosecution Insights
Last updated: April 19, 2026
Application No. 18/942,176

SYSTEM AND METHOD FOR DYNAMIC MONITORING

Non-Final OA (§102, §103)
Filed: Nov 08, 2024
Examiner: DAILEY, THOMAS J
Art Unit: 2458
Tech Center: 2400 — Computer Networks
Assignee: Samsung Electronics Co., Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 81% (Favorable)
OA Rounds: 1-2
To Grant: 3y 4m
With Interview: 95%

Examiner Intelligence

Career Allow Rate: 81%, above average (694 granted / 859 resolved; +22.8% vs TC avg)
Interview Lift: +14.6% for resolved cases with interview (a moderate ~+15% lift)
Avg Prosecution: 3y 4m typical timeline; 27 applications currently pending
Total Applications: 886 across all art units (career history)
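The headline figures in this card follow directly from the raw counts shown above. A minimal sketch, assuming the dashboard rounds to whole percentages (the rounding convention is not stated):

```python
# Reproduce the examiner's headline statistics from the raw counts
# above. Whole-percent rounding is an assumed convention.

granted = 694
resolved = 859

career_allow_rate = granted / resolved * 100        # ~80.8%
interview_lift = 14.6                               # percentage points

with_interview = career_allow_rate + interview_lift # ~95.4%

print(round(career_allow_rate))  # 81
print(round(with_interview))     # 95
```

This matches the 81% career allow rate and the 95% "with interview" figure reported elsewhere in this report.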

Statute-Specific Performance

§101: 11.8% (-28.2% vs TC avg)
§103: 50.3% (+10.3% vs TC avg)
§102: 18.8% (-21.2% vs TC avg)
§112: 11.5% (-28.5% vs TC avg)
Tech Center average is an estimate. Based on career data from 859 resolved cases.
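As a sanity check on the table above, every row implies the same Tech Center baseline. A minimal sketch, assuming the dashboard computes each delta as the examiner's rate minus the TC average (an assumption; the calculation is not documented):

```python
# Recover the implied Tech Center average from each statute's rate
# and its "vs TC avg" delta. Assumes delta = rate - tc_avg.

stats = {
    "101": (11.8, -28.2),
    "103": (50.3, 10.3),
    "102": (18.8, -21.2),
    "112": (11.5, -28.5),
}

for statute, (rate, delta) in stats.items():
    tc_avg = round(rate - delta, 1)
    print(f"§{statute}: implied TC avg = {tc_avg}%")  # 40.0% in every row
```

That all four rows resolve to the same 40.0% baseline suggests the dashboard compares every statute against a single Tech Center average rather than per-statute averages.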

Office Action

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Claims 1-20 are pending. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 11/8/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 2, 11, and 12 are rejected under 35 U.S.C. 102(a)(1)/(a)(2) as being anticipated by Tuffs (US Pub. No. 2023/0145807).
As to claim 1, Tuffs discloses a dynamic monitoring system comprising: a memory configured to load dynamic monitoring program; one or more processors configured to execute the dynamic monitoring program; and a network interface configured to receive metric data from a monitoring target resource, wherein the dynamic monitoring program is configured to, when executed by the one or more processors (Fig. 1 and Abstract), cause the dynamic monitoring system to perform operations comprising: obtaining a first representative value and a first standard deviation of the metric data measured at a first plurality of measurement times belonging to a first time window (Fig. 1 and [0037], particularly, “At block 401, a cloud monitoring system generates statistics for metric data within a current accumulation time window. Based on detecting the emit trigger, the cloud monitoring system generates statistical indicators of distribution or dispersion of the accumulated metric data. Examples of the statistical indicators include mean and standard deviation.”); obtaining a second representative value and a second standard deviation of the metric data measured at a second plurality of measurement times belonging to a second time window subsequent to the first time window (Fig. 1 and [0037], particularly, “At block 401, a cloud monitoring system generates statistics for metric data within a current accumulation time window. Based on detecting the emit trigger, the cloud monitoring system generates statistical indicators of distribution or dispersion of the accumulated metric data. 
Examples of the statistical indicators include mean and standard deviation.” the system continually loops, see [0046], particularly, “Asynchronously, operational flow continues to block 401 to obtain a next set of tagged metric data.”); increasing or decreasing feedback for the monitoring target resource, using at least one of (i) a first comparison result between the first representative value and the second representative value or (ii) a second comparison result between the first standard deviation and the second standard deviation ([0033], particularly, “The sustain filter 305 sustains metrics to track when outliers for the metrics are detected. Single time-point outliers of a metric are converted/sustained into multi time-point values of the metric for possible use with other filters in determining significance. Cloud metrics previously having outlier values can be of interest for future outlier behavior and can be sustained using the sustain filter 305…In this embodiment, all cloud metrics filtered as outliers by the feedback control loop at stages B1-BN have their sustain value increase to the baseline (e.g., 5 iterations) so as to track behavior for the corresponding cloud metrics over time.”); and adjusting a monitoring level for the monitoring target resource based on the feedback ([0033], particularly, “In this embodiment, all cloud metrics filtered as outliers by the feedback control loop at stages B1-BN have their sustain value increase to the baseline (e.g., 5 iterations) so as to track behavior for the corresponding cloud metrics over time.”).

As to claim 11, it is rejected under a rationale similar to that set forth in the rejection of claim 1.

As to claims 2 and 12, Tuffs discloses a number of measurement times of the first plurality of measurement times belonging to the first time window is greater than or equal to a number of measurement times of the second plurality of measurement times belonging to the second time window (Fig.
1 and [0037], particularly, “At block 401, a cloud monitoring system generates statistics for metric data within a current accumulation time window. Based on detecting the emit trigger, the cloud monitoring system generates statistical indicators of distribution or dispersion of the accumulated metric data. Examples of the statistical indicators include mean and standard deviation.” the system continually loops, see [0046], particularly, “Asynchronously, operational flow continues to block 401 to obtain a next set of tagged metric data.”).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 3-10 and 13-20 are rejected under 35 U.S.C. 103 as being unpatentable over Tuffs in view of Iyengar (US Pub. No.
2020/0112513).

As to claims 3 and 13, Tuffs discloses the parent claim but does not disclose a number of measurement times of the first plurality of measurement times belonging to the first time window is greater than a number of measurement times of the second plurality of measurement times belonging to the second time window. However, Iyengar discloses a number of measurement times of the first plurality of measurement times belonging to a first time window is greater than a number of measurement times of a second plurality of measurement times belonging to a second time window ([0039], “The IMS can reduce monitoring overhead by reducing a frequency of monitoring, and can increase monitoring accuracy by increasing a frequency of monitoring. In addition, as mentioned earlier, the IMS can offer multiple monitoring methods, wherein more accurate monitoring methods consume more overhead. The IMS can balance accuracy and performance by choosing a monitoring method based on its accuracy level and overhead consumed,” and [0046], “In step 106, the frequency of the monitoring and the frequency of the storing are checked against a trigger condition to change the frequency of the monitoring and the frequency of the storing. An administrator can define a trigger condition(s), which would indicate corrective action, needs to be taken. For example, if a time taken by a storage system is too high, a trigger condition can be set to change the time (e.g., change the frequency). When a trigger condition occurs, a user-defined method is invoked to improve the situation. For example, in order to reduce time taken by storage system, increase amount of caching.” See, for example, [0034] for “monitoring intervals” i.e. time windows). Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the application to combine the teachings of Tuffs and Iyengar in order to allow the system to more efficiently use limited resources.
As to claims 4 and 14, the teachings of Tuffs and Iyengar as combined for the same reasons set forth in claim 3’s rejection further disclose based on an outlier occurrence frequency of the metric data for the monitoring target resource being less than a reference value, the number of measurement times of the first plurality of measurement times belonging to the first time window is greater than the number of measurement times of the second plurality of measurement times belonging to the second time window (Tuffs, [0020], particularly, “The cloud metric aggregator 151 can average or otherwise compute statistics (such as min/mean/max/standard-deviation) for metric values across time windows. For instance, the cloud metric aggregator 151 can average CPU utilization over 15-minute periods. The granularity of the time windows can depend on available computing resources to filter and store metric values and the desired level of outlier detection. For some metrics that are known to have low variability over short time windows, the cloud metric aggregator 151 can extend the time windows.” And Iyengar, [0046]).

As to claims 5 and 15, the teachings of Tuffs and Iyengar as combined for the same reasons set forth in claim 3’s rejection further disclose in a predetermined first time band, the number of measurement times of the first plurality of measurement times belonging to the first time window is greater than the number of measurement times of the second plurality of measurement times belonging to the second time window (Iyengar, [0039] and [0046]).
As to claims 6 and 16, the teachings of Tuffs and Iyengar as combined for the same reasons set forth in claim 3’s rejection further disclose the monitoring target resource includes a cloud compute instance, and wherein obtaining the second representative value and the second standard deviation of metric measured at the second plurality of measurement times belonging to the second time window includes: performing (i) a time window setting in which the number of measurement times of the first plurality of measurement times belonging to the first time window is greater than the number of measurement times of the second plurality of measurement times belonging to the second time window, or (ii) a time window setting in which the number of measurement times of the first plurality of measurement times belonging to the first time window is the same as the number of measurement times of the second plurality of measurement times belonging to the second time window, using tag information of the cloud compute instance (Tuffs, Fig. 1 and [0037], particularly, “At block 401, a cloud monitoring system generates statistics for metric data within a current accumulation time window. Based on detecting the emit trigger, the cloud monitoring system generates statistical indicators of distribution or dispersion of the accumulated metric data. Examples of the statistical indicators include mean and standard deviation.” the system continually loops, see [0046], particularly, “Asynchronously, operational flow continues to block 401 to obtain a next set of tagged metric data.” And Iyengar, [0039] and [0046]). 
As to claims 7 and 17, Tuffs discloses the parent claim but does not disclose increasing or decreasing feedback for the monitoring target resource includes: setting a value of the feedback for the monitoring target resource to a maximum value regardless of the value of a current feedback, based on a first condition being satisfied; increasing the value of the feedback for the monitoring target resource by a predetermined value based on the value of the current feedback, based on a second condition being satisfied; and decreasing the value of the feedback for the monitoring target resource by a predetermined value based on the value of the current feedback, based on a third condition being satisfied. However, Iyengar discloses increasing or decreasing feedback for the monitoring target resource includes: setting a value of the feedback for the monitoring target resource to a maximum value regardless of the value of a current feedback, based on a first condition being satisfied; increasing the value of the feedback for the monitoring target resource by a predetermined value based on the value of the current feedback, based on a second condition being satisfied; and decreasing the value of the feedback for the monitoring target resource by a predetermined value based on the value of the current feedback, based on a third condition being satisfied ([0039], “The IMS can reduce monitoring overhead by reducing a frequency of monitoring, and can increase monitoring accuracy by increasing a frequency of monitoring. In addition, as mentioned earlier, the IMS can offer multiple monitoring methods, wherein more accurate monitoring methods consume more overhead. 
The IMS can balance accuracy and performance by choosing a monitoring method based on its accuracy level and overhead consumed,” and [0046], “In step 106, the frequency of the monitoring and the frequency of the storing are checked against a trigger condition to change the frequency of the monitoring and the frequency of the storing. An administrator can define a trigger condition(s), which would indicate corrective action, needs to be taken. For example, if a time taken by a storage system is too high, a trigger condition can be set to change the time (e.g., change the frequency). When a trigger condition occurs, a user-defined method is invoked to improve the situation. For example, in order to reduce time taken by storage system, increase amount of caching). Therefore it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the application to combine the teachings of Tuffs and Iyengar in order to allow the system to more efficiently use limited resources. 
As to claims 8 and 18, Tuffs discloses the parent claim but does not disclose monitoring target resource includes a plurality of different virtual machine instances provisioned on one physical server, and wherein increasing or decreasing feedback for the monitoring target resource includes: setting a value of the feedback for a first virtual machine instance to a maximum value regardless of the value of a current feedback, based on a first condition is satisfied; based on a second condition is satisfied, increasing the value of the feedback for the first virtual machine instance by a predetermined value based on the value of the current feedback; based on a third condition is satisfied, decreasing the value of the feedback for the first virtual machine instance by a predetermined value based on the value of the current feedback; and performing one of (i) the setting of the value of the feedback, (ii) the increasing of the value of the feedback, and (iii) the decreasing of the value of the feedback for each of a plurality of different virtual machine instances except the first virtual machine instance. 
However, Iyengar discloses monitoring target resource includes a plurality of different virtual machine instances provisioned on one physical server, and wherein increasing or decreasing feedback for the monitoring target resource includes: setting a value of the feedback for a first virtual machine instance to a maximum value regardless of the value of a current feedback, based on a first condition is satisfied; based on a second condition is satisfied, increasing the value of the feedback for the first virtual machine instance by a predetermined value based on the value of the current feedback; based on a third condition is satisfied, decreasing the value of the feedback for the first virtual machine instance by a predetermined value based on the value of the current feedback; and performing one of (i) the setting of the value of the feedback, (ii) the increasing of the value of the feedback, and (iii) the decreasing of the value of the feedback for each of a plurality of different virtual machine instances except the first virtual machine instance ([0039], “The IMS can reduce monitoring overhead by reducing a frequency of monitoring, and can increase monitoring accuracy by increasing a frequency of monitoring. In addition, as mentioned earlier, the IMS can offer multiple monitoring methods, wherein more accurate monitoring methods consume more overhead. The IMS can balance accuracy and performance by choosing a monitoring method based on its accuracy level and overhead consumed,” and [0046], “In step 106, the frequency of the monitoring and the frequency of the storing are checked against a trigger condition to change the frequency of the monitoring and the frequency of the storing. An administrator can define a trigger condition(s), which would indicate corrective action, needs to be taken. For example, if a time taken by a storage system is too high, a trigger condition can be set to change the time (e.g., change the frequency). 
When a trigger condition occurs, a user-defined method is invoked to improve the situation. For example, in order to reduce time taken by storage system, increase amount of caching.” See [0050], discussing virtual machines, etc.). Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the application to combine the teachings of Tuffs and Iyengar in order to allow the system to more efficiently use limited resources.

As to claims 9 and 19, the teachings of Tuffs and Iyengar as combined for the same reasons set forth in claim 8’s rejection further disclose adjusting the monitoring level for the monitoring target resource includes: adjusting the monitoring level of the monitoring target resource downward by one level, based on the value of the feedback being the minimum value; and adjusting the monitoring level of the monitoring target resource upward by one level based on the value of the feedback being the maximum value (Iyengar, [0039] and [0046]).

As to claims 10 and 20, the teachings of Tuffs and Iyengar as combined for the same reasons set forth in claim 8’s rejection further disclose the first condition is a condition including a first comparison result between the first representative value and the second representative value, and a second comparison result between the first standard deviation and the second standard deviation, and wherein the second condition and the third condition are conditions including a third comparison result between the first standard deviation and the second standard deviation (Iyengar, [0039] and [0046]).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to THOMAS J DAILEY whose telephone number is (571)270-1246. The examiner can normally be reached 9:30am-6:00pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Umar Cheema, can be reached on 571-270-3037. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/THOMAS J DAILEY/
Primary Examiner, Art Unit 2458

Prosecution Timeline

Nov 08, 2024
Application Filed
Feb 19, 2026
Non-Final Rejection — §102, §103
Apr 03, 2026
Interview Requested
Apr 09, 2026
Applicant Interview (Telephonic)
Apr 13, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597054: METHOD AND SYSTEM OF FORWARDING CONTACT DATA
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12580953: METHOD AND SYSTEM FOR DETECTING ENCRYPTED FLOOD ATTACKS
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12556589: MEDIA RESOURCE OPTIMIZATION
Granted Feb 17, 2026 (2y 5m to grant)
Patent 12556605: LIVE MIGRATION OF CLUSTERS IN CONTAINERIZED ENVIRONMENTS
Granted Feb 17, 2026 (2y 5m to grant)
Patent 12549399: PROGRESS STATUS AFTER INTERRUPTION OF ONLINE SERVICE
Granted Feb 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 81% (95% with interview, a +14.6% lift)
Median Time to Grant: 3y 4m
PTA Risk: Low
Based on 859 resolved cases by this examiner. Grant probability derived from career allow rate.
