Prosecution Insights
Last updated: April 19, 2026
Application No. 17/554,107

AUTOMATED SYSTEM AND METHOD FOR DETECTION AND REMEDIATION OF ANOMALIES IN ROBOTIC PROCESS AUTOMATION ENVIRONMENT

Final Rejection §103

Filed: Dec 17, 2021
Examiner: UNG, LANNY N
Art Unit: 2197
Tech Center: 2100 — Computer Architecture & Software
Assignee: Infosys Limited
OA Round: 4 (Final)

Grant Probability: 71% (Favorable)
Estimated OA Rounds: 5-6
Estimated Time to Grant: 3y 3m
Grant Probability with Interview: 96%

Examiner Intelligence

Career Allow Rate: 71% (351 granted / 495 resolved; +15.9% vs TC avg) — above average
Interview Lift: +25.4% (resolved cases with interview) — a strong lift
Typical Timeline: 3y 3m average prosecution; 30 applications currently pending
Career History: 525 total applications across all art units
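The headline figures above are internally consistent; as a quick sketch of the arithmetic (values taken directly from the dashboard, the Tech Center average being implied rather than reported):

```python
# Career allowance rate from the dashboard's raw counts
granted, resolved = 351, 495
allow_rate = granted / resolved
print(f"{allow_rate:.1%}")        # 70.9%, displayed as 71%

# The "+15.9% vs TC avg" delta implies a Tech Center average near 55%
implied_tc_avg = allow_rate - 0.159
print(f"{implied_tc_avg:.1%}")    # 55.0%
```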

Statute-Specific Performance

§101: 19.8% (-20.2% vs TC avg)
§103: 49.0% (+9.0% vs TC avg)
§102: 18.3% (-21.7% vs TC avg)
§112: 7.8% (-32.2% vs TC avg)

Tech Center averages are estimates. Based on career data from 495 resolved cases.
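The per-statute deltas can be cross-checked against a single baseline; a minimal sketch, assuming each delta is measured against one common Tech Center average estimate:

```python
# Statute-specific allowance rates and their reported deltas vs the TC average
rates  = {"101": 0.198, "103": 0.490, "102": 0.183, "112": 0.078}
deltas = {"101": -0.202, "103": 0.090, "102": -0.217, "112": -0.322}

# Each rate minus its delta recovers the implied Tech Center baseline
baselines = {s: round(rates[s] - deltas[s], 3) for s in rates}
print(baselines)  # every statute implies the same 40.0% TC-average estimate
```

That all four statutes recover the same 40.0% figure suggests the deltas are computed against a single estimated baseline rather than per-statute averages.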

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This Office Action is in response to amendments filed on November 14, 2025. Claims 1-3, 6-8 and 11-13 are pending. Claims 2-3, 7-8 and 12-13 have been amended.

Response to Amendment

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3, 6, 8, 11 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Kothandaraman et al. (US 12,085,901) in view of Liao et al. (US 2021/0295203) and in further view of Mahamuni et al. (US 2023/0018199).

With respect to Claim 1, Kothandaraman et al. disclose: discovering, by a processor, one or more resources in an RPA platform, (the dashboard screen 350 provides (discovers) real-time bot monitoring about, for example, a bot's (one or more resources) health, activity, current configuration(s), scheduling, logging, and alerts.
The dashboard screen 350 also provides information regarding the infrastructure used to support the RPA system (RPA platform) at the hardware, service, and application levels as well as information provided by incident management tools, Column 10, lines 29-45) wherein the one or more resources comprise one or more RPA components and one or more dependent components corresponding to the one or more RPA components associated with the RPA platform, (a software bot (RPA component) deployed within the RPA system to perform an assigned task, Column 2, lines 17-18; CPU, Memory, Disk, etc. of an infrastructure (one or more dependent components) assigned to execute the software bot (RPA component), Columns 2 and 5, lines 41-43 and 56-63 respectively) wherein the one or more RPA components comprise one or more of: a control tower, a bot runner, and a bot, (a software bot deployed within the RPA system to perform an assigned task, Column 2, lines 17-18) and wherein the discovering comprises: loading, by the processor, attributes of the one or more RPA components and the one or more dependent components by connecting to the RPA platform, (information (attributes) received from the bots are stored (loaded) into a common data model repository which includes alerts or errors raised by a bot; the state or status of each bot and related processes; (one or more RPA components) transaction details, such as utilization and response times; general bot information, such as names, functional details, configuration details and information (attributes) related to the infrastructure supporting the bots (dependent components) and processes in the RPA system 111, such as the health of a machine, processing usage, disk space usage and availability, network traffic, handling time, productivity, exceptions raised, as well as an administrator(s) to contact regarding an identified issue(s), Column 6, lines 18-30) and querying, by the processor, the one or more RPA components through RPA-provided 
Application Program Interfaces (APIs) or a database for discovering the one or more RPA components onboarded or installed in the RPA environment; (establishing communication with a RPA system (RPA environment/RPA components) through database access provided by the RPA system wherein an ingest module accesses and retrieves data (discovering the one or more RPA components onboarded or installed in the RPA environment) through stored procedure calls with appropriate polling mechanisms (querying). The connection may be established through API calls to an API provided by RPA system (RPA provided APIs), Column 8, lines 1-17) receiving, by the processor, metrics data from a metrics data store (receiving data which can be in the form of unstructured incident description (metrics data from a metrics data store), Columns 8 and 9, lines 64-67 and 1-6 respectively) and historic log data from a log data store, (receiving historical events of the system, Column 7, lines 24-26) wherein the metrics data comprises one or more observation metrics of one or more resources in the RPA platform; (retrieved from source RPA system 210 may include alerts or errors raised by a bot; the state or status of each bot and related processes; transaction details, such as utilization and response times; general bot information, such as names, functional details, configuration details; and so forth. 
Infrastructure information, such as the health of a particular machine, processing usage, disk space usage and availability, network traffic, handling time, productivity, and/or exceptions raised may also be included in the received information (metrics data/one or more observation metrics), Column 8, lines 32-41) converting, by the processor, the metrics data into a structured format data; (transform (convert) the received data which can be in the form of an unstructured incident description (metrics data) to determine categorization and assignment of tickets (structured format data), Columns 8 and 9, lines 64-67 and 1-6 respectively) extracting, by the processor, error patterns from the historic log data; (historical events (error patterns) are used to train AI models to help predict the category of incidents (error patterns), Column 7, lines 22-28; previously determined responses (error patterns from historic log data) and categorizations of incidents raised within the RPA system, Column 9, lines 9-15) training, by the processor, one or more machine learning models based on the extracted error patterns and the converted metrics data; (The AI models employed within transform module 224 may be trained through a series of machine learning techniques applied to an algorithm using, for example, previously determined responses (extracted error patterns) and categorizations of incidents (converted metrics data) raised within the RPA system 210 and/or training data provided by system administrators, which may include human agent observation, Columns 8 and 9, lines 64-67 and 1-20 respectively) computing, by the processor, a threshold value corresponding to each of the one or more observation metrics; (users are able to add metric SLAs (threshold values to each of the one or more observation metrics) such as whether a process is completed within a defined number of seconds (observation metric), Columns 9 and 10, lines 61-67 and 1-11 respectively) monitoring, by the
processor, the discovered one or more resources in the RPA platform, (Each of the deployed bots 112 may generate execution logs and provide status updates as to the bot's health (e.g., state information, execution times, errors, and so forth). (monitoring), Columns 2-3, lines 62-67 and 1-3 respectively; The example RPA system 111 includes the infrastructure system 114. In some implementations, the infrastructure monitoring system 114 includes monitoring tools, which monitor the information technology infrastructure used to support the RPA system 111 at the hardware, service, and application levels, Column 5, lines 4-14) the monitoring comprising: determining, by the processor, values of the one or more observation metrics from the one or more resources in the platform; (retrieved from source RPA system 210 may include alerts or errors raised by a bot; the state or status of each bot and related processes; transaction details, such as utilization and response times; general bot information, such as names, functional details, configuration details; and so forth. 
Infrastructure information, such as the health of a particular machine, processing usage, disk space usage and availability, network traffic, handling time, productivity, and/or exceptions raised may also be included in the received information (values of one or more observation metrics), Column 8, lines 32-41) and detecting, by the processor, at least one anomaly by validating the values of the one or more observation metrics based on the threshold value corresponding to each of the one or more observation metrics; (automatically raise an incident when any anomalies are detected or approaching a threshold (threshold value) within RPA system at the infrastructure level, example of which may include CPU utilization, screen resolution changes, exceptions raised and which specific processes raise them, and so forth., (one or more observation metrics) Column 7, lines 10-16) and remediating, by the processor, the detected at least one anomaly, (a resolution for an anomaly is determined and executed, Abstract, lines 11-14) the remediating comprising: identifying, by the processor, at least one automated remediation action comprising a sequence of instructions; (determines (408) a resolution based on the AI model for at least one of the incidents based on the respective determined categorization and assignment of the at least one incident, Column 11, lines 20-25) and executing, by the processor, the identified at least one automated remediation action causing the remediation of the detected at least one anomaly. (The adaptor implements (410) the resolution, Column 11, lines 25-26)

Kothandaraman et al. do not disclose: receiving, by the processor, historic unstructured log data from a log data store, converting, by the processor, the historic unstructured log data into a structured format; computing, by the processor, a dynamic threshold value corresponding to each of the one or more observation metrics using the trained one or more machine learning models; detecting, by the processor, at least one anomaly validating the values of the one or more observation metrics based on the dynamic threshold value corresponding to each of the one or more observation metrics;

However, Liao et al. disclose: receiving, by the processor, historic unstructured log data from a log data store; (receiving unstructured historical data, Paragraph 54) converting, by the processor, the historic unstructured log data into a structured format; (converting the unstructured historical data into computer-readable structured data, Paragraph 54)

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Liao et al. into the teaching of Kothandaraman et al. to include receiving, by the processor, historic unstructured log data from a log data store and converting, by the processor, the historic unstructured log data into a structured format in order to help with model training. (Liao et al., Paragraph 54)

Kothandaraman et al. and Liao et al. do not disclose: computing, by the processor, a dynamic threshold value corresponding to each of the one or more observation metrics using the trained one or more machine learning models; detecting, by the processor, at least one anomaly validating the values of the one or more observation metrics based on the dynamic threshold value corresponding to each of the one or more observation metrics;

However, Mahamuni et al. disclose: computing, by the processor, a dynamic threshold value corresponding to each of the one or more observation metrics using the trained one or more machine learning models; (calculating dynamic threshold by using the anomaly detection algorithm and continuously trained using the historic data values collected (trained one or more machine learning models), Paragraph 99) detecting, by the processor, at least one anomaly validating the values of the one or more observation metrics based on the dynamic threshold value corresponding to each of the one or more observation metrics; (using dynamic thresholds that represent bounds of an expected data range for particular datapoints being measured during anomaly detection (validating the values of the one or more observation metrics), Paragraph 99)

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Mahamuni et al. into the teaching of Kothandaraman et al. and Liao et al. to include computing, by the processor, a dynamic threshold value corresponding to each of the one or more observation metrics using the trained one or more machine learning models and detecting, by the processor, at least one anomaly validating the values of the one or more observation metrics based on the dynamic threshold value corresponding to each of the one or more observation metrics in order to reduce false positive identification of anomalies. (Mahamuni et al., Paragraph 99, lines 5-7)

With respect to Claim 3, all the limitations of Claim 1 have been addressed above; and Kothandaraman et al. and Liao et al.
further disclose: wherein detecting the at least one anomaly by validating the values of the one or more observation metrics further comprises: parsing, by the processor, a metric message to extract values of the one or more observation metrics from structured data within the metric message; (Kothandaraman et al., The information from bots (metric message) is received and processed (parsed) by adapters 122 and stored together in the common data model repository, Column 5, lines 34-42; an adaptor 122 extracts set data from a source bot 112 or a process executing the bot 112, verifies and validates the information, (parsing a metric message) transform to the information according to a common data model (extract values), and persists the information in the common data model repository, Column 5, lines 44-48; the received information (metric message) may include incident descriptions structured or formatted (structured data within the metric message) according to the specifications of the RPA product employed to build the RPA system, Column 11, lines 13-16) comparing, by the processor, the values of the one or more observation metrics against a threshold value corresponding to each of the one or more observation metrics; (Kothandaraman et al., comparing a measured metric to a threshold level to determine if the threshold has been crossed, Column 5, lines 53-61; example measured metrics can include average handling time, productivity, volume, general bot status, efficiency, and utilization, (observation metrics) Column 5, lines 61-65) and determining, by the processor, the values of the one or more observation metrics as an anomaly when the values of the one or more observation metrics breach the threshold value.
(Kothandaraman et al., Such analytics-based insights may include alerting the administrator 150 in case any business SLA is about to breach (i.e., reach a threshold), whether any of the bots 112 are overloaded or underutilized, whether any processes, such as the bots 112, productivity is low. The unified view 140 further provides administrator/bot controller 150 with screens to automatically raise an incident when any anomalies are detected or approaching a threshold within RPA system 111 at the infrastructure level, example of which may include CPU utilization, screen resolution changes, exceptions raised and which specific processes raise them, and so forth., Column 7, lines 5-14)

Kothandaraman et al. and Liao et al. do not disclose these limitations; however, Mahamuni et al. disclose: comparing, by the processor, the values of the one or more observation metrics against a dynamic threshold value corresponding to each of the one or more observation metrics; (anomaly detection may implement dynamic thresholding for detecting anomalies at the process level, rather than using static thresholds for each batch job being executed. Dynamic thresholds can represent bounds of an expected data range for particular datapoints being measured during anomaly detection., Paragraph 99) and determining, by the processor, the values of the one or more observation metrics as an anomaly when the values of the one or more observation metrics breach the dynamic threshold value. (anomaly detection may implement dynamic thresholding for detecting anomalies at the process level, rather than using static thresholds for each batch job being executed. Dynamic thresholds can represent bounds of an expected data range for particular datapoints being measured during anomaly detection., Paragraph 99)

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Mahamuni et al. into the teaching of Kothandaraman et al. and Liao et al.
to include comparing, by the processor, the values of the one or more observation metrics against a dynamic threshold value corresponding to each of the one or more observation metrics and determining, by the processor, the values of the one or more observation metrics as an anomaly when the values of the one or more observation metrics breach the dynamic threshold value in order to reduce false positive identification of anomalies. (Mahamuni et al., Paragraph 99, lines 5-7)

Claims 6 and 8 are system claims corresponding to the method claims above (Claims 1 and 3) and, therefore, are rejected for the same reasons set forth in the rejections of Claims 1 and 3. Claims 11 and 13 are non-transitory computer readable medium claims corresponding to the method claims above (Claims 1 and 3) and, therefore, are rejected for the same reasons set forth in the rejections of Claims 1 and 3.

Claims 2, 7 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Kothandaraman et al. (US 12,085,901) in view of Liao et al. (US 2021/0295203) in view of Mahamuni et al. (US 2023/0018199) and in further view of Uriel (US 2016/0285783).

With respect to Claim 2, all the limitations of Claim 1 have been addressed above; and Kothandaraman et al., Liao et al. and Mahamuni et al.
further disclose: wherein determining the values of the one or more observation metrics further comprises: querying, by the processor, the one or more observation metrics of the one or more resources from a database (Kothandaraman et al., the ingest module 222 accesses and retrieves data (observation metrics stored in a database) through, for example, stored procedure calls with appropriate polling mechanisms (querying), Column 8, lines 9-13) executing, by the processor, at least one [instruction] to fetch the values of the one or more observation metrics and one or more current metric values from the one or more resources; (Kothandaraman et al., data retrieval (values of the one or more observation metrics and one or more current metric values) may be scheduled as a batch job (executing) which may be configured to execute every N number of hours or in a more near-real time configuration such as every 5 to 10 minutes, Column 8, lines 26-31) and generating, by the processor, a metric message comprising the values of the one or more observation metrics. (Kothandaraman et al., receiving information (metric message) regarding the processes executing the various software bots deployed within the RPA system (values of the one or more observation metrics), Column 8, lines 18-21) Kothandaraman et al., Liao et al. and Mahamuni et al. 
do not disclose: querying, by the processor, at least one script associated with each of the one or more observation metrics from a script repository; executing, by the processor, the at least one script to fetch the values of the one or more observation metrics and one or more current metric values from the one or more resources

However, Uriel discloses: querying, by the processor, at least one script associated with each of the one or more observation metrics from a script repository; (a list of monitoring operations, which can include executing a script (at least one script), are stored in a resource allocation database (script repository), Paragraphs 28-30) executing, by the processor, the at least one script to fetch the values of the one or more observation metrics and one or more current metric values from the one or more resources (execute monitoring operations (at least one script) at the compute nodes (CN) (resource) to monitor resource utilization (e.g. to identify a level of processor utilization, to identify a level of memory utilization, to identify a network latency of a CN, etc.) (values of the one or more observation metrics and one or more current metric values), Paragraph 22; perform a monitoring operation can include executing a command, executing a script, etc., Paragraph 28)

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Uriel into the teaching of Kothandaraman et al., Liao et al. and Mahamuni et al.
to include querying, by the processor, at least one script associated with each of the one or more observation metrics from a script repository and executing, by the processor, the at least one script to fetch the values of the one or more observation metrics and one or more current metric values from the one or more resources in order to enable an administrator/user to obtain and evaluate the results of the monitoring operations at a time of their choosing. (Uriel, Paragraph 33)

Claim 7 is a system claim corresponding to the method claim above (Claim 2) and, therefore, is rejected for the same reasons set forth in the rejection of Claim 2. Claim 12 is a non-transitory computer readable medium claim corresponding to the method claim above (Claim 2) and, therefore, is rejected for the same reasons set forth in the rejection of Claim 2.

Response to Arguments

Applicant's arguments filed November 14, 2025 have been fully considered but they are not persuasive.

In the Remarks, Applicant argues: Independent claim 1 recites "extracting, by the processor, error patterns from the converted historic unstructured log data" and "training, by the processor, one or more machine learning models based on the extracted error patterns and the converted metrics data." The Examiner first alleges, with respect to the first of these quotations, that Kothandaraman’s “historical events” and “category of incidents” both correspond to the claimed “error patterns.” Office Action at page 5. Then, the Examiner alleges, with respect to the second of these quotations, that Kothandaraman’s "previously determined responses and categorizations of incidents" teach both "extracted error patterns" and "converted metrics data." Id.
Given that the Office Action refers to Kothandaraman's incident categorizations twice as corresponding to the “error patterns,” the most logical reading of the rejection appears to be that it asserts that Kothandaraman’s incident categorizations correspond to the claimed “error patterns,” and its “previously determined responses” correspond to the claimed “converted metrics data.” Kothandaraman, however, nowhere indicates or suggests that its mere “responses” correspond to “metrics data,” let alone “converted metrics data,” as recited by claim 1 (emphasis added). Liao and Mahamuni are cited against other subject matter recited by claim 1, and do not appear to cure this deficiency of Kothandaraman. Accordingly, claim 1 is distinct from the cited references for this first reason.

Examiner’s Response: The Examiner respectfully disagrees. The Examiner would first like to note that the citations in claim 1 for the “extracting” and “training” steps have been updated for clarification purposes to better show which elements in Kothandaraman disclose the Applicant’s “extracted error patterns” and “converted metrics data”. Specifically, Kothandaraman discloses transforming received data in the form of unstructured incident descriptions to determine categorization and assignment of tickets (Columns 8-9, lines 64-67 and 1-6 respectively). The categorization of the incidents can be reasonably interpreted as the Applicant’s “converted metrics data”. Further, Kothandaraman discloses obtaining (extract) “previously determined responses” for the purpose of training a machine learning model (Column 9, lines 9-15). The obtaining of “previously determined responses” can be reasonably interpreted as the Applicant’s “extracted error patterns” as the responses are based on previous/historic event data (historic log data).
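The disputed limitations describe a concrete pipeline: unstructured log text is converted into a structured form, error patterns are extracted from it, and models are trained on the result. As a neutral illustration of that general technique only (hypothetical data, field names, and regex; not the implementation of Kothandaraman, Liao, or the application), the first two steps might look like:

```python
import re
from collections import Counter

# Hypothetical unstructured bot log lines (illustrative only)
raw_logs = [
    "2025-11-03 09:14:02 bot-7 ERROR TimeoutError while opening invoice screen",
    "2025-11-03 09:15:10 bot-7 INFO task completed",
    "2025-11-04 11:02:44 bot-2 ERROR TimeoutError while opening invoice screen",
    "2025-11-05 16:40:19 bot-2 ERROR CredentialExpired during login",
]

LOG_RE = re.compile(r"(?P<ts>\S+ \S+) (?P<bot>\S+) (?P<level>\w+) (?P<message>.+)")

# Step 1: convert the unstructured text into structured records
structured = [LOG_RE.match(line).groupdict() for line in raw_logs]

# Step 2: extract error patterns (here: frequencies of the leading error token)
error_patterns = Counter(
    rec["message"].split()[0] for rec in structured if rec["level"] == "ERROR"
)
print(error_patterns)  # Counter({'TimeoutError': 2, 'CredentialExpired': 1})
```

A real system would then feed such patterns, together with converted metrics data, into model training; that step is omitted here.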
As for the “training” step, Kothandaraman discloses that AI models are “trained through a series of machine learning techniques applied to an algorithm using, for example, previously determined responses and categorizations of incidents raised within the RPA system and/or training data provided by system administrators, which may include human agent observation” (see Columns 8 and 9, lines 64-67 and 1-20 respectively). As can be seen in the clarification citations to claim 1 above, the Examiner has interpreted Kothandaraman’s disclosure of “previously determined responses” as the Applicant’s “extracted error patterns” and Kothandaraman’s disclosure of “categorization of incidents” as the Applicant’s “converted metrics data”. Therefore, for at least the reasons set forth above, the rejection made under 35 U.S.C. §103 is proper and thus, maintained.

In the Remarks, Applicant argues: Additionally, the Office Action fails to demonstrate how Kothandaraman (or even any of the other cited references) disclose or suggest “extracting, by the processor, error patterns from the converted historic unstructured log data.” The Office Action alleges that this subject matter is taught by Kothandaraman, pointing to its disclosure of “by employing the above-mentioned machine learning capabilities trained based on the historical events of the system, the example system 100 is able to predict the category of incidents and to automatically allocate the appropriate user groups for each incident” (Kothandaraman col. 7, lns. 24-28) and “AI models employed within transform module 224 may be trained through a series of machine learning techniques applied to an algorithm using, for example, previously determined responses and categorizations of incidents raised within the RPA system” (Kothandaraman col. 9, lns. 9-14).
Even if Kothandaraman’s “previously determined responses” or “categorizations of incidents” are considered to correspond to the claimed “error patterns” (a correspondence, which, as shown above, has not been demonstrated), Kothandaraman nowhere discloses or suggests that its “previously determined responses” or “categorizations of incidents” are “extract[ed]...from the converted historic unstructured log data,” as recited by claim 1 (emphasis added). Liao and Mahamuni are cited against other subject matter recited by claim 1, and do not appear to cure this deficiency of Kothandaraman. Accordingly, claim 1 is distinct from the cited references for this second reason.

Examiner’s Response: The Examiner respectfully disagrees. As can be seen in the rejection to claim 1 above, the Examiner has not relied solely on Kothandaraman to disclose the Applicant’s claim limitation of “extracting, by the processor, error patterns from the converted historic unstructured log data”. It is the combination of Kothandaraman and Liao that discloses this limitation. Specifically, Liao was used to modify the “historic log data” disclosed by Kothandaraman to be “converted historic unstructured log data” and thus, through the combination of the references, the “extracting” step, as taught by Kothandaraman, would then operate on “converted historic unstructured log data”. Therefore, for at least the reasons set forth above, the rejection made under 35 U.S.C. §103 is proper and thus, maintained.

In the Remarks, Applicant argues: The Office Action agrees that Kothandaraman and Liao fail to teach, but alleges that Mahamuni discloses, the claimed “computing...a dynamic threshold value corresponding to each of the one or more observation metrics using the trained one or more machine learning models.” Office Action at page 9.
Mahamuni does mention "using dynamic thresholds that represent bounds of an expected data range for particular datapoints being measured during anomaly detection." Mahamuni [0099]. These batch-level processing thresholds, however, do not constitute or suggest the "dynamic threshold value corresponding to each of the one or more observation metrics" where the metrics are "of one or more resources in the RPA platform," as recited by claim 1 (emphases added). Accordingly, claim 1 is distinct from the cited references for this third reason.

Examiner’s Response: The Examiner respectfully disagrees. As can be seen in the §103 rejection to claim 1 above, the Examiner has not relied upon Mahamuni to disclose “one or more resources in the RPA platform”. This limitation was disclosed in primary reference Kothandaraman. Mahamuni was used to modify Kothandaraman’s disclosure of “one or more observation metrics” to include “a dynamic threshold value”. Therefore, it is the combination of the references that discloses the Applicant’s claim limitation of “computing…a dynamic threshold value corresponding to each of the one or more observation metrics using the trained one or more machine learning models” where the metrics are “of one or more resources in the RPA platform”. Therefore, for at least the reasons set forth above, the rejection made under 35 U.S.C. §103 is proper and thus, maintained.

In the Remarks, Applicant argues: Claim 3 depends from claim 1 and further recites "parsing, by the processor, a metric message to extract values of the one or more observation metrics from structured data within the metric message" (as amended). The Examiner alleges Kothandaraman teaches parsing, citing that "an adaptor 122 extracts set data from a source bot 112 or a process executing the bot 112, verifies and validates the information, (parsing a metric message) transform to the information according to a common data model." Office Action at page 10.
However, Kothandaraman's extraction of “set data from a source bot 112 or a process executing the bot 112” does not constitute or suggest the claimed “parsing, by the processor, a metric message to extract values of the one or more observation metrics from structured data within the metric message,” as recited by amended claim 3 (emphases added). Liao and Mahamuni do not appear to cure this deficiency of Kothandaraman. Accordingly, claim 3 is distinct from the cited references and Applicant respectfully requests withdrawal of the rejection.

Examiner’s Response: The Examiner respectfully disagrees. As can be seen in the updated §103 rejection to claim 3, it is the Examiner’s position that Kothandaraman discloses the Applicant’s claim amendment of "parsing, by the processor, a metric message to extract values of the one or more observation metrics from structured data within the metric message". Specifically, Kothandaraman discloses that the received information may include incident descriptions structured or formatted (according to the specifications of the RPA product employed to build the RPA system) (see Column 11, lines 13-16). The received information can be reasonably interpreted as the Applicant’s “metric message”. Further, Kothandaraman discloses that this information is processed by adaptors to verify and validate the information and then transforms this information according to a common data model (see Column 5, lines 41-48). This verification/validation and/or transformation of the information can be reasonably interpreted as the Applicant’s “parsing” and “extract values of one or more observation metrics”. Therefore, for at least the reasons set forth above, the rejection made under 35 U.S.C. §103 is proper and thus, maintained.

In the Remarks, Applicant argues: Claims 6, 8, 11, and 13 are system and computer readable medium claims corresponding to method claims 1 and 3.
For the same reasons discussed above with respect to claims 1 and 3, the combination of Kothandaraman, Liao, and Mahamuni fails to teach or suggest all elements of these claims.

Examiner’s Response: Please see the response to arguments above with respect to claims 1 and 3.

In the Remarks, Applicant argues: Applicant respectfully traverses the rejection of claims 2, 7, and 12 under 35 U.S.C. §103 as being unpatentable over Kothandaraman in view of Liao and Mahamuni and further in view of Uriel. No prima facie case of obviousness has been established for at least the reason that the Office Action has neither properly determined the scope and content of the prior art nor properly ascertained the differences between the prior art and the claimed combinations. Claim 2 depends from claim 1 and further recites "querying, by the processor, the one or more observation metrics of the one or more resources from the database and at least one script associated with each of the one or more observation metrics from a script repository" and "executing, by the processor, the at least one script to fetch the values of the one or more observation metrics and one or more current metric values from the one or more resources" (as amended). The Examiner cites Uriel as teaching "a list of monitoring operations, which can include executing a script (at least one script), are stored in a resource allocation database (script repository)." Office Action at page 14. However, Uriel’s monitoring operations, even if they are considered to constitute “at least one script,” are not “associated with each of the one or more observation metrics" as claimed. Uriel states that its “list of monitoring operations to be performed” are part of a “monitoring level.” See Uriel [0030]. These monitoring operations are associated with monitoring levels for compute nodes, not with individual observation metrics. As shown in Uriel's FIG. 4, monitoring operations like "processor utilization” or "memory utilization" are assigned to monitoring levels (e.g., level 0, 1, or 2), and do not correspond to scripts that can be executed to “fetch the values of the one or more observation metrics and one or more current metric values from the one or more resources.”

Examiner’s Response: The Examiner respectfully disagrees. Applicant argues that Uriel’s “monitoring operations” are not “associated with each of the one or more observation metrics”. However, Uriel discloses that monitoring operations are executed (via a script) to monitor various resource utilizations such as processor utilization, memory utilization, network latency, etc. (see Paragraphs 22 and 28). Each of these resource utilizations can reasonably be considered “values of the one or more observation metrics and one or more current metric values”, which are obtained through execution of the monitoring operation/script. The current claim language does not differentiate “observation metrics” from “current metric values”.

In the Remarks, Applicant argues: Claims 7 and 12 are system and computer readable medium claims corresponding to method claim 2. For the same reasons discussed above with respect to claim 2, the combination of Kothandaraman, Liao, Mahamuni, and Uriel fails to teach or suggest scripts associated with each observation metric that retrieve current metric values from resources. Accordingly, Applicant respectfully requests withdrawal of the rejection of claims 7 and 12.

Examiner’s Response: Please see the response to arguments above with respect to claim 2.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LANNY N UNG whose telephone number is (571)270-7708. The examiner can normally be reached Mon-Thurs 6am-4pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bradley Teets, can be reached at 571-272-3338. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/LANNY N UNG/
Examiner, Art Unit 2197
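For readers tracking the technical dispute rather than the legal one, the three contested limitations (claim 1's per-metric dynamic thresholds, claim 3's parsing of a metric message, and claim 2's metric-associated scripts) can be sketched together. This is a purely illustrative sketch under assumed data shapes: the JSON message format, the script repository, the mean ± 3σ threshold rule standing in for the trained machine learning models, and every identifier below are hypothetical and appear nowhere in the application or the cited references.

```python
import json
from statistics import mean, stdev

# Claim 2 sketch (hypothetical): one fetch "script" per observation metric.
# Here scripts are plain callables returning a fixed value; real scripts
# might be shell files executed against each RPA resource.
SCRIPT_REPOSITORY = {
    "cpu_utilization": lambda resource: 44.0,
    "memory_utilization": lambda resource: 72.5,
}

def parse_metric_message(message: str) -> dict[str, float]:
    """Claim 3 sketch: extract observation-metric values from structured
    data (assumed JSON) within a metric message."""
    payload = json.loads(message)
    return {m["name"]: float(m["value"]) for m in payload["metrics"]}

def dynamic_thresholds(history: dict[str, list[float]], k: float = 3.0):
    """Claim 1 sketch: a dynamic (lower, upper) bound per observation
    metric; a rolling mean +/- k sigma stands in for the trained model."""
    return {
        metric: (mean(vals) - k * stdev(vals), mean(vals) + k * stdev(vals))
        for metric, vals in history.items()
    }

def fetch_current_values(metrics, resource: str) -> dict[str, float]:
    """Claim 2 sketch: execute the script associated with each metric to
    fetch its current value from the resource."""
    return {m: SCRIPT_REPOSITORY[m](resource) for m in metrics}

# Usage: parse a message, compute per-metric bounds, flag out-of-bound values.
msg = json.dumps({"resource": "bot-112", "metrics": [
    {"name": "cpu_utilization", "value": 57.3},
    {"name": "memory_utilization", "value": 71.9},
]})
observed = parse_metric_message(msg)

history = {
    "cpu_utilization": [41.0, 43.5, 40.2, 44.1, 42.8],
    "memory_utilization": [70.1, 71.4, 69.8, 72.0, 70.6],
}
bounds = dynamic_thresholds(history)
current = fetch_current_values(observed.keys(), "bot-112")
anomalies = {m: v for m, v in observed.items()
             if not bounds[m][0] <= v <= bounds[m][1]}
```

The point of contention maps directly onto the structure above: whether the prior art's per-node monitoring levels teach a script keyed to each individual metric (the `SCRIPT_REPOSITORY` lookup), and whether batch-level thresholds teach a bound computed per metric (the per-key `bounds`).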

Prosecution Timeline

Dec 17, 2021
Application Filed
Oct 08, 2024
Non-Final Rejection — §103
Jan 15, 2025
Response Filed
Mar 01, 2025
Final Rejection — §103
Jun 05, 2025
Request for Continued Examination
Jun 09, 2025
Response after Non-Final Action
Aug 15, 2025
Non-Final Rejection — §103
Nov 14, 2025
Response Filed
Jan 15, 2026
Final Rejection — §103
Apr 09, 2026
Applicant Interview (Telephonic)
Apr 09, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12547527
INTELLIGENT CUSTOMER SERVICE REQUEST PROCESSING MECHANISM
Granted Feb 10, 2026 (2y 5m to grant)
Patent 12481500
ACCELERATING LINEAR ALGEBRA KERNELS FOR ANY PROCESSOR ARCHITECTURE
Granted Nov 25, 2025 (2y 5m to grant)
Patent 12474919
FIRMWARE DISTRIBUTION METHOD FOR AN INFORMATION HANDLING SYSTEM
Granted Nov 18, 2025 (2y 5m to grant)
Patent 12468519
SYSTEMS AND METHODS FOR IN-PLACE APPLICATION UPGRADES
Granted Nov 11, 2025 (2y 5m to grant)
Patent 12461845
SYSTEM AND METHOD FOR DETECTING SOFTWARE TESTS THAT ARE SUSPECTED AS TESTS THAT ALWAYS PROVIDE FALSE POSITIVE
Granted Nov 04, 2025 (2y 5m to grant)
Based on this examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 71%
Grant Probability With Interview: 96% (+25.4%)
Median Time to Grant: 3y 3m
PTA Risk: High
Based on 495 resolved cases by this examiner. Grant probability derived from career allow rate.
