Prosecution Insights
Last updated: April 19, 2026
Application No. 18/776,620

OPERATIONAL METRIC SUPPORT ON A CLOUD-BASED OBSERVABILITY PLATFORM

Non-Final OA: §101, §102, §103

Filed: Jul 18, 2024
Examiner: BHUYAN, MOHAMMAD SOLAIMAN
Art Unit: 2168
Tech Center: 2100 — Computer Architecture & Software
Assignee: Cisco Technology Inc.
OA Round: 3 (Non-Final)

Grant Probability: 84% (Favorable)
Projected OA Rounds: 3-4
Projected Time to Grant: 2y 5m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 84%, above average (+28.5% vs TC avg); 137 granted of 164 resolved
Interview Lift: +22.8% higher allowance among resolved cases with an interview
Avg Prosecution: 2y 5m (17 applications currently pending)
Career History: 181 total applications across all art units

Statute-Specific Performance

§101: 16.5% (-23.5% vs TC avg)
§103: 44.2% (+4.2% vs TC avg)
§102: 15.8% (-24.2% vs TC avg)
§112: 15.5% (-24.5% vs TC avg)
Tech Center averages are estimates; based on career data from 164 resolved cases.

Office Action

§101 §102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

2. A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 29 December 2025 has been entered. Accordingly, claims 1-4, 6-14 and 16-22 are pending in this application. Claims 1, 11 and 20 are currently amended; claims 2-4, 6-10, 12-14 and 16-19 are original; claims 21-22 are newly added; claims 5 and 15 are cancelled.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

3. Claims 1-4, 6-14 and 16-22 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Independent claim 1 recites a series of steps and, therefore, is a process. As such, claim 1 falls within one of the statutory categories.
The claim recites “providing, by the device, the operational attributes for configuration as an operational metric attribute and as dimension attributes corresponding to the operational metric attribute; generating, by the device and based on the configuration, an operational metric measurement corresponding to the operational metric attribute and one or more dimensions corresponding to the dimension attributes, wherein at least one of the plurality of OpenTelemetry spans associated with the monitored transaction has a respective error status, and wherein the operational metric measurement is generated based on the respective error status; and providing, by the device, the operational metric measurement for at least one of the one or more dimensions for operational analysis,” which is merely a concept that can be performed in the human mind. A human mind, through observation, can generate an operational metric measurement from metrics data with pen and paper. These limitations, as drafted, recite a process that, under its broadest reasonable interpretation, covers performance of the limitations in the mind but for the recitation of generic computer components. Thus, the claim recites a mental process and is directed to a judicial exception. This judicial exception is not integrated into a practical application because the claim recites the additional element “obtaining, by a device, operational attributes across a plurality of OpenTelemetry spans associated with a monitored transaction over a network,” which indicates using the internet to gather data. As such, this limitation amounts to mere data gathering in conjunction with a law of nature or abstract idea, which the courts have found to be insignificant extra-solution activity. See MPEP 2106.05(g).
Use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., obtaining by a device), or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation), does not integrate a judicial exception into a practical application or provide significantly more. See Affinity Labs v. DirecTV, 838 F.3d 1253, 1262, 120 USPQ2d 1201, 1207 (Fed. Cir. 2016) (cellular telephone); TLI Communications LLC v. AV Automotive, LLC, 823 F.3d 607, 613, 118 USPQ2d 1744, 1748 (Fed. Cir. 2016) (computer server and telephone unit). See MPEP 2106.05(f). The addition of insignificant extra-solution activity does not amount to an inventive concept. Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. See MPEP 2106.05(h). Claim 1 is therefore directed to the abstract idea. As discussed with respect to Step 2A Prong Two, the additional elements in the claim amount to no more than mere instructions to apply the exception using a generic computer component. The same analysis applies here at Step 2B: mere instructions to apply an exception using a generic computer component cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B. The obtaining step was considered to be extra-solution activity in Step 2A, and it is no more than what is well-understood, routine, conventional activity in the field. The Symantec, TLI, and OIP Techs. court decisions cited in MPEP 2106.05(d)(II) indicate that receiving or transmitting data over a network is a well-understood, routine, and conventional function when it is claimed in a merely generic manner (as it is here).
Accordingly, a conclusion that the obtaining step is well-understood, routine, conventional activity is supported under Berkheimer Option 2. Thus, claim 1 is ineligible. Independent claims 11 and 20 are also directed to the abstract idea for the same reasons as claim 1 above. Regarding claims 2 and 12, the claims further recite the limitation “the configuration is based on selections input to predefined templates configured for one or more of a sum operation, an average operation, or a count operation,” which merely describes a mathematical calculation in conjunction with a law of nature or abstract idea. Furthermore, the claims include no additional elements that would integrate the judicial exception into a practical application or would amount to significantly more than the abstract idea. The claims thus merely further describe the abstract idea without adding significantly more or applying the abstract idea in a practical application. See MPEP 2106.05(g). Thus, the claims are directed to the abstract idea. Regarding claims 3 and 13, the claims further recite the limitation “the operational metric measurement is generated based on a user-defined relationship between the operational metric measurement and the operational attributes,” which further recites mental processes that fall into the “Mental Processes” grouping of abstract ideas and is directed to a judicial exception. Furthermore, the claims include no additional elements that would integrate the judicial exception into a practical application or would amount to significantly more than the abstract idea. Thus, the claims are ineligible.
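For technical context, the predefined-template operations recited in claims 2 and 12 (a sum, average, or count computed over configured dimensions) amount to ordinary grouped aggregation. A minimal illustrative sketch in Python; the data shapes and names are hypothetical, not taken from the application:

```python
from collections import defaultdict

def aggregate(measurements, dimension, operation):
    """Group measurements by a dimension attribute and apply one of the
    predefined template operations: 'sum', 'average', or 'count'."""
    groups = defaultdict(list)
    for m in measurements:
        groups[m[dimension]].append(m["value"])
    ops = {
        "sum": sum,
        "average": lambda vs: sum(vs) / len(vs),
        "count": len,
    }
    return {key: ops[operation](vals) for key, vals in groups.items()}

# Hypothetical per-span measurements, keyed by a "service" dimension.
spans = [
    {"service": "checkout", "value": 120.0},
    {"service": "checkout", "value": 80.0},
    {"service": "cart", "value": 50.0},
]
print(aggregate(spans, "service", "average"))  # {'checkout': 100.0, 'cart': 50.0}
```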
Regarding claims 4 and 14, the claims further recite the limitations “obtaining a rule configuration defining an evaluation condition for determining whether the operational metric measurement is within an expected range; and evaluating the operational metric measurement based on the rule configuration,” which further recite mental processes that fall into the “Mental Processes” grouping of abstract ideas and are directed to a judicial exception. Furthermore, the claims include no additional elements that would integrate the judicial exception into a practical application or would amount to significantly more than the abstract idea. Thus, the claims are ineligible. Regarding claims 6, 16 and 21, the claims further recite the limitations “generating the operational metric measurement includes: generating a combination of operational metric measurements including operational metric measurements from a plurality of monitoring transactions; and excluding from the combination of operational metric measurements all of the operational metric measurements having an associated span with an error status,” which further recite mental processes that fall into the “Mental Processes” grouping of abstract ideas and are directed to a judicial exception. Furthermore, the claims include no additional elements that would integrate the judicial exception into a practical application or would amount to significantly more than the abstract idea. Thus, the claims are ineligible.
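The rule configuration recited in claims 4 and 14 (an evaluation condition testing whether a measurement falls within an expected range) can be sketched as a simple range check. The rule shape and field names below are assumptions for illustration only:

```python
def evaluate_rule(measurement, rule):
    """Return True when the measurement is within the expected range
    defined by the rule configuration (hypothetical {"min", "max"} shape)."""
    low = rule.get("min", float("-inf"))
    high = rule.get("max", float("inf"))
    return low <= measurement <= high

# Hypothetical rule: average checkout latency expected between 0 and 250 ms.
rule = {"metric": "checkout_latency_ms", "min": 0, "max": 250}
print(evaluate_rule(100.0, rule))  # True
print(evaluate_rule(400.0, rule))  # False
```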
Regarding claims 7, 17 and 22, the claims further recite the limitation “generating the operational metric measurement includes generating a combination of operational metric measurements including only operational metric measurements having an associated span with an error status,” which further recites mental processes that fall into the “Mental Processes” grouping of abstract ideas and is directed to a judicial exception. Furthermore, the claims include no additional elements that would integrate the judicial exception into a practical application or would amount to significantly more than the abstract idea. Thus, the claims are ineligible. Regarding claims 8 and 18, the claims further recite the limitation “generating the operational metric measurement includes generating a graphical representation of the operational metric attribute across the one or more dimensions,” which indicates use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., generating a graphical representation of the operational metric attribute). Simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation) does not integrate a judicial exception into a practical application or provide significantly more. Furthermore, the claims include no additional elements that would integrate the judicial exception into a practical application or would amount to significantly more than the abstract idea. Thus, the claims are ineligible.
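The two error-status limitations mirror each other: claims 6, 16 and 21 exclude measurements whose associated span has an error status, while claims 7, 17 and 22 keep only those measurements. A hypothetical sketch of that partition (field names assumed, not from the claims):

```python
def partition_by_error(measurements):
    """Split a combination of metric measurements into those without an
    error-status span (exclusion variant) and those with one (error-only
    variant)."""
    no_error = [m for m in measurements if not m["span_has_error"]]
    only_error = [m for m in measurements if m["span_has_error"]]
    return no_error, only_error

# Hypothetical measurements from a plurality of monitored transactions.
measurements = [
    {"txn": "t1", "value": 120.0, "span_has_error": False},
    {"txn": "t2", "value": 310.0, "span_has_error": True},
    {"txn": "t3", "value": 95.0, "span_has_error": False},
]
clean, errored = partition_by_error(measurements)
```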
Regarding claims 9 and 19, the claims further recite the limitation “the graphical representation is configurable to display selectable dimensional segments of the operational metric attribute defined in the configuration,” which indicates use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., the graphical representation is configurable to display). Simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation) does not integrate a judicial exception into a practical application or provide significantly more. Furthermore, the claims include no additional elements that would integrate the judicial exception into a practical application or would amount to significantly more than the abstract idea. Thus, the claims are ineligible. Regarding claim 10, the claim further recites the limitation “labeling the operational metric measurement with a user-configured label,” which further recites mental processes that fall into the “Mental Processes” grouping of abstract ideas and is directed to a judicial exception. Furthermore, the claim includes no additional elements that would integrate the judicial exception into a practical application or would amount to significantly more than the abstract idea. Thus, the claim is ineligible.

Claim Rejections - 35 USC § 102

4. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of the appropriate paragraphs of 35 U.S.C.
102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention. 5. Claims 1, 3, 6-8, 11, 13, 16-18 and 20-22 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Danyi et al. (US 11,789,804 B1) hereinafter Danyi. As to claim 1, Danyi discloses a method, comprising: obtaining, by a device, operational attributes across a plurality of OpenTelemetry spans (Col. 8 lines 49-54) associated with a monitored transaction over a network (Fig. 23, Col. 10 lines 1-3, the monitoring service 306 receives and analyzes the span data for monitoring and troubleshooting purposes, i.e., obtaining, by a device, operational attributes across a plurality of OpenTelemetry spans associated with a monitored transaction over a network. Col. 43 lines 49-52, “a plurality of spans associated with a microservices-based application are automatically ingested and sessionized into a plurality of traces. The monitoring platform is able to ingest all the incoming spans without sampling.”.); providing, by the device, the operational attributes for configuration as an operational metric attribute and as dimension attributes corresponding to the operational metric attribute (Col. 28 lines 52-67, “FIG. 12 illustrates on-screen displays that represent exemplary categories of dimensions across which SLIs may be computed, in accordance with implementations of the monitoring service disclosed herein. The exemplary category of dimensions corresponds to the categories associated with drop-down menus (e.g., 1030, 1032, 1034 and 1036) discussed in connection with FIG. 10. 
The metrics data aggregated using the metric event modality allows clients to easily and rapidly compute measurements across various cross-combinations of attributes. Drop-down on-screen menu 1230, corresponding to workflow, illustrates different workflows specific to the application discussed in connection with FIG. 9. A "workflow" is a type of dimension of the request that was processed; a workflow may be conceptualized as a type of "global tag" that is attributed to each span in a given trace.”. Thus, providing, by the device, the operational attributes for configuration as an operational metric attribute and as dimension attributes corresponding to the operational metric attribute.); generating, by the device and based on the configuration, an operational metric measurement corresponding to the operational metric attribute and one or more dimensions corresponding to the dimension attributes (Fig. 12, Col. 10 lines 64-67-col. 11 lines 1-2, the query engine and reporting system 324 within the monitoring service 306 may be configured to generate reports, i.e., generating, by the device and based on the configuration, an operational metric measurement, render graphical user interfaces (GUIs) and/or other graphical visualizations to represent the trace and span information received from the various clients. Col. 26 lines 38-41, “The measurements are often aggregated over a measurement window using the metrics data associated with the metric events modality and then turned into a rate, average, or percentile.”. Col. 28 lines 37-44, “The categories of dimensions across which the SLIs may be computed, include, but are not limited to, workflow 1030, environment 1032, incident 1034 and tenant-level 1036. Each of the categories comprises a drop-down menu with options for the different dimensions. 
The metrics events data allows clients to easily and rapidly compute measurements across various cross-combinations of tags or attributes.”.), wherein at least one of the plurality of OpenTelemetry spans associated with the monitored transaction has a respective error status, and wherein the operational metric measurement is generated based on the respective error status (Col. 13 lines 3-13, “Additionally, because incoming trace and span information may be efficiently ingested and aggregated in real time, a monitoring platform is able to advantageously convey meaningful and accurate information regarding throughput, latency and error rate (without the need for sampling) for the services in the microservices-based application.”. Col. 27 lines 9-16, “the pop-up window 1008 also provides the client information pertaining to SLIs related to Errors 1012. In the example of FIG. 10, the pop-up window 1008 provides information regarding the error rate and the total number of errors that occurred during the specified time duration. The client is also provided information regarding what percentage of the total number of requests resulted in errors.”. Col. 8 lines 49-54, “Other common open source instrumentation specifications include OPENTELEMETRY and OpenCensus. Each span may be annotated with one or more tags that provide context about the execution, such as the client instrumenting the software, a document involved in the request, an infrastructure element used in servicing a request, etc.”. Thus, the operational metric measurement is generated based on the respective error status.); and providing, by the device, the operational metric measurement for at least one of the one or more dimensions for operational analysis (Col. 10 lines 64-67-col. 
11 lines 1-11, “the query engine and reporting system 324 within the monitoring service 306 may be configured to generate reports, render graphical user interfaces (GUIs) and/or other graphical visualizations to represent the trace and span information received from the various clients. The query engine and reporting system 324 may, for example, interact with the instrumentation analysis system 322 to generate a visualization, e.g., a histogram or an application topology graph (referred to interchangeably as a "service graph" or a "service map" herein) to represent information regarding the traces and spans received from a client. Alternatively, the query engine and reporting system 324 may be configured to respond to specific statistical queries submitted by a developer regarding one or more services within a client's application”. Thus, providing, by the device, the operational metric measurement for at least one of the one or more dimensions for operational analysis.). As to claim 11, Danyi discloses an apparatus, comprising: one or more network interfaces for connection to a computer network; a processor coupled to the one or more network interfaces and configured to execute one or more processes; a user interface; and a memory configured to store a process that is executable by the processor (Col. 10 lines 1-20), the process when executed being configured to: obtain operational attributes across a plurality of OpenTelemetry spans (Col. 8 lines 49-54) associated with a monitored transaction over a network (Fig. 23, Col. 10 lines 1-3, the monitoring service 306 receives and analyzes the span data for monitoring and troubleshooting purposes, i.e., obtaining, by a device, operational attributes across a plurality of OpenTelemetry spans associated with a monitored transaction over a network. Col. 43 lines 49-52, “a plurality of spans associated with a microservices-based application are automatically ingested and sessionized into a plurality of traces. 
The monitoring platform is able to ingest all the incoming spans without sampling.”.); provide the operational attributes for configuration as an operational metric attribute and as dimension attributes corresponding to the operational metric attribute (Col. 28 lines 52-67, “FIG. 12 illustrates on-screen displays that represent exemplary categories of dimensions across which SLIs may be computed, in accordance with implementations of the monitoring service disclosed herein. The exemplary category of dimensions corresponds to the categories associated with drop-down menus (e.g., 1030, 1032, 1034 and 1036) discussed in connection with FIG. 10. The metrics data aggregated using the metric event modality allows clients to easily and rapidly compute measurements across various cross-combinations of attributes. Drop-down on-screen menu 1230, corresponding to workflow, illustrates different workflows specific to the application discussed in connection with FIG. 9. A "workflow" is a type of dimension of the request that was processed; a workflow may be conceptualized as a type of "global tag" that is attributed to each span in a given trace.”. Thus, providing, by the device, the operational attributes for configuration as an operational metric attribute and as dimension attributes corresponding to the operational metric attribute.); generate, based on the configuration, an operational metric measurement corresponding to the operational metric attribute and one or more dimensions corresponding to the dimension attributes (Fig. 12, Col. 10 lines 64-67-col. 11 lines 1-2, the query engine and reporting system 324 within the monitoring service 306 may be configured to generate reports, i.e., generating, by the device and based on the configuration, an operational metric measurement, render graphical user interfaces (GUIs) and/or other graphical visualizations to represent the trace and span information received from the various clients. Col. 
26 lines 38-41, “The measurements are often aggregated over a measurement window using the metrics data associated with the metric events modality and then turned into a rate, average, or percentile.”. Col. 28 lines 37-44, “The categories of dimensions across which the SLIs may be computed, include, but are not limited to, workflow 1030, environment 1032, incident 1034 and tenant-level 1036. Each of the categories comprises a drop-down menu with options for the different dimensions. The metrics events data allows clients to easily and rapidly compute measurements across various cross-combinations of tags or attributes.”.), wherein at least one of the plurality of OpenTelemetry spans associated with the monitored transaction has a respective error status, and wherein the operational metric measurement is generated based on the respective error status (Col. 13 lines 3-13, “Additionally, because incoming trace and span information may be efficiently ingested and aggregated in real time, a monitoring platform is able to advantageously convey meaningful and accurate information regarding throughput, latency and error rate (without the need for sampling) for the services in the microservices-based application.”. Col. 27 lines 9-16, “the pop-up window 1008 also provides the client information pertaining to SLIs related to Errors 1012. In the example of FIG. 10, the pop-up window 1008 provides information regarding the error rate and the total number of errors that occurred during the specified time duration. The client is also provided information regarding what percentage of the total number of requests resulted in errors.”. Col. 8 lines 49-54, “Other common open source instrumentation specifications include OPENTELEMETRY and OpenCensus. Each span may be annotated with one or more tags that provide context about the execution, such as the client instrumenting the software, a document involved in the request, an infrastructure element used in servicing a request, etc.”. 
Thus, the operational metric measurement is generated based on the respective error status.); and provide the operational metric measurement for at least one of the one or more dimensions for operational analysis (Col. 10 lines 64-67-col. 11 lines 1-11, “the query engine and reporting system 324 within the monitoring service 306 may be configured to generate reports, render graphical user interfaces (GUIs) and/or other graphical visualizations to represent the trace and span information received from the various clients. The query engine and reporting system 324 may, for example, interact with the instrumentation analysis system 322 to generate a visualization, e.g., a histogram or an application topology graph (referred to interchangeably as a "service graph" or a "service map" herein) to represent information regarding the traces and spans received from a client. Alternatively, the query engine and reporting system 324 may be configured to respond to specific statistical queries submitted by a developer regarding one or more services within a client's application”. Thus, providing, by the device, the operational metric measurement for at least one of the one or more dimensions for operational analysis.). As to claim 20, Danyi discloses a tangible, non-transitory, computer-readable medium storing program instructions that cause a device to execute a process comprising (Col. 10 lines 1-20): obtaining operational attributes across a plurality of OpenTelemetry spans (Col. 8 lines 49-54) associated with a monitored transaction over a network (Fig. 23, Col. 10 lines 1-3, the monitoring service 306 receives and analyzes the span data for monitoring and troubleshooting purposes, i.e., obtaining, by a device, operational attributes across a plurality of OpenTelemetry spans associated with a monitored transaction over a network. Col. 
43 lines 49-52, “a plurality of spans associated with a microservices-based application are automatically ingested and sessionized into a plurality of traces. The monitoring platform is able to ingest all the incoming spans without sampling.”.); providing the operational attributes for configuration as an operational metric attribute and as dimension attributes corresponding to the operational metric attribute (Col. 28 lines 52-67, “FIG. 12 illustrates on-screen displays that represent exemplary categories of dimensions across which SLIs may be computed, in accordance with implementations of the monitoring service disclosed herein. The exemplary category of dimensions corresponds to the categories associated with drop-down menus (e.g., 1030, 1032, 1034 and 1036) discussed in connection with FIG. 10. The metrics data aggregated using the metric event modality allows clients to easily and rapidly compute measurements across various cross-combinations of attributes. Drop-down on-screen menu 1230, corresponding to workflow, illustrates different workflows specific to the application discussed in connection with FIG. 9. A "workflow" is a type of dimension of the request that was processed; a workflow may be conceptualized as a type of "global tag" that is attributed to each span in a given trace.”. Thus, providing, by the device, the operational attributes for configuration as an operational metric attribute and as dimension attributes corresponding to the operational metric attribute.); generating, based on the configuration, an operational metric measurement corresponding to the operational metric attribute and one or more dimensions corresponding to the dimension attributes (Fig. 12, Col. 10 lines 64-67-col. 
11 lines 1-2, the query engine and reporting system 324 within the monitoring service 306 may be configured to generate reports, i.e., generating, by the device and based on the configuration, an operational metric measurement, render graphical user interfaces (GUIs) and/or other graphical visualizations to represent the trace and span information received from the various clients. Col. 26 lines 38-41, “The measurements are often aggregated over a measurement window using the metrics data associated with the metric events modality and then turned into a rate, average, or percentile.”. Col. 28 lines 37-44, “The categories of dimensions across which the SLIs may be computed, include, but are not limited to, workflow 1030, environment 1032, incident 1034 and tenant-level 1036. Each of the categories comprises a drop-down menu with options for the different dimensions. The metrics events data allows clients to easily and rapidly compute measurements across various cross-combinations of tags or attributes.”.), wherein at least one of the plurality of OpenTelemetry spans associated with the monitored transaction has a respective error status, and wherein the operational metric measurement is generated based on the respective error status (Col. 13 lines 3-13, “Additionally, because incoming trace and span information may be efficiently ingested and aggregated in real time, a monitoring platform is able to advantageously convey meaningful and accurate information regarding throughput, latency and error rate (without the need for sampling) for the services in the microservices-based application.”. Col. 27 lines 9-16, “the pop-up window 1008 also provides the client information pertaining to SLIs related to Errors 1012. In the example of FIG. 10, the pop-up window 1008 provides information regarding the error rate and the total number of errors that occurred during the specified time duration.
The client is also provided information regarding what percentage of the total number of requests resulted in errors.”. Col. 8 lines 49-54, “Other common open source instrumentation specifications include OPENTELEMETRY and OpenCensus. Each span may be annotated with one or more tags that provide context about the execution, such as the client instrumenting the software, a document involved in the request, an infrastructure element used in servicing a request, etc.”. Thus, the operational metric measurement is generated based on the respective error status.); and providing the operational metric measurement for at least one of the one or more dimensions for operational analysis (Col. 10 lines 64-67-col. 11 lines 1-11, “the query engine and reporting system 324 within the monitoring service 306 may be configured to generate reports, render graphical user interfaces (GUIs) and/or other graphical visualizations to represent the trace and span information received from the various clients. The query engine and reporting system 324 may, for example, interact with the instrumentation analysis system 322 to generate a visualization, e.g., a histogram or an application topology graph (referred to interchangeably as a "service graph" or a "service map" herein) to represent information regarding the traces and spans received from a client. Alternatively, the query engine and reporting system 324 may be configured to respond to specific statistical queries submitted by a developer regarding one or more services within a client's application”. Thus, providing, by the device, the operational metric measurement for at least one of the one or more dimensions for operational analysis.). As to claims 3 and 13, the claims are rejected for the same reasons as claims 1 and 11 above. In addition, Danyi discloses wherein the operational metric measurement is generated based on a user-defined relationship between the operational metric measurement and the operational attributes (Col. 
29 lines 17-29, “SLIs may be computed for each attribute of the categories in FIG. 12 and also for each combination of attributes associated with the categories. In an implementation, for each combination of attributes selected using one or more of the drop-down menus, the client may be able determine the computed SLIs (e.g., by hovering a cursor over the various nodes and edges of the graph after the dimensions have been selected using, for example, the drop-down menus shown in FIG. 10). In this way, implementations of the monitoring service disclosed herein enable a client to use the metric events modality to slice the application topology graph across several different attributes.”. Col. 5 lines 49-52, “The term "tags" as used herein generally refers to key: value pairs that provide further context regarding the execution environment and enable user-defined annotation of spans in order to query, filter and comprehend trace data.”. Thus, the operational metric measurement is generated based on a user-defined relationship between the operational metric measurement and the operational attributes.). As to claims 6, 16 and 21, the claims are rejected for the same reasons as claims 1, 11 and 20 above. In addition, Danyi discloses wherein generating the operational metric measurement includes: generating a combination of operational metric measurements including operational metric measurements from a plurality of monitoring transactions (Fig. 12, Col. 10 lines 64-67-col. 11 lines 1-2, the query engine and reporting system 324 within the monitoring service 306 may be configured to generate reports, i.e., generating the operational metric measurement, render graphical user interfaces (GUIs) and/or other graphical visualizations to represent the trace and span information received from the various clients. Col. 
26 lines 38-41, “The measurements are often aggregated over a measurement window using the metrics data associated with the metric events modality and then turned into a rate, average, or percentile.”.); and excluding from the combination of operational metric measurements all of the operational metric measurements having an associated span with an error status. (Fig. 19, Col. 18, lines 64-67, col. 19 lines 1-17, “span identity may be represented as the following exemplary tuple: {operation, service, kind, isError, httpMethod, isServiceMesh}, where the operation field represents the name of the specific operation within a service that made the call, the service field represents the logical name of the service on which the operation took place, the kind field details relationships between spans and may either be a "server" or "client," the isError field is a "TRUE/FALSE" flag that indicates whether a span is an error span.”. Col. 23 lines 50-53, “the client may only collect data pertaining to select operations. In other words, the client may filter out data pertaining to select operations that are of less interest to a client.”. Col. 47 lines 12-21, “monitoring metrics associated with the metric time series modality may be computed for the “failure root cause” 2690 tag. In another implementation, high-cardinality metrics associated with the metric events modality may also be computed for the “failure root cause” 2690 tag, which provides clients the ability to filter, breakdown, aggregate and compute metrics associated with the tag across a number of different attributes as explained in connection with FIGS. 19 and 20.”. Thus, generating a combination of operational metric measurements including operational metric measurements from a plurality of monitoring transactions; and excluding from the combination of operational metric measurements all of the operational metric measurements having an associated span with an error status.). 
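The passages mapped to claims 6, 16 and 21 describe span records carrying an isError flag, with measurements combined across monitoring transactions after filtering on that flag and aggregated over a measurement window. A minimal illustrative sketch of that filtering-then-aggregating pattern (all names hypothetical; this is not code from the Danyi reference):

```python
from dataclasses import dataclass

@dataclass
class Span:
    # Simplified span identity, loosely modeled on the quoted
    # {operation, service, kind, isError, ...} tuple.
    operation: str
    service: str
    duration_ms: float
    is_error: bool

def exclude_error_spans(spans):
    # Combine measurements from a plurality of monitoring transactions,
    # excluding every measurement whose associated span has an error status.
    return [s.duration_ms for s in spans if not s.is_error]

def only_error_spans(spans):
    # The converse combination (cf. claims 7, 17 and 22): only
    # measurements whose associated span has an error status.
    return [s.duration_ms for s in spans if s.is_error]

def average(values):
    # Aggregate over a measurement window, e.g. into an average.
    return sum(values) / len(values) if values else 0.0

spans = [
    Span("GET /cart", "cartservice", 12.0, False),
    Span("GET /cart", "cartservice", 250.0, True),
    Span("POST /checkout", "checkoutservice", 30.0, False),
]
avg_ok = average(exclude_error_spans(spans))            # average of non-error spans
error_rate = len(only_error_spans(spans)) / len(spans)  # fraction of error spans
```

The same filtered lists could equally be turned into a rate or percentile, matching the quoted "rate, average, or percentile" aggregation.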
As to claims 7, 17 and 22, the claims are rejected for the same reasons as claims 1, 11 and 20 above. In addition, Danyi discloses wherein generating the operational metric measurement includes generating a combination of operational metric measurements including only operational metric measurements having an associated span with an error status. (Col. 37 lines 33-37, “the platform returns a list 1602 of the traces matching the client-entered filters and, further, provides information about the traces, e.g., the Trace ID, duration, start time, root operation, root cause error status code and associated spans.”. Col. 13 lines 3-13, “Additionally, because incoming trace and span information may be efficiently ingested and aggregated in real time, a monitoring platform is able to advantageously convey meaningful and accurate information regarding throughput, latency and error rate (without the need for sampling) for the services in the microservices-based application.”. Col. 27 lines 9-16, “the pop-up window 1008 also provides the client information pertaining to SLIs related to Errors 1012. In the example of FIG. 10, the pop-up window 1008 provides information regarding the error rate and the total number of errors that occurred during the specified time duration. The client is also provided information regarding what percentage of the total number of requests resulted in errors.”. Col. 23 lines 50-53, “the client may only collect data pertaining to select operations. In other words, the client may filter out data pertaining to select operations that are of less interest to a client.”. Thus, generating the operational metric measurement includes generating a combination of operational metric measurements including only operational metric measurements having an associated span with an error status.). As to claims 8 and 18, the claims are rejected for the same reasons as claims 1 and 11 above. 
In addition, Danyi discloses wherein generating the operational metric measurement includes generating a graphical representation of the operational metric attribute across the one or more dimensions (Col. 10 lines 64-67-col. 11 lines 1-8, the query engine and reporting system 324 within the monitoring service 306 may be configured to generate reports, i.e., generating, by the device and based on the configuration, an operational metric measurement, render graphical user interfaces (GUIs) and/or other graphical visualizations to represent the trace and span information received from the various clients. The query engine and reporting system 324 may, for example, interact with the instrumentation analysis system 322 to generate a visualization, e.g., a histogram or an application topology graph (referred to interchangeably as a "service graph" or a "service map" herein) to represent information regarding the traces and spans received from a client. Thus, generating the operational metric measurement includes generating a graphical representation of the operational metric attribute across the one or more dimensions.). 6. Claims 2, 4, 9-10, 12, 14, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Danyi as applied above, in view of Bath et al. (previously presented) (US 2018/0089328 A1) hereinafter Bath. As to claims 2 and 12, the claims are rejected for the same reasons as claims 1 and 11 above. Danyi does not explicitly disclose wherein the configuration is based on selections input to predefined templates configured for one or more of a sum operation, an average operation, or a count operation. However, in the same field of endeavor, Bath discloses wherein the configuration is based on selections input to predefined templates configured for one or more of a sum operation, an average operation, or a count operation (Para. 172, “FIG. 
9D illustrates an example graphical user interface screen 106 including a table of results 122 based on the selected criteria including splitting the rows by the "component" field. A column 124 having an associated count for each component listed in the table may be displayed that indicates an aggregate count of the number of times that the particular field-value pair (e.g., the value in a row) occurs in the set of events responsive to the initial search query.”. Para. 173, “Statistical analysis of other fields in the events associated with the ten most popular products have been specified as column values 132. A count of the number of successful purchases for each product is displayed in column 134. These statistics may be produced by filtering the search results by the product name, finding all occurrences of a successful purchase in a field within the events, and generating a total of the number of occurrences. A sum of the total sales is displayed in column 136, which is a result of the multiplication of the price and the number of successful purchases for each product.”. Thus, the configuration is based on selections input to predefined templates configured for one or more of a sum operation, an average operation, or a count operation.). Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Danyi by including a count operation for generating performance metrics in order to find all occurrences of a successful purchase in a field within the events, as disclosed by Bath (Para. 173). A column having an associated count for each component listed in the table may be displayed that indicates an aggregate count of the number of times that the particular field-value pair occurs in the set of events responsive to the initial search query (Para. 172). 
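The count and sum templates quoted from Bath amount to grouping events by a field value and aggregating per group. A hedged sketch of that idea (hypothetical helper names; not Bath's actual implementation):

```python
from collections import Counter

def count_by(events, field):
    # Aggregate count of each field-value pair across the events,
    # as in the per-component count column described above.
    return Counter(e[field] for e in events if field in e)

def sum_by(events, group_field, value_field):
    # Sum a numeric field per group, e.g. total sales per product.
    totals = {}
    for e in events:
        if group_field in e and value_field in e:
            totals[e[group_field]] = totals.get(e[group_field], 0) + e[value_field]
    return totals

events = [
    {"component": "cart", "sales": 10},
    {"component": "cart", "sales": 5},
    {"component": "checkout", "sales": 7},
]
component_counts = count_by(events, "component")     # occurrences per component
sales_totals = sum_by(events, "component", "sales")  # summed sales per component
```

An average template would follow the same shape, dividing each group's sum by its count.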
One of ordinary skill in the art would have been motivated to make this modification in order to filter search results and to perform statistical analysis on values extracted from specific fields in the set of events, as suggested by Bath (Para. 173). As to claims 4 and 14, the claims are rejected for the same reasons as claims 1 and 11 above. In addition, Bath discloses further comprising: obtaining a rule configuration defining an evaluation condition for determining whether the operational metric measurement is within an expected range; and evaluating the operational metric measurement based on the rule configuration (Para. 395, “the user specified metadata can include a threshold of a measure value for a particular metric, or a range of a measure value, or a preferred measure value for that metric. The metrics catalog can use these conditions to monitor metrics data in the metrics store, retrieve that metrics data for cataloging in the metrics catalog and, as such, make that monitored metrics data readily available for users via an in-memory system that avoids the need to access an in disk metrics store. In another example, the metadata can define a condition causing the display of an alert about a metric. As such, a user can be alerted when a measure value of a metric does or does not exceed a threshold value.”. Thus, obtaining a rule configuration defining an evaluation condition for determining whether the operational metric measurement is within an expected range; and evaluating the operational metric measurement based on the rule configuration.). As to claims 9 and 19, the claims are rejected for the same reasons as claims 8 and 18 above. In addition, Bath discloses wherein the graphical representation is configurable to display selectable dimensional segments of the operational metric attribute defined in the configuration (Para. 156, “After the search is executed, the search screen 70 in FIG. 
5A can display the results through search results tabs 76, wherein search results tabs 76 includes: an "events tab" that displays various information about events returned by the search; a "statistics tab" that displays statistics about the search results; and a "visualization tab" that displays various visualizations of the search results. The events tab illustrated in FIG. 5A displays a timeline graph 78 that graphically illustrates the number of events that occurred in one-hour intervals over the selected time range. It also displays an events list 80 that enables a user to view the raw data in each of the returned events. It additionally displays a fields sidebar 81 that includes statistics about occurrences of specific fields in the returned events, including "selected fields" that are pre-selected by the user, and "interesting fields" that are automatically selected by the system based on pre-specified criteria.”. Thus, the graphical representation is configurable to display selectable dimensional segments of the operational metric attribute defined in the configuration.). As to claim 10, the claim is rejected for the same reasons as claim 1 above. In addition, Bath discloses further comprising: labeling the operational metric measurement with a user-configured label (Para. 173, the top ten product names ranked by price are selected as a filter 128 that causes the display of the ten most popular products sorted by price. Each row is displayed by product name and price 130. This results in each product displayed in a column labeled "product name" along with an associated price in a column labeled "price" 138, i.e., labeling the operational metric measurement with a user-configured label. Statistical analysis of other fields in the events associated with the ten most popular products have been specified as column values 132. A count of the number of successful purchases for each product is displayed in column 134. 
These statistics may be produced by filtering the search results by the product name, finding all occurrences of a successful purchase in a field within the events, and generating a total of the number of occurrences. A sum of the total sales is displayed in column 136, which is a result of the multiplication of the price and the number of successful purchases for each product.). Response to Arguments 7. As to Applicant's arguments regarding the §101 rejections, Applicant has amended the independent claims, and the amended limitations further recite a mental process. As such, the claims remain directed to mental processes that fall into the “Mental Processes” grouping of abstract ideas and are directed to a judicial exception. Furthermore, the claims include no additional elements that would integrate the judicial exception into a practical application or would amount to significantly more than the abstract idea. Thus, the claims are directed to the abstract idea. Applicant's arguments, see pages 12-15, filed on December 29, 2025, with respect to the rejection of claims 1-20 under 35 USC §103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of a newly found reference. Conclusion 8. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Karis et al. (US 12,222,840 B1) teaches generating span-related metric data streams by an analytic engine. 9. Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMMAD SOLAIMAN BHUYAN whose telephone number is (571)272-7843. The examiner can normally be reached on Monday - Friday 9:00am-5:00pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. 
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Charles Rones, can be reached at 571-272-4085. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /Mohammad S Bhuyan/ Examiner, Art Unit 2168 /CHARLES RONES/ Supervisory Patent Examiner, Art Unit 2168

Prosecution Timeline

Jul 18, 2024
Application Filed
Mar 13, 2025
Non-Final Rejection — §101, §102, §103
Apr 10, 2025
Applicant Interview (Telephonic)
Apr 10, 2025
Examiner Interview Summary
Jun 13, 2025
Response Filed
Sep 18, 2025
Final Rejection — §101, §102, §103
Nov 20, 2025
Interview Requested
Dec 08, 2025
Applicant Interview (Telephonic)
Dec 08, 2025
Examiner Interview Summary
Dec 29, 2025
Request for Continued Examination
Jan 03, 2026
Response after Non-Final Action
Mar 07, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12530370
INCREASING FAULT TOLERANCE IN A MULTI-NODE REPLICATION SYSTEM
2y 5m to grant Granted Jan 20, 2026
Patent 12517883
DATABASE INDEXING IN PERFORMANCE MEASUREMENT SYSTEMS
2y 5m to grant Granted Jan 06, 2026
Patent 12499136
METHOD FOR UPDATING A DATABASE OF A GEOLOCATION SERVER
2y 5m to grant Granted Dec 16, 2025
Patent 12493613
METHOD AND APPRATUS FOR PROVING A SHARED DATABASE CONNECTION IN A BATCH ENVIRONMENT
2y 5m to grant Granted Dec 09, 2025
Patent 12493589
Efficient Construction and Querying Progress of a Concurrent Migration
2y 5m to grant Granted Dec 09, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
84%
Grant Probability
99%
With Interview (+22.8%)
2y 5m
Median Time to Grant
High
PTA Risk
Based on 164 resolved cases by this examiner. Grant probability derived from career allow rate.
