Prosecution Insights
Last updated: April 19, 2026
Application No. 18/791,315

COMPREHENSIVE TELEMETRY DATA MANAGEMENT IN CLUSTER NETWORKS WITH DYNAMIC DATASET REGISTRATION AND PROCESSING

Non-Final OA: §103, §112
Filed: Jul 31, 2024
Examiner: CHOUAT, ABDERRAHMEN
Art Unit: 2451
Tech Center: 2400 (Computer Networks)
Assignee: DELL PRODUCTS, L.P.
OA Round: 1 (Non-Final)
Grant Probability: 73% (Favorable)
Predicted OA Rounds: 1-2
Time to Grant: 2y 8m
Grant Probability with Interview: 77%

Examiner Intelligence

Career Allow Rate: 73% (195 granted / 267 resolved; +15.0% vs TC avg, above average)
Interview Lift: +4.0% (minimal), comparing resolved cases with vs. without an interview
Avg Prosecution: 2y 8m (typical timeline); 16 applications currently pending
Total Applications: 283 across all art units (career history)

Statute-Specific Performance

§101: 14.2% (-25.8% vs TC avg)
§103: 45.7% (+5.7% vs TC avg)
§102: 16.8% (-23.2% vs TC avg)
§112: 18.8% (-21.2% vs TC avg)
Tech Center averages are estimates; based on career data from 267 resolved cases.

Office Action

Grounds of rejection: §103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-19 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Regarding claim 1, the claim recites "wherein the telemetry data can be previously generated data or new data." Because the data must have been generated before it is received, the examiner is unsure how received data (i.e., already generated data) can be new data. Claim 14 inherits the same rejection as claim 1 above.

Claims 1, 3, 5, 6, 7, 8, 9, 10, 14, 16, 17, and 19 recite "the network," which lacks antecedent basis in the claims and should recite "the cluster network."

Claims 5 and 16 recite "the metric datasets," which lacks antecedent basis and should recite "metric datasets."

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim(s) 1-3, 11, 14, and 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Dubynskiy et al. (US 11516308 B1) in view of Garner, IV et al. (US 10547498 B1), hereinafter Garner.

Regarding claim 1, Dubynskiy teaches a method of processing telemetry data in a cluster network having a plurality of nodes, comprising: (Col 1 Line 66 → Col 2 Line 23; method)

receiving telemetry data from a plurality of telemetry producers, (Col 6 Lines 50-52; The data ingestion layer 205 may be configured to receive telemetry data generated by various telemetry data sources.)
wherein the received telemetry data is formatted into a structured format for storage in a central datastore, (Col 6 Lines 62-67; The data ingestion layer 205 may store the received telemetry data in a telemetry database 305. The data ingestion layer 205 may store the telemetry data in a format received by the telemetry processing service 110 or may be configured to reformat the telemetry data to another format for storage in the telemetry database 305.)

wherein the telemetry data can be previously generated data or new data; (Examiner respectfully notes that all telemetry data received by Dubynskiy is previously generated)

validating (administrator) one or more consumers (administrator requesting its collection) of respective datasets (sampled) of the telemetry data (telemetry) in the network (Col 4 Lines 9-26; network running the telemetry system), (FIG. 5 is a diagram of an example user interface 500 for customizing the parameters used by the adaptive sampling framework 210. The user interface 500 may also provide a visualization of the potential cost savings that may be achieved by implementing the adaptive sampling technique using the parameters selected on the user interface 500. The user interface 500 may be presented on a display of the client device 105 of an administrator, such as an administrator of the cloud-based service 125a or 125b. The user interface may be presented by a native application on the client device 105 and/or may be accessed using a web browser on the client device 105. Examiner notes that administrators are validated under normal network security practices; otherwise anyone would have admin controls without proper validation)

and transmitting the respective data to consumers through a selected transport mechanism. (Col 6 Lines 26-46; The data visualization layer 215 provides a means for viewing the adaptive sampling rate and cost estimate data generated by the adaptive sampling framework 210. The data visualization layer may also provide a means for monitoring estimates of various metrics recovered from the telemetry data obtained using adaptive sampling. The data visualization layer 215 may be implemented using a variety of existing visualization tools and/or visualization libraries. The adaptive sampling framework 210 may provide an application programming interface (API) that the data visualization layer 215 may utilize to provide users with a means for interacting with the adaptive sampling framework 210. In some implementations, the data visualization layer 215 may be implemented using Microsoft Power BI. Other tools may be used to implement the data visualization layer 215. Alternatively, the data visualization layer 215 may be implemented as a website or web application that is configured to utilize one or more visualization libraries to provide a user interface for interacting with the adaptive sampling framework 210.)

Dubynskiy does not explicitly teach wherein the one or more consumers subscribe to receive the respective data through a subscription process; and transmitting the respective data to subscribed consumers through a selected transport mechanism.

In an analogous art, Garner teaches wherein the one or more consumers subscribe (user subscribes) to receive the respective data (alert data) through a subscription process (subscription); (Col 5 Lines 28-44; A command to enter configuration mode is received at 202. The smart alert system 118 receives the command from the user device 116 via the network 114. The command is initiated by the subscriber 136. The command received at 202 instructs the smart alert system 118 to enter a configuration mode. For example, the command may be received in response to a request from the subscriber 136 to register as a user of the smart alert system 118 or to update an existing subscriber profile associated with the subscriber 136. While in the configuration mode, the smart alert system 118 can be programmed to communicate with IoT devices 112, to interpret data received from IoT devices 112, to create or update subscriber profiles based on subscriber information (e.g., contact information, subscriber characteristics, associated IoT devices 112, etc.), and to create or update alert hierarchy information. In response to the command at 202, the smart alert system 118 enters the configuration mode.)

and transmitting the respective data to subscribed consumers through a selected transport mechanism. (Col 11 Lines 24-45; For example, the subscriber 136 can program the alert distribution time period to be once an hour, once a day, once a week, and so on. The subscriber 136 can also program different alert distribution time periods for different sets of alerts. In some arrangements, the subscriber 136 can provide the smart alert system 118 access to a calendar of the subscriber 136 (e.g., Outlook® or Google® calendar) such that the smart alert system can automatically determine the optimal batch alert delivery time. In such arrangements, the alert distribution time period may not be a consistent time period but may fluctuate based on the subscriber's schedule. For example, the smart alert system 118 can avoid sending alerts while the subscriber 136 is unable to receive the alerts (e.g., while the subscriber 136 is on a plane, while the subscriber 136 is seeing a movie, while the subscriber 136 is in a business meeting, etc.). During the initial cycle of the method 500, the smart alert system 118 will not determine that the time period has expired at 502. As discussed in further detail below, at a later point in the method 500, the smart alert system 118 may determine that the time period has expired at 502.)

It would have been obvious to one of ordinary skill in the art prior to the effective filing of the application to modify the teachings of [Dubynskiy] to include [a subscription process for users to receive alert data] as is taught by [Garner]. The suggestion/motivation for doing so is to [manage event and device communication; Col 1, Background].

Regarding claim 2, Dubynskiy in view of Garner teach the method of claim 1 as discussed above. Dubynskiy further teaches wherein the telemetry data comprises performance data, (Col 6 Lines 50-61; The data ingestion layer 205 may be configured to receive telemetry data generated by various telemetry data sources. The telemetry processing service 110 may be configured to receive telemetry data associated with various types of events associated with one or more software products operating on the telemetry data sources. The telemetry data may include information that may be used to assess user usage patterns, the quality of software builds that have been deployed, to identify issues with the deployed builds, to make release decisions for future builds to be deployed, and other information regarding the performance of the software builds that have been deployed for client use.)
topology information, alerts, security states, and service features, and can comprise discrete datasets or streaming telemetry data, (Examiner notes each of these is part of an OR limitation, requiring only one element be selected for mapping; these elements are not being selected)

and further wherein the one or more consumers comprises at least one of: pod components of the nodes, storage users, graphical user interfaces (GUI), (Col 6 Lines 26-46; see the data visualization layer passage quoted in the claim 1 mapping above) and storage vendors. (Examiner notes this is an "at least one of" limitation, requiring only one element be selected for mapping; the pod components, storage users, and storage vendors elements are not being selected)

Dubynskiy does not explicitly teach wherein the telemetry data comprises data generated periodically by each producer upon operation in the cluster network. In an analogous art, Garner teaches the telemetry data comprises data generated periodically by each producer (the IoT device generates an alert when an alert condition occurs; see Col 8 Lines 12-47) upon operation in the cluster network (networked IoT devices) (Col 11 Lines 11-18; As described in further detail, during method 500, the smart alert system 118 communicates data to and from various devices (e.g., IoT devices 112, subscriber devices 116 or 140, etc.) via network 114. The method 500 is a repeating, cyclical method that is performed every alert distribution time period (e.g., every day, every hour, etc.). Col 11 Lines 46-56; The smart alert system 118 receives alert information from at least one IoT device 112 via the network 114 (e.g., at the routing and broadcast engine 132). In some arrangements, the information is received from a single IoT device 112. In other arrangements, the information is received from a plurality of IoT devices 112. The information may be received directly from a given IoT device 112 via the network 114 or from a third-party service associated with a given IoT device 112 (e.g., a server associated with IoT devices of a given manufacturer).)

It would have been obvious to one of ordinary skill in the art prior to the effective filing of the application to modify the teachings of [Dubynskiy] to include [providing event data when it occurs] as is taught by [Garner]. The suggestion/motivation for doing so is to [manage event and device communication; Col 1, Background].

Regarding claim 3, Dubynskiy in view of Garner teach the method of claim 2 as discussed above. Dubynskiy does not explicitly teach, but Garner teaches, further comprising: receiving a schedule to receive the telemetry data on an individual basis from one or more consumers of respective data of the telemetry data in the network; (Col 11 Lines 24-45; For example, the subscriber 136 can program the alert distribution time period to be once an hour, once a day, once a week, and so on. The subscriber 136 can also program different alert distribution time periods for different sets of alerts. In some arrangements, the subscriber 136 can provide the smart alert system 118 access to a calendar of the subscriber 136 (e.g., Outlook® or Google® calendar) such that the smart alert system can automatically determine the optimal batch alert delivery time. In such arrangements, the alert distribution time period may not be a consistent time period but may fluctuate based on the subscriber's schedule. For example, the smart alert system 118 can avoid sending alerts while the subscriber 136 is unable to receive the alerts (e.g., while the subscriber 136 is on a plane, while the subscriber 136 is seeing a movie, while the subscriber 136 is in a business meeting, etc.). During the initial cycle of the method 500, the smart alert system 118 will not determine that the time period has expired at 502. As discussed in further detail below, at a later point in the method 500, the smart alert system 118 may determine that the time period has expired at 502.)

and transmitting the respective data to the one or more consumers through a selected transport mechanism and at a respective frequency based on the received schedule. (Col 11 Lines 24-45; see the same passage quoted immediately above)

It would have been obvious to one of ordinary skill in the art prior to the effective filing of the application to modify the teachings of [Dubynskiy] to include [a subscription process for users to receive alert data] as is taught by [Garner]. The suggestion/motivation for doing so is to [manage event and device communication; Col 1, Background].
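For orientation, the subscription, transport-selection, and schedule limitations of claims 1-3 discussed above can be illustrated with a minimal Python sketch. All names here (`TelemetryHub`, `Consumer`, etc.) are hypothetical; this is an illustration of the claimed flow under stated assumptions, not any party's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Consumer:
    # Hypothetical consumer record: identity, selected transport
    # mechanism, and delivery interval from the received schedule.
    name: str
    transport: str          # e.g. "http" or "mqtt"
    interval_s: float       # delivery frequency, in seconds
    last_sent: float = 0.0

class TelemetryHub:
    """Sketch: consumers subscribe, are validated, and receive datasets
    over their selected transport at their scheduled frequency."""

    def __init__(self, validator):
        self.validator = validator      # callable: Consumer -> bool
        self.subscriptions = {}         # dataset name -> list[Consumer]

    def subscribe(self, dataset: str, consumer: Consumer) -> bool:
        # Validation step: only validated consumers may subscribe.
        if not self.validator(consumer):
            return False
        self.subscriptions.setdefault(dataset, []).append(consumer)
        return True

    def publish(self, dataset: str, record: dict, now: float, send) -> int:
        # Transmit to each subscribed consumer whose interval has elapsed.
        delivered = 0
        for c in self.subscriptions.get(dataset, []):
            if now - c.last_sent >= c.interval_s:
                send(c.transport, c.name, record)
                c.last_sent = now
                delivered += 1
        return delivered
```

In this sketch the per-consumer `interval_s` plays the role of the "received schedule" of claim 3, and the `send` callback stands in for whatever transport mechanism the consumer selected.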
Regarding claim 11, the method of claim 2 further comprising optimizing processing of the streaming data by: defining an epoch collating the streaming data into datasets; merging data tables storing datasets for the epoch for a plurality of producers to optimize storage space; and detecting identical data sent by the plurality of producers for the epoch and preventing storage of the identical data to optimize network bandwidth. (Examiner respectfully notes this claim depends on "streaming data" in claim 2; "streaming data" in claim 2 is part of a list modified by "or," indicating alternatives and necessitating that only one element be elected. Dubynskiy teaches performance data, and therefore teaches performance data OR "streaming data." Therefore "streaming data" was not elected in claim 2, and claim 11 is likewise not elected.)

Regarding claim 14, the claim inherits the same rejection as claims 1-3 above for reciting similar limitations in the form of a system claim. (Col 1 Lines 38-50; system)

Regarding claim 18, the claim inherits the same rejection as claim 11 above for reciting similar limitations in the form of a system claim. (Col 1 Lines 38-50; system)

Claim(s) 5, 8, and 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Dubynskiy et al. (US 11516308 B1) in view of Garner, IV et al. (US 10547498 B1), hereinafter Garner, and further in view of Gupta et al. (US 20220164468 A1).

Regarding claim 5, Dubynskiy in view of Garner teach the method of claim 2 as discussed above. Dubynskiy does not explicitly teach, but Garner teaches, further comprising defining privileges (access or no access) of users (subscribers) to receive the metric datasets (alerts; authenticating the subscriber according to profile and subscription information by the alert system), and utilizing an identity and access manager (IAM) service (username and password). (Col 5 Lines 15-26 and 55-63; While performing the method 200, the smart alert system 118 communicates data to and from various devices (e.g., IoT devices 112, the user device 116, etc.) via the network 114. As described in further detail below, the configuration includes creating a profile for the subscriber 136, pairing (i.e., configuring the smart alert system 112 to communicate with a given IoT device 112, including configuring communication protocols and authentication requirements) the smart alert system 118 with the various IoT devices 112 associated with the subscriber 136, and configuring the subscriber profile with alert preferences. The subscriber information relates to the subscriber 136. The subscriber information may include, for example, contact information relating to the subscriber 136 (e.g., address, phone number, e-mail address, etc.) and subscriber characteristics (e.g., age, physical capabilities, knowledge of device repair, etc.). In some arrangements, the subscriber information includes a user-defined username and password that can be used by the smart alert system 118 to later authenticate the subscriber 136.)

It would have been obvious to one of ordinary skill in the art prior to the effective filing of the application to modify the teachings of [Dubynskiy] to include [access rights to datasets] as is taught by [Garner]. The suggestion/motivation for doing so is to [manage event and device communication; Col 1, Background].

Dubynskiy in view of Garner do not explicitly teach based on role-based access control (RBAC) rules derived from a hierarchy of levels of users in an organization using the network. In an analogous art, Gupta teaches defining privileges of users to receive the metric datasets ([0042] The entitlement service 622 may derive datasets with entitlement contracts based on the querying with the entitlement filters. The entitlement filters may determine the operation and data access (e.g., read only, administrative privileges, etc.) the user can perform. For example, if the user is logged in with read-only permissions on a derived dataset, the entitlement filter determines the user can access, but not update or delete, the data. [0016] Various embodiments may provide one or more of the following technological improvements: 1) normalization between the tables provides the benefits of reusability and modularity while creating the balance with the performance; 2) attributes are tagged at any level in any table, providing a hybrid model between Role-Based Access Control (RBAC) and Attribute-Based Access Control (ABAC); 3) roles are platform-level or application specific to meet the business needs, where the same user coming in from different applications can have different roles/permissions for the same data; 4) roles are reused for multiple organizations/regions/modules within the same application (or platform for the platform-level roles); 5) a user at the same time represents multiple organizations/regions with independent multiple roles within the same application or platform; 6) a user has multiple roles within the same organization/region/module within an application or platform; 7) the entitlements for a user are combined using a union of all the entitlements while maintaining the application and organizational/regional boundaries; 8) a user has different roles (independently) in each entitlement (e.g., organization or region) the user is representing; 9) the organizations are divided into further smaller sections using attributes (e.g., geographic regions) and/or hierarchical structures (e.g., divisions/departments); 10) the roles are at the platform level and application specific; 11) the roles are module specific within the application; and 12) a user is assigned different roles across entitlements. If a user has roles that have conflicting access to a data domain, the user can receive access to all or individual data domains. The access may provide the union of the permission by each data-domain and action, while maintaining organization and region boundaries and granularity.)

based on role-based access control (RBAC) rules derived from a hierarchy of levels of users in an organization using the network ([0016], quoted in full above), and utilizing an identity and access manager (IAM) service (mapping above + [0042], quoted above).

It would have been obvious to one of ordinary skill in the art prior to the effective filing of the application to modify the teachings of [Dubynskiy in view of Garner] to include [using identity information to control access rights to data] as is taught by [Gupta]. The suggestion/motivation for doing so is to [control data access; [0001]-[0005]].

Regarding claim 8, Dubynskiy in view of Garner teach the method of claim 2 as discussed above. Dubynskiy in view of Garner do not explicitly teach further comprising: storing schema of telemetry data generated as metric datasets by telemetry producers in the network in a telemetry catalog; receiving, by a telemetry transmitter component, a list of consumers selected to receive metric dataset from a producer; first validating the consumers to allow them receive to the metric dataset; transmitting a validation of the consumer to a telemetry library of the producer; and sending the metric dataset from the producer to the consumers as part of the telemetry data.
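For orientation, the claim 8 flow recited above (catalog of schemas, consumer validation, validation handed back to the producer, then transmission) can be sketched in Python. All names (`TelemetryCatalog`, `Producer`, etc.) are hypothetical; this illustrates the claimed flow as recited, not any party's actual implementation.

```python
class TelemetryCatalog:
    """Hypothetical telemetry catalog: stores dataset schemas and
    validates which consumers may receive a metric dataset."""

    def __init__(self):
        self.schemas = {}        # dataset name -> schema (list of field names)
        self.allowed = set()     # consumers permitted to receive datasets

    def register_schema(self, dataset, schema):
        self.schemas[dataset] = schema

    def validate_consumers(self, consumers):
        # "First validating the consumers": keep only permitted ones.
        return [c for c in consumers if c in self.allowed]

class Producer:
    """Hypothetical producer whose telemetry library receives the
    validation result and then sends the metric dataset."""

    def __init__(self, catalog, dataset, schema):
        self.catalog = catalog
        self.dataset = dataset
        catalog.register_schema(dataset, schema)
        self.validated = []      # validation transmitted to the producer

    def receive_validation(self, consumers):
        self.validated = consumers

    def send(self, record, deliver):
        # Send the metric dataset only to validated consumers, after
        # checking the record against the schema stored in the catalog.
        schema = self.catalog.schemas[self.dataset]
        assert set(record) == set(schema), "record does not match catalog schema"
        for c in self.validated:
            deliver(c, record)
```

The schema check in `send` stands in for the stored schema's role in the claim; a real system would presumably enforce types and transport details as well.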
In an analogous art Gupta teaches storing schema of telemetry data generated as metric datasets by telemetry producers in the network in a telemetry catalog; (0017; enforcement on multiple types of data APIs and storage (e.g., database) technologies. In some cases, the service registry and data schema catalog serve as a bridge between enforcement on multiple types of data APIs and storage technologies. [0022] In some implementations, within a data-domain and action pair, granular control (e.g., based on the row and column level filters) is put on for each role independently (e.g., for each data-domain and action pair separately) using a column/attribute value (e.g., separate policies for read, create, update, and delete for the asset core data for each role independently, and additionally for the telemetry data, workorder data, and user data). In some cases, the granular control and scalability are multiple levels.; In some cases, the entitlement service 622 may leverage the service registry and dataset schema catalog to provide expandability and scalability while minimizing the changes in the entitlement service code (e.g., when a new dataset and/or data API/service is added to the platform). Applications 602 can connect to API gateway 604. Applications 602 and API gateway 604 can access a global directory 614. API gateway 604 can transmit a token to the entitlement service 622. The entitlement service 622 can include entitlement retrieval and management APIs 612 (e.g., connected to database 618 and/or database 620 with policy information, role information, service registry, and/or derived dataset(s)). [0041] In some implementations, the entitlement service 622 sends entitlement filters to the API gateway 604 to provide querying with the entitlement filters. The API gateway 604 can send the entitlement filters to the data APIs 606, which retrieves results from derived datasets 616. 
The data APIs 606 can use a derived dataset(s) schema catalog to interpret entitlement filters.) receiving, (0030; 0039; receiving client requests) by a telemetry transmitter component, a list of consumers (entitlement user table) selected to receive metric dataset from a producer (requesting access to derived dataset); (0041-0043; access restriction to derived dataset based on access rights and privileges; 0046-0047; [0044] FIG. 7 is a conceptual diagram illustrating an example 700 of an entitlement service deployment. In some implementations, administrators use the entitlement management APIs 706 to retrieve/manage/create/update other users and roles. Applications 702 can connect to API gateway 704. API gateway 704 can connect to entitlement management APIs 706 and entitlement APIs 708. The entitlement management APIs 706 can enforce logged-in admin's data entitlements for users via the entitlement APIs 708 (e.g., authorization, account selection, etc.). The entitlement management APIs 706 can access a directory 710, an entitlement user table 712, and an entitlement role table 714 for data.; 0035; In some cases, the entitlement service module 346 includes the entitlement management module 348 and the entitlement retrieval module 350. Enforcement can include comparing values, permissions, entitlements, rights to access, rights to user, or authentication credentials users attempting to access the information;) first validating the consumers to allow them receive to the metric dataset; (mapping above + [0039] For example, a platform clients module 504 (e.g., mobile application, clients, tablet applications, web application, etc.) can request a token from authentication module 506. Authentication module 506 can request and receive a validated token from API gateway 502. The platform clients module 504 can receive the token from the authentication module 506 and make a request to the API gateway 502 with the token. 
API gateway 502 can route the request to the appropriate API in the entitlement enforcement module 508. The data API 510 can retrieve data based on filters from the application data store 512. The data API 510 can send a response to the API gateway 502. API gateway 502 can cache the response. API gateway 502 can request entitlements from the entitlements API 514. The entitlements API 514 can retrieve entitlements based on rules and policies from the data stores 518. The API gateway 502 can communicate with the entitlements maintenance API 516 to maintain entitlements. The entitlements maintenance API 516 and the data stores 518 can read/update rules and policies. In some implementations, the entitlements maintenance API 516 and entitlements API 514 are part of the entitlement service 520. The entitlements maintenance API 516 can filter criteria for the request and send the filtered criteria response to the API gateway 502. The API gateway 502 can return the response to the client at the platform clients 504.) transmitting a validation (validation token) of the consumer (requesting client user) to a telemetry library (dataset) of the producer; (mapping above + [0039] For example, a platform clients module 504 (e.g., mobile application, clients, tablet applications, web application, etc.) can request a token from authentication module 506. Authentication module 506 can request and receive a validated token from API gateway 502. The platform clients module 504 can receive the token from the authentication module 506 and make a request to the API gateway 502 with the token. API gateway 502 can route the request to the appropriate API in the entitlement enforcement module 508. The data API 510 can retrieve data based on filters from the application data store 512. The data API 510 can send a response to the API gateway 502. API gateway 502 can cache the response. API gateway 502 can request entitlements from the entitlements API 514. 
The entitlements API 514 can retrieve entitlements based on rules and policies from the data stores 518. The API gateway 502 can communicate with the entitlements maintenance API 516 to maintain entitlements. The entitlements maintenance API 516 and the data stores 518 can read/update rules and policies. In some implementations, the entitlements maintenance API 516 and entitlements API 514 are part of the entitlement service 520. The entitlements maintenance API 516 can filter criteria for the request and send the filtered criteria response to the API gateway 502. The API gateway 502 can return the response to the client at the platform clients 504; see also 0040-0047) and sending the metric dataset from the producer to the consumers as part of the telemetry data. (mapping above; providing a response which is a dataset based on entitlement access rights, and the validation token) It would have been obvious to one of ordinary skill in the art prior to the effective filing of the application to modify the teachings of [Dubynskiy in view of Garner] to include [validating data recipients and using a data schema for handling data] as is taught by [Gupta]. The suggestion/motivation for doing so is to [control data access [0001-0005]]. Regarding claim 16, the claim inherits the same rejection as claim 5 above for reciting similar limitations in the form of a system claim (Col 1 Lines 38-50; system) Claim(s) 6 is/are rejected under 35 U.S.C. 103 as being unpatentable over Dubynskiy et al. (US 11516308 B1) in view of Garner, IV et al. (US 10547498 B1) hereinafter Garner, in view of Gupta et al.
(US 20220164468 A1), in view of Ko (US 20200244740 A1). Regarding claim 6, Dubynskiy in view of Garner teach the method of claim 2 as disclosed above. Dubynskiy in view of Garner do not explicitly teach further comprising: storing schema of telemetry data generated as metric datasets by telemetry producers in the network in a telemetry catalog; receiving, by a telemetry transmitter component, a list of other producers in addition to an original producer of a metric dataset from the original producer; first validating the other producers to allow them to store and transmit the metric dataset; transmitting a validation to a telemetry library of the original and other producers; and accepting the metric dataset from the original and other producers for storage and transmission to telemetry consumers. In an analogous art Gupta teaches further comprising: storing schema of telemetry data generated as metric datasets by telemetry producers in the network in a telemetry catalog; (0017; enforcement on multiple types of data APIs and storage (e.g., database) technologies. In some cases, the service registry and data schema catalog serve as a bridge between enforcement on multiple types of data APIs and storage technologies. [0022] In some implementations, within a data-domain and action pair, granular control (e.g., based on the row and column level filters) is put on for each role independently (e.g., for each data-domain and action pair separately) using a column/attribute value (e.g., separate policies for read, create, update, and delete for the asset core data for each role independently, and additionally for the telemetry data, workorder data, and user data).
In some cases, the granular control and scalability are multiple levels.; In some cases, the entitlement service 622 may leverage the service registry and dataset schema catalog to provide expandability and scalability while minimizing the changes in the entitlement service code (e.g., when a new dataset and/or data API/service is added to the platform). Applications 602 can connect to API gateway 604. Applications 602 and API gateway 604 can access a global directory 614. API gateway 604 can transmit a token to the entitlement service 622. The entitlement service 622 can include entitlement retrieval and management APIs 612 (e.g., connected to database 618 and/or database 620 with policy information, role information, service registry, and/or derived dataset(s)). [0041] In some implementations, the entitlement service 622 sends entitlement filters to the API gateway 604 to provide querying with the entitlement filters. The API gateway 604 can send the entitlement filters to the data APIs 606, which retrieves results from derived datasets 616. The data APIs 606 can use a derived dataset(s) schema catalog to interpret entitlement filters.) It would have been obvious to one of ordinary skill in the art prior to the effective filing of the application to modify the teachings of [Dubynskiy in view of Garner] to include [storing a schema in a catalog used for data operations] as is taught by [Gupta]. The suggestion/motivation for doing so is to [control data access [0001-0005]]. 
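For orientation, the mechanism Gupta is cited for (a dataset schema catalog that an entitlement service consults to interpret role-based filters before data is returned, per paragraphs [0017], [0022], and [0041]) can be sketched roughly as follows. All class and variable names here are hypothetical illustrations of the cited description, not code from any reference:

```python
# Illustrative sketch: a schema catalog plus role-based entitlement filters.
# Hypothetical names throughout; modeled loosely on Gupta [0017], [0022], [0041].
from dataclasses import dataclass, field

@dataclass
class TelemetryCatalog:
    """Maps dataset name -> schema (column name -> type)."""
    schemas: dict = field(default_factory=dict)

    def register(self, dataset, schema):
        self.schemas[dataset] = schema

    def validate_filter(self, dataset, column):
        # A filter is only interpretable if the column exists in the schema.
        return column in self.schemas.get(dataset, {})

@dataclass
class EntitlementService:
    """Maps (role, dataset) -> a column-level filter (granular control per role)."""
    policies: dict = field(default_factory=dict)

    def grant(self, role, dataset, column, allowed_values):
        self.policies[(role, dataset)] = (column, set(allowed_values))

    def filter_rows(self, catalog, role, dataset, rows):
        policy = self.policies.get((role, dataset))
        if policy is None:
            return []  # no entitlement -> no data returned
        column, allowed = policy
        if not catalog.validate_filter(dataset, column):
            return []  # filter cannot be interpreted against the schema
        return [r for r in rows if r.get(column) in allowed]

catalog = TelemetryCatalog()
catalog.register("telemetry", {"node": str, "metric": str, "value": float})

entitlements = EntitlementService()
entitlements.grant("operator", "telemetry", "node", {"node-1"})

rows = [{"node": "node-1", "metric": "cpu", "value": 0.7},
        {"node": "node-2", "metric": "cpu", "value": 0.9}]
visible = entitlements.filter_rows(catalog, "operator", "telemetry", rows)
print(len(visible))  # only the node-1 row passes the entitlement filter
```

The point of the sketch is the bridge role the catalog plays: the entitlement service does not need to know the storage technology, only that the filter column is defined in the registered schema.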
Dubynskiy in view of Garner in view of Gupta do not explicitly teach further comprising: receiving, by a telemetry transmitter component, a list of other producers in addition to an original producer of a metric dataset from the original producer; first validating the other producers to allow them to store and transmit the metric dataset; transmitting a validation to a telemetry library of the original and other producers; and accepting the metric dataset from the original and other producers for storage and transmission to telemetry consumers. In an analogous art Ko teaches receiving, by a telemetry transmitter component, a list of other producers in addition to an original producer of a metric dataset from the original producer; (The system discovers and establishes communication as well as authenticates the IoT devices producing the data; [0032] The wireless access device is adapted to search for the IoT devices 110 present in the communication coverage according to the corresponding short-range wireless communication standard, establish a connection with the discovered IoT devices 110, and then perform a connection setup and data transmission and reception with the IoT platform server 130. [0078] The registration unit 630 may perform registration and authentication of IoT devices 110, and register and manage specifications of common properties or features of common feature IoT devices 110.) first validating the other producers to allow them to store and transmit the metric dataset; ([0032] The wireless access device is adapted to search for the IoT devices 110 present in the communication coverage according to the corresponding short-range wireless communication standard, establish a connection with the discovered IoT devices 110, and then perform a connection setup and data transmission and reception with the IoT platform server 130.
[0078] The registration unit 630 may perform registration and authentication of IoT devices 110, and register and manage specifications of common properties or features of common feature IoT devices 110. [0035] The IoT platform server 130 is a platform apparatus for providing an IoT-based service. The IoT platform server 130 provides functions to register and authenticate the IoT device 110, collect and manage data of the IoT device 110, and control the IoT device 110 among other functions. Developers of IoT services may utilize such functions provided by the IoT platform server 130 to develop various IoT-based services. In addition, by way of the IoT platform server 130, the user may manage and monitor the connection and operation control of the IoT device 110 for the Internet of Things services. [0034] The IoT device 110 is a device having a sensor function and a communication function and connected to a communication network to transmit and receive data. The IoT device 110 may be implemented as various embedded systems such as home appliances, mobile equipment, and wearable computers. The IoT device 110 connects to the IoT platform server 130 through a communication network and transmits data such as status information or sensing information to the IoT platform server 130 to perform a predetermined function at the control command from the IoT platform server 130. In this case, the IoT device 110 may access a wireless access device (not shown) that performs short-range wireless communication to establish a connection and then access the IoT platform server 130 through the wireless access device.) 
transmitting a validation to a telemetry library of the original and other producers; ([0046] For the purpose of security, the control unit 320 checks the authority of the IoT device 110 and the application server 150 for the authentication and request, and according to the confirmed authority, the control unit 320 processes requests or data sent from the IoT device 110 and the application server 150. [0035] The IoT platform server 130 is a platform apparatus for providing an IoT-based service. The IoT platform server 130 provides functions to register and authenticate the IoT device 110, collect and manage data of the IoT device 110, and control the IoT device 110 among other functions. Developers of IoT services may utilize such functions provided by the IoT platform server 130 to develop various IoT-based services. In addition, by way of the IoT platform server 130, the user may manage and monitor the connection and operation control of the IoT device 110 for the Internet of Things services.) and accepting the metric dataset from the original and other producers for storage and transmission to telemetry consumers. (mapping above + [0039] The storage unit 210 stores programs and data necessary for the operation of the IoT device 110. The storage unit 210 may store data such as sensing information or state information collected by the IoT device 110 at predetermined intervals. In addition, the storage unit 210 may store a program for performing processing in the IoT device 110, such as the collection and transmission of sensing information or state information.) It would have been obvious to one of ordinary skill in the art prior to the effective filing of the application to modify the teachings of [Dubynskiy in view of Garner in view of Gupta] to include [validating IoT devices producing data and allowing the system to accept data based on validation] as is taught by [Ko].
The suggestion/motivation for doing so is to [improve functionality of IoT platforms [0001-0004]]. Claim(s) 7 and 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Dubynskiy et al. (US 11516308 B1) in view of Garner, IV et al. (US 10547498 B1) hereinafter Garner, in view of Bhatia et al. (US 20220058069 A1), and further in view of Paez et al. (US 20230095852 A1). Regarding claim 7, Dubynskiy in view of Garner teach the method of claim 2 as disclosed above. Dubynskiy in view of Garner do not explicitly teach further comprising: storing the telemetry data as generated by telemetry producers in the network in a datastore as metric datasets having a defined schema; receiving a new metric schema for a new telemetry metric generated by a producer; validating the new metric schema in a schema validator component of a telemetry transmitter; storing the validated new metric schema in a telemetry catalog; and storing a new telemetry metric dataset for the new telemetry metric in the datastore for transmission to appropriate telemetry consumers. In an analogous art Bhatia teaches storing the telemetry data as generated by telemetry producers (0030; IoT data and measurements) in the network in a datastore (database) as metric datasets having a defined schema (IoT data storage schema); (Example Database Schemas for Storing IOT Data [0088] The present disclosure provides a schema definition process that can flexibly allow new tables to be added when new types of aggregates are to be stored in a database (e.g., the database 386 of FIG. 3), and for table formats to be modified when changes to aggregate definition (e.g., for an indicator group) are received. As will be further described, table schemas can be defined in terms of indicator groups. Thus, since aggregates can be defined based on indicator groups, the definition of a table schema can correlate to the definition used for an aggregate to be stored in a table according to the table schema. [0012] FIG.
5 illustrates various table schemas that may be used to store data from IOT devices, or produced therefrom; [0025] For many applications then, two processes are needed to enable analytical queries or other data processing. First, IOT data from IOT devices needs to be ingested and converted to a format where it can be combined with master data or other structured data. Second, such master data or other structured data needs to be obtained. These processes can be complicated for a number of reasons, including when a system that enables analytical applications for IOT data is useable with a variety of IOT devices/IOT platforms, may be used by a number of different entities (e.g., tenants in a multitenant database or other multitenant application architecture), and may be associated with master data or other descriptive data that may come from a number of different data sources, which may have different data models or schemas, and where the original data models or schemas are not optimized for analytical applications.) receiving a new metric schema (change to aggregate definition of a schema) for a new telemetry metric generated by a producer; (0037-0038; updating indicator groups; [0088] The present disclosure provides a schema definition process that can flexibly allow new tables to be added when new types of aggregates are to be stored in a database (e.g., the database 386 of FIG. 3), and for table formats to be modified when changes to aggregate definition (e.g., for an indicator group) are received. As will be further described, table schemas can be defined in terms of indicator groups. Thus, since aggregates can be defined based on indicator groups, the definition of a table schema can correlate to the definition used for an aggregate to be stored in a table according to the table schema. [0089] Having table schema definitions correlate with aggregate definitions can have a number of advantages.
One advantage is that a table schema can be more easily adapted when a change to an aggregate definition (e.g., an indicator group associated with the definition, such as adding sensors to an indicator group or removing sensors from an indicator group) is made. Since table definitions are consistent with aggregate definitions, new tables can be more easily created when new aggregates (e.g., for new indicator groups) are created. [0099] In some cases, if a table is altered to include a new column, the table only includes data for newly written data. That is, for example, if a new column is added for a table having the schema 550, data values are not added to the table for existing records in the table. In other cases, NULL values can be added to existing table records. In yet further cases, at least if the relevant data is available, such as in a hyperscale computing system, a database writer or other component can send aggregation requests or data requests to the hyperscale computing system. If the relevant aggregate was already calculated, it can be retrieved by the database writer and written to the table. If the relevant aggregate was not calculated, and the individual sensor data is available, the aggregate value can be calculated and provided to a database writer. For example, once the aggregation has been completed, a message that a new aggregate is available can be placed in a queue for retrieval and processing by a database writer, as described in Example 3.) and storing a new telemetry metric dataset for the new telemetry metric in the datastore for transmission to appropriate telemetry consumers. (mapping above + [0050] In at least some implementations, data from the IOT devices 102 is sent to an IOT or cloud service 112. The IOT or cloud service 112 is typically configured to receive and store data from IOT devices. The IOT or cloud service 112 can send IOT data to a streaming service 114.
The streaming service 114 can be in a published-subscriber relationship with the IOT or cloud service 112. The streaming service 114 can include software such as KAFKA, available from the Apache Software Foundation (Forest Hill, Md.). Data can be stored in the streaming service 114 in one or more containers 116 (which can be containers configured to store timeseries data). Typically, the containers 116 are associated with a particular topic, and data from particular IOT devices 102 (or sensors thereof) is routed to the appropriate container. [0095] On the other hand, by providing different columns for each sensor, data for each sensor in an indicator group can be stored in its native datatype in tables having the schema 550, making the data more useable. In addition, data maintained in the schema 550 can be more highly compressible, since a given column 554, 556, 558 may be expected to have a smaller domain of values/more frequently repeating values. This compressibility can be particularly useful when data is stored in a column-store format (e.g., data is stored by column for multiple records, rather than storing all columns for a single record together).) It would have been obvious to one of ordinary skill in the art prior to the effective filing of the application to modify the teachings of [Dubynskiy in view of Garner] to include [receiving a schema change for IoT data and storing new data according to the schema] as is taught by [Bhatia]. The suggestion/motivation for doing so is to [improve IoT data analytics [0002]]. Dubynskiy in view of Garner in view of Bhatia do not explicitly teach validating the new metric schema in a schema validator component of a telemetry transmitter; storing the validated new metric schema in a telemetry catalog; In an analogous art Paez teaches validating the new metric schema in a schema validator component of a telemetry transmitter; ([0067] FIG.
8 is a high-level flowchart illustrating various methods and techniques that implement registering a version of a schema for translation, according to some embodiments. As indicated at 810, a request may be received to register a version of a schema applicable to a data object, in some embodiments. For example, an interface of a data management system, like data management system 110 or 234, may support registration requests (e.g., via API, graphical user interface, command line interface, and so on), in order to provide a new version of a schema (e.g., specified as a JSON or other script, programming code, or language) with an associated data object (e.g., a document or event stream). In some embodiments, the schema may include or link to instructions (e.g., scripts, programming code, or language) for translating between the version of the schema being registered and one or more prior versions of the schema (e.g., describing what data field was added, changed, removed, etc.). [0068] The registration request may be rejected as indicated at 812, in some embodiments, if the request or updated schema fails a validation technique, such as analysis indicating that the version of the schema fails to conform to various stylistic or other constraints on schemas (e.g., using invalid data types). As indicated by the negative exit from 812, a response indicating that the registered version of the schema is invalid may be returned. [0069] As indicated at 820, the version of the schema may be added to a registry for schemas for data objects, in some embodiments. For example, a database or other data storage system may store a schema as a document, file, or other object. A link, mapping, or other association may be updated to identify which data object(s) (e.g., event stream or document) the schema is applicable to (e.g., a version number, a schema identifier and data object identifiers).)
storing the validated new metric schema in a telemetry catalog; ([0069] As indicated at 820, the version of the schema may be added to a registry for schemas for data objects, in some embodiments. For example, a database or other data storage system may store a schema as a document, file, or other object. A link, mapping, or other association may be updated to identify which data object(s) (e.g., event stream or document) the schema is applicable to (e.g., a version number, a schema identifier and data object identifiers).) It would have been obvious to one of ordinary skill in the art prior to the effective filing of the application to modify the teachings of [Dubynskiy in view of Garner in view of Bhatia] to include [validating and registering a new database schema] as is taught by [Paez]. The suggestion/motivation for doing so is to [improve data exchange in a network [0001]]. Regarding claim 19, the claim inherits the same rejection as claim 7 above for reciting similar limitations in the form of a system claim (Col 1 Lines 38-50; system) Claim(s) 9 is/are rejected under 35 U.S.C. 103 as being unpatentable over Dubynskiy et al. (US 11516308 B1) in view of Garner, IV et al. (US 10547498 B1) hereinafter Garner, and further in view of Tayeb et al. (US 20220417117 A1) Regarding claim 9, Dubynskiy in view of Garner teach the method of claim 2 as disclosed above. Dubynskiy further teaches further comprising: storing the telemetry data as generated by telemetry producers in the network in a datastore as metric datasets having a defined schema (format); (Col 6 Line 62 to Col 7 Line 4; The data ingestion layer 205 may store the received telemetry data in a telemetry database 305. The data ingestion layer 205 may store the telemetry data in a format received by the telemetry processing service 110 or may be configured to reformat the telemetry data to another format for storage in the telemetry database 305.
The data ingestion layer 205 may be configured to reformat the various types of telemetry data received by the telemetry processing service 110 to a standard format to facilitate processing and analysis of the telemetry data.) Dubynskiy does not explicitly teach but Garner teaches defining a short-term metric as a metric to be collected for a short duration; (Col 5 Line 64 to Col 6 Line 12; device information includes, for example, device type, performance measures or alert factors, measurement ranges for performance measures or alert factors (e.g., comprising a lower bound value and an upper bound value for, for example, heart rate or temperature, monitoring time period information, data measuring tolerances, etc.), device component information, and alert configuration information.) Dubynskiy in view of Garner do not explicitly teach registering the short-term metric with a metric schema, defined duration, and condition for triggering collection of the metric; initiating, upon detection of the condition, collection of the short-term metric; and stopping collection of the metric at the end of the defined duration. In an analogous art Tayeb teaches registering the short-term metric with a metric schema (telemetry configuration), defined duration (configurable time period), and condition for triggering collection of the metric (triggering condition for collection) ([0031] FIGS. 2-5 show example operation of the TRMAP. Each of FIGS. 2-5 includes a system 200 that includes multiple collection agents 210a, 210b, 210c, and 210r (collectively referred to as “agents 210” or “agent 210”). The collection agents 210 (also referred to as “access agents 210”, “telemetry agents 210”, or the like) may be the same or similar as the collector 110 of FIG. 1, or may be a combination of the collector 110 and the monitoring function 120 of FIG. 1.
The agents 210 include respective collection configurations 211a, 211b, 211c, 211r (collectively referred to as “collection configurations 211” “collection configuration 211”, and/or the like) that define or otherwise specify various data/metrics collection parameters and/or characteristics, and in some implementations, may specify the data/metrics that an agent 210 is willing to share with other agents 210 and/or the data/metrics of one or more data sources from which an agent 210 is willing to share. As examples, a collection configuration 211 (also referred to as a “telemetry configuration 211”, “manifest 211”, or the like) can specify one or more of a set of data types and/or metrics to be collected, a set of data sources from which to collect or otherwise access the data/metrics; a set of time periods or intervals when data collection is to take place (or time periods/intervals when data collection process(es) are triggered or initiated); triggers, conditions, and/or events for starting or initiating data collection process(es); telemetry system information and/or telemetry pipeline information; sharing permissions; local and/or remote data storage locations for storing collected data; and/or other like parameters and/or data. The manifests 211 and/or the various messages (e.g., messages 204, 311, 411, and the like) may be embodied as any suitable information object and/or with any suitable data format or data structure such as any of those discussed herein) initiating, upon detection of the condition, collection of the short-term metric; ([0031] FIGS. 2-5 show example operation of the TRMAP. Each of FIGS. 2-5 includes a system 200 that includes multiple collection agents 210a, 210b, 210c, and 210r (collectively referred to as “agents 210” or “agent 210”). The collection agents 210 (also referred to as “access agents 210”, “telemetry agents 210”, or the like) may be the same or similar as the collector 110 of FIG. 
1, or may be a combination of the collector 110 and the monitoring function 120 of FIG. 1. The agents 210 include respective collection configurations 211a, 211b, 211c, 211r (collectively referred to as “collection configurations 211” “collection configuration 211”, and/or the like) that define or otherwise specify various data/metrics collection parameters and/or characteristics, and in some implementations, may specify the data/metrics that an agent 210 is willing to share with other agents 210 and/or the data/metrics of one or more data sources from which an agent 210 is willing to share. As examples, a collection configuration 211 (also referred to as a “telemetry configuration 211”, “manifest 211”, or the like) can specify one or more of a set of data types and/or metrics to be collected, a set of data sources from which to collect or otherwise access the data/metrics; a set of time periods or intervals when data collection is to take place (or time periods/intervals when data collection process(es) are triggered or initiated); triggers, conditions, and/or events for starting or initiating data collection process(es); telemetry system information and/or telemetry pipeline information; sharing permissions; local and/or remote data storage locations for storing collected data; and/or other like parameters and/or data.) and stopping collection of the metric at the end of the defined duration. ([0031] FIGS. 2-5 show example operation of the TRMAP. Each of FIGS. 2-5 includes a system 200 that includes multiple collection agents 210a, 210b, 210c, and 210r (collectively referred to as “agents 210” or “agent 210”). The collection agents 210 (also referred to as “access agents 210”, “telemetry agents 210”, or the like) may be the same or similar as the collector 110 of FIG. 1, or may be a combination of the collector 110 and the monitoring function 120 of FIG. 1. 
The agents 210 include respective collection configurations 211a, 211b, 211c, 211r (collectively referred to as “collection configurations 211” “collection configuration 211”, and/or the like) that define or otherwise specify various data/metrics collection parameters and/or characteristics, and in some implementations, may specify the data/metrics that an agent 210 is willing to share with other agents 210 and/or the data/metrics of one or more data sources from which an agent 210 is willing to share. As examples, a collection configuration 211 (also referred to as a “telemetry configuration 211”, “manifest 211”, or the like) can specify one or more of a set of data types and/or metrics to be collected, a set of data sources from which to collect or otherwise access the data/metrics; a set of time periods or intervals when data collection is to take place (or time periods/intervals when data collection process(es) are triggered or initiated); triggers, conditions, and/or events for starting or initiating data collection process(es); telemetry system information and/or telemetry pipeline information; sharing permissions; local and/or remote data storage locations for storing collected data; and/or other like parameters and/or data.) It would have been obvious to one of ordinary skill in the art prior to the effective filing of the application to modify the teachings of [Dubynskiy in view of Garner] to include [registering data schema, definitions, and condition and collecting data during the period] as is taught by [Tayeb]. The suggestion/motivation for doing so is to [improve telemetry collection and analysis of remote points [0001-0002]]. Claim(s) 10, 12, and 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Dubynskiy et al. (US 11516308 B1) in view of Garner, IV et al. (US 10547498 B1) hereinafter Garner, and further in view of Chao et al.
(US 20250202973 A1). Regarding claim 10, Dubynskiy in view of Garner teach the method of claim 2 as disclosed above. Dubynskiy in view of Garner do not explicitly teach wherein the cluster network comprises a Kubernetes-based cluster network having a plurality of nodes and pods by processing golden signal telemetry data, the method comprising: first registering, with a telemetry transmitter, a golden signal comprising telemetry data related to one of traffic, latency, errors, and saturation in storage system of the network by naming a registering pod, a probe name, an endpoint, and a threshold value; second registering, with a self-healing service, a probe for the golden signal with the telemetry transmitter; monitoring, by the self-healing service, an activity related to the golden signal in comparison with a respective threshold value; and calling the registered probe when an activity value exceeds the respective threshold value as an indication of a problem condition. In an analogous art Chao teaches wherein the cluster network comprises a Kubernetes-based cluster network having a plurality of nodes and pods (0058; A Kubernetes deployment is referred to as a “cluster,” which consists of multiple nodes that run containers. Each node can be a physical or virtual machine and runs pods: groups of one or more containers.) by processing golden signal telemetry data, ([0060] Kubernetes architecture consists of two main components: the control plane and nodes. The control plane is responsible for managing the state of the cluster. It includes components like the API server, scheduler, and controller manager. Nodes are the machines that run the applications. Each node has a “kubelet,” which communicates with the control plane to manage the lifecycle of containers on that node. [0061] The kubelet is a service that ensures containers are running in a pod as expected. The kube-scheduler assigns pods to nodes based on resource availability.
The kube-proxy manages network routing for services within the cluster. [0231] In an example, a tracer may export event traces to an application performance management platform (e.g., Honeycomb). In an example, the tracer may be implemented as a wrapper around an observability framework such as an Open Telemetry client.) the method comprising: first registering, with a telemetry transmitter, a golden signal comprising telemetry data related to one of traffic, ([0059] Kubernetes continuously monitors the health of applications and automatically restarts or replaces containers that fail or become unresponsive. It provides built-in solutions for load balancing traffic across containers and discovering services within the cluster. [0247] Kubernetes supports gRPC health checking, allowing users to define liveness and readiness probes for gRPC servers running in pods. This ensures that services are healthy and ready to handle requests. Newer versions of ingress controllers, like Kong Ingress Controller 2.9, support exposing gRPC services using the Gateway API, allowing for streamlined management of gRPC traffic within Kubernetes clusters. [0173] In the particular polygraph shown in FIG. 2E, nodes generally correspond to workers, and edges correspond to communications the workers engage in (with connection activity being the behavior modeled in polygraph 235). Another example polygraph could model other behavior, such as application launching. The communications graphed in FIG. 2E include traffic entering the datacenter, traffic exiting the datacenter, and traffic that stays wholly within the datacenter (e.g., traffic between workers).
) latency, (Examiner notes this is an “at least one of” limitation, requiring only one element be selected for mapping; examiner notes this element is not being selected) errors, (Examiner notes this is an “at least one of” limitation, requiring only one element be selected for mapping; examiner notes this element is not being selected) and saturation in storage system of the network by naming a registering pod, a probe name, an endpoint, and a threshold value; (Examiner notes this is an “at least one of” limitation, requiring only one element be selected for mapping; examiner notes this element is not being selected) second registering, with a self-healing service (Kubernetes health checking probes and built-in health solutions), a probe for the golden signal with the telemetry transmitter; ([0059] Kubernetes continuously monitors the health of applications and automatically restarts or replaces containers that fail or become unresponsive. It provides built-in solutions for load balancing traffic across containers and discovering services within the cluster. [0247] Kubernetes allows the creation of custom resources using custom resource definitions (CRDs), which can be used to define specific configurations for gRPC services. Kubernetes supports gRPC health checking, allowing users to define liveness and readiness probes for gRPC servers running in pods. This ensures that services are healthy and ready to handle requests. [0192] The service etcd2 can be used by microservice instances to discover how many peer instances are running and used for calculating a hash-based scheme for workload distribution. Microservices may be configured to publish various health/status metrics to either an SQS queue, or etcd2, as applicable. In some examples, Amazon DynamoDB can be used for state management.)
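The claim elements mapped above describe registering a golden signal with a telemetry transmitter by naming a registering pod, a probe name, an endpoint, and a threshold value, then flagging a problem condition when activity exceeds that threshold. The following is a minimal illustrative sketch of that flow; every class, field, and method name here is a hypothetical assumption for exposition, not code from the application or the cited references.

```python
from dataclasses import dataclass

# The four "golden signals" recited in the claim.
GOLDEN_SIGNALS = {"traffic", "latency", "errors", "saturation"}

@dataclass
class GoldenSignalRegistration:
    signal: str        # one of the four golden signals
    pod: str           # the registering pod
    probe_name: str    # probe to call on a threshold breach
    endpoint: str      # where the probe is exposed
    threshold: float   # value indicating a problem condition

class TelemetryTransmitter:
    """Hypothetical 'first registering' target for golden-signal probes."""

    def __init__(self):
        self.registrations = {}

    def register(self, reg: GoldenSignalRegistration) -> None:
        if reg.signal not in GOLDEN_SIGNALS:
            raise ValueError(f"unknown golden signal: {reg.signal}")
        self.registrations[reg.probe_name] = reg

    def check(self, probe_name: str, observed: float) -> bool:
        # "Calling the registered probe" is modeled as returning True
        # when the observed activity value exceeds the threshold.
        reg = self.registrations[probe_name]
        return observed > reg.threshold

tx = TelemetryTransmitter()
tx.register(GoldenSignalRegistration(
    signal="traffic", pod="storage-pod-0",
    probe_name="traffic-probe", endpoint="/healthz/traffic",
    threshold=0.9))
print(tx.check("traffic-probe", 0.95))  # True: problem condition
```

In an actual Kubernetes deployment, the threshold-breach callback would more likely surface through a liveness or readiness probe as in the Chao passages above; this sketch only isolates the registration-and-compare logic recited in the claim.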
monitoring, by the self-healing service ([0059] Kubernetes continuously monitors the health of applications and automatically restarts or replaces containers that fail or become unresponsive. It provides built-in solutions for load balancing traffic across containers and discovering services within the cluster.) an activity related to the golden signal (traffic; see mapping above) in comparison with a respective threshold value; (Mapping above + [0206] Alert generator 184 is a microservice that may be responsible for generating alerts. Alert generator 158 may examine observations (e.g., produced by GBM 168) in aggregate, deduplicate them, and score them. Alerts may be generated for observations with a score exceeding a threshold. [0070] In some embodiments, the job controller may be configured to dynamically determine whether to run a job as an ephemeral or persistent job. The determination may be made based on any suitable criteria, such as attributes of jobs (e.g., job type, latency requirement, frequency of use, etc.) and/or as thresholds associated with resource usage and/or state of the managed service and/or jobs being processed by the managed service.) and calling the registered probe (health checking probes) when an activity value exceeds the respective threshold value (observed traffic threshold) as an indication of a problem condition (health condition). (Mapping above + [0247] Kubernetes allows the creation of custom resources using custom resource definitions (CRDs), which can be used to define specific configurations for gRPC services. Kubernetes supports gRPC health checking, allowing users to define liveness and readiness probes for gRPC servers running in pods. This ensures that services are healthy and ready to handle requests. Newer versions of ingress controllers, like Kong Ingress Controller 2.9, support exposing gRPC services using the Gateway API, allowing for streamlined management of gRPC traffic within Kubernetes clusters.
Overall, integrating gRPC with Kubernetes enhances the performance and scalability of micro-services by leveraging Kubernetes' orchestration capabilities along with gRPC's efficient communication protocols.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of [Dubynskiy in view of Garner] to include [identifying telemetry traffic data, applying a solution automatically based on monitoring and comparing data] as is taught by [Chao]. The suggestion/motivation for doing so is to [improve monitoring of compute environments [0002]]. Regarding claim 12, Dubynskiy in view of Garner teach the method of claim 2, as discussed above. Dubynskiy in view of Garner do not explicitly teach wherein the cluster network implements an Open Telemetry (OTEL) protocol, and comprises a collector receiving the telemetry data through a remote procedure call (RPC) process, and further wherein cluster network includes nodes each containing a plurality of pods performing network functions and generating the telemetry data for transmission to the users. In an analogous art, Chao teaches wherein the cluster network implements an Open Telemetry (OTEL) protocol, ([0060] Kubernetes architecture consists of two main components: the control plane and nodes. The control plane is responsible for managing the state of the cluster. It includes components like the API server, scheduler, and controller manager. Nodes are the machines that run the applications. Each node has a “kubelet,” which communicates with the control plane to manage the lifecycle of containers on that node. [0061] The kubelet is a service that ensures containers are running in a pod as expected. The kube-scheduler assigns pods to nodes based on resource availability. The kube-proxy manages network routing for services within the cluster.
[0231] In an example, a tracer may export event traces to an application performance management platform (e.g., Honeycomb). In an example, the tracer may be implemented as a wrapper around an observability framework such as an Open Telemetry client.) and comprises a collector receiving the telemetry data through a remote procedure call (RPC) process, ([0245] In an example, ephemeral job controller 702 exposes more ergonomic APIs (e.g., via gRPC interface 704) for creating and querying the state of jobs. Ephemeral job controller 702 may also impose validations on job specifications and mutate jobs to add important metadata properties (for tracking and performance purposes) via request validation module 706, for example. [0246] In Kubernetes, gRPC refers to the integration and use of the gRPC framework within Kubernetes environments to facilitate high-performance, remote procedure calls between services. In general, gRPC is a high-performance, open-source RPC (Remote Procedure Call) framework developed by Google. It is designed to make it easier for developers to build distributed systems and micro-services by abstracting the complexities of network communication. It supports multiple programming languages and uses Protocol Buffers as its interface definition language, which allows for efficient serialization and deserialization of data.) and further wherein cluster (mapping above) network includes nodes each containing a plurality of pods ([0058]: A Kubernetes deployment is referred to as a “cluster,” which consists of multiple nodes that run containers. Each node can be a physical or virtual machine and runs pods (groups of one or more containers).
) performing network functions ([0192]: microservices) and generating the telemetry data (160-168; data related to the process) ([0051] In some embodiments, a data platform system may be configured to monitor various cloud environments provided by various cloud services providers, including but not limited to Amazon Web Services (AWS), Google Cloud Platform (GCP), Azure available from Microsoft, and Oracle Cloud Infrastructure (OCI) cloud environments. For example, the data platform system may provide resource management and/or cloud security posture management for one or more such cloud environments. Such monitoring of a cloud environment may include collecting data from the cloud environment, such as configuration data (e.g., resource configuration metadata), using the data to identify vulnerabilities and/or compliance statuses of resources (e.g., software components such as software applications, libraries, etc.) in the cloud environment, and performing one or more operations based on the identified vulnerabilities and/or compliance statuses (e.g., providing output, generating alerts, remediating, etc.). [0052] In some embodiments, the data platform system may be deployed to and run in a compute environment (e.g., a cloud environment such as AWS, GCP, Azure, OCI, etc.) where operations/workloads performed by the data platform system are executed by computing resources of the compute environment. At least some such operations/workloads may be in the form of jobs that are executed by computing resources of the compute environment. Jobs may be executed as part of any aspect of the data platform system, such as any service provided by the data platform system.
For example, the data platform system may run jobs as part of generating a logical graph model based on data collected from a compute environment being monitored by the data platform system, such as jobs that determine and/or check edge dependencies of nodes of the logical graph (e.g., connection matching, edge matching, etc.) and/or any other jobs associated with generating a logical graph.) for transmission to the users. (Fig 2F-2N; 0161-0162; Fig 2A 205; [0091] User interface resources 120 may be configured to perform one or more user interface operations, examples of which are described herein. For example, user interface resources 120 may be configured to present one or more results of the data processing performed by data processing resources 114 to one or more external entities (e.g., computing device 118 and/or one or more users), as illustrated by arrow 128. As illustrated by arrow 128, user interface resources 120 may access data in data store 122 to perform the one or more user interface operations.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of [Dubynskiy in view of Garner] to include [using OTEL and RPC for managing computing nodes] as is taught by [Chao]. The suggestion/motivation for doing so is to [improve monitoring of compute environments [0002]]. Regarding claim 13, Dubynskiy in view of Garner in view of Chao teach the method of claim 12, as discussed above. Dubynskiy does not explicitly teach, but Garner teaches, and generating the telemetry data for transmission to the subscribing consumers. (Col 5 Lines 28-44; A command to enter configuration mode is received at 202. The smart alert system 118 receives the command from the user device 116 via the network 114. The command is initiated by the subscriber 136. The command received at 202 instructs the smart alert system 118 to enter a configuration mode.
For example, the command may be received in response to a request from the subscriber 136 to register as a user of the smart alert system 118 or to update an existing subscriber profile associated with the subscriber 136. While in the configuration mode, the smart alert system 118 can be programmed to communicate with IoT devices 112, to interpret data received from IoT devices 112, to create or update subscriber profiles based on subscriber information (e.g., contact information, subscriber characteristics, associated IoT devices 112, etc.), and to create or update alert hierarchy information. In response to the command at 202, the smart alert system 118 enters the configuration mode.) (Col 11 Lines 24-45; For example, the subscriber 136 can program the alert distribution time period to be once an hour, once a day, once a week, and so on. The subscriber 136 can also program different alert distribution time periods for different sets of alerts. In some arrangements, the subscriber 136 can provide the smart alert system 118 access to a calendar of the subscriber 136 (e.g., Outlook® or Google® calendar) such that the smart alert system can automatically determine the optimal batch alert delivery time. In such arrangements, the alert distribution time period may not be a consistent time period but may fluctuate based on the subscriber's schedule. For example, the smart alert system 118 can avoid sending alerts while the subscriber 136 is unable to receive the alerts (e.g., while the subscriber 136 is on a plane, while the subscriber 136 is seeing a movie, while the subscriber 136 is in a business meeting, etc.). During the initial cycle of the method 500, the smart alert system 118 will not determine that the time period has expired at 502. As discussed in further detail below, at a later point in the method 500, the smart alert system 118 may determine that the time period has expired at 502.) 
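The Garner passage just quoted describes holding alerts until a subscriber's alert distribution time period expires, then delivering them as a batch. The sketch below illustrates that deferral-and-batch pattern only; the class name, method names, and injected clock are illustrative assumptions, not Garner's actual implementation.

```python
import time

class SmartAlertBatcher:
    """Hypothetical batcher: alerts queue up and are released only when
    the subscriber's alert distribution time period has expired."""

    def __init__(self, period_seconds: float, now=time.monotonic):
        self.period = period_seconds
        self.now = now                    # injectable clock for testing
        self.last_delivery = now()
        self.pending = []

    def add_alert(self, alert: str) -> None:
        self.pending.append(alert)

    def maybe_deliver(self):
        # Deliver the batch only once the distribution period has expired;
        # otherwise keep holding the alerts, as in the quoted passage.
        if self.now() - self.last_delivery < self.period:
            return []
        batch, self.pending = self.pending, []
        self.last_delivery = self.now()
        return batch

# Deterministic usage with a fake clock instead of wall time:
t = [0.0]
batcher = SmartAlertBatcher(period_seconds=3600, now=lambda: t[0])
batcher.add_alert("door sensor triggered")
print(batcher.maybe_deliver())  # [] - period not yet expired
t[0] = 3600.0
print(batcher.maybe_deliver())  # ['door sensor triggered']
```

Garner's variant where the period fluctuates with the subscriber's calendar would replace the fixed `period_seconds` with a schedule lookup; the expiry check itself is unchanged.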
Dubynskiy in view of Garner do not explicitly teach, but Chao teaches, and further wherein the plurality of nodes each contain a plurality of pods ([0058]: A Kubernetes deployment is referred to as a “cluster,” which consists of multiple nodes that run containers. Each node can be a physical or virtual machine and runs pods (groups of one or more containers).) performing network functions ([0061] The kubelet is a service that ensures containers are running in a pod as expected. The kube-scheduler assigns pods to nodes based on resource availability. The kube-proxy manages network routing for services within the cluster. [0062] Kubernetes is used for micro-services management because it simplifies deploying applications composed of multiple micro-services. Kubernetes is also used for cloud migration as it facilitates moving existing applications to cloud environments. Kubernetes is also used for application modernization because it supports containerizing legacy applications to improve performance and scalability.) wherein the cluster network comprises a Santorini network (Examiner notes applicant's specification [0003] definition of a Santorini network; Chao teaches using distributed key value stores in [0191]: Returning to FIG. 1D, embodiments of data platform 110 may be built using any suitable infrastructure as a service (IaaS) (e.g., AWS). For example, data platform 110 can use Simple Storage Service (S3) for data storage, Key Management Service (KMS) for managing secrets, Simple Queue Service (SQS) for managing messaging between applications, Simple Email Service (SES) for sending emails, and Route 53 for managing DNS. Other infrastructure tools can also be used.
Examples include: orchestration tools (e.g., Kubernetes or Mesos/Marathon), service discovery tools (e.g., Mesos-DNS), service load balancing tools (e.g., marathon-LB), container tools (e.g., Docker or rkt), log/metric tools (e.g., collectd, fluentd, kibana, etc.), big data processing systems (e.g., Spark, Hadoop, AWS Redshift, Snowflake etc.), and distributed key value stores (e.g., Apache Zookeeper or etcd2). Chao further teaches [0136] Container Image Data (e.g., image creation time, parent ID, author, container type, repo, (AWS) tags, size, virtual size, image version). Container Data (e.g., container start time, container type, container name, container ID, network mode, privileged, PID mode, IP addresses, listening ports, volume map, process ID). File path, file data hash, symbolic links, file creation data, file change data, file metadata, file mode. [0051] For example, the data platform system may provide resource management and/or cloud security posture management for one or more such cloud environments. Such monitoring of a cloud environment may include collecting data from the cloud environment, such as configuration data (e.g., resource configuration metadata), using the data to identify vulnerabilities and/or compliance statuses of resources (e.g., software components such as software applications, libraries, etc.) in the cloud environment, and performing one or more operations based on the identified vulnerabilities and/or compliance statuses (e.g., providing output, generating alerts, remediating, etc.).) processing containerized data utilizing a Kubernetes-based framework, ([0255] Returning to FIG. 7, in some embodiments, the data platform system may provide tools (e.g., Prometheus, Honeycomb) for use by job developers to test jobs end-to-end and/or to simulate infrastructure and job failures.
In some embodiments, ephemeral job controller 702 may provide or use a rate limiting service, which may be implemented as a distributed semaphore backed by a distributed key-value store such as etcd. Etcd is an open-source distributed key-value store designed to provide strong consistency, high availability, and survivability in distributed systems. It was developed by the CoreOS team and is widely used in cloud environments and container orchestration platforms like Kubernetes. Etcd ensures that data is always up-to-date and consistent across the cluster using the Raft consensus algorithm. The distributed architecture of etcd replicates data across multiple nodes, ensuring the system remains operational even if some nodes fail. Etcd can survive network partitions and node failures, maintaining data accessibility and consistency under adverse conditions. Applications can register a watch on specific keys or directories, allowing them to react to changes in values.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of [Dubynskiy in view of Garner] to include [using nodes with a plurality of pods to perform microservices in a Santorini network and containerized Kubernetes framework] as is taught by [Chao]. The suggestion/motivation for doing so is to [improve monitoring of compute environments [0002]]. Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Chao et al. (US 20250202973 A1) in view of Gupta et al. (US 20220164468 A1), and further in view of Ko (US 20200244740 A1). Regarding claim 20, Chao teaches a system for processing telemetry data in a cluster network having a plurality of nodes, comprising: (Fig 1A; system; [0058] Kubernetes automates many manual processes involved in deploying and managing containers. This includes starting new applications, restarting them if they crash, and scaling them based on demand.
A Kubernetes deployment is referred to as a “cluster,” which consists of multiple nodes that run containers. Each node can be a physical or virtual machine and runs pods (groups of one or more containers). Users define their desired application state using manifests. Kubernetes then ensures that the current state matches this desired state through its control plane.) a telemetry collector registering producers (Fig 1A: compute asset) and consumers (Fig 1A; [0090]: computing device of users; [0145]: subscription; [0151]: customer accounts) based on defined schema of the telemetry data; ([0085] Data lakes, which store files of data in their native format, may be considered as “schema on read” resources. As such, any application that reads data from the lake may impose its own types and relationships on the data. Data warehouses, on the other hand, are “schema on write,” meaning that data types, indexes, and relationships are imposed on the data as it is stored in an enterprise data warehouse (EDW). “Schema on read” resources may be beneficial for data that may be used in several contexts and poses little risk of losing data. “Schema on write” resources may be beneficial for data that has a specific purpose, and good for data that must relate properly to data from other sources. Such data stores may include data that is encrypted using homomorphic encryption, data encrypted using privacy-preserving encryption, smart contracts, non-fungible tokens, decentralized finance, and other techniques. [0079] Data ingestion resources 112 may be configured to ingest data from cloud environment 102 into data platform 12. This may be performed in various ways, some of which are described in detail herein.
For example, as illustrated by arrow 116, data ingestion resources 112 may be configured to receive the data from one or more agents deployed within cloud environment 102, utilize an event streaming platform (e.g., Kafka) to obtain the data, and/or pull data (e.g., configuration data) from cloud environment 102. In some examples, data ingestion resources 112 may obtain the data using one or more agentless configurations.) [0254] Honeycomb supports OpenTelemetry for collecting metrics, events, and trace span context from Kubernetes environments. It offers low-code and no-code options for easy integration. a self-healing component detecting generation of telemetry data ([0059] Kubernetes continuously monitors the health of applications and automatically restarts or replaces containers that fail or become unresponsive. It provides built-in solutions for load balancing traffic across containers and discovering services within the cluster. [0057] Kubernetes, often abbreviated as K8s, is an open-source container orchestration platform designed to automate the deployment, scaling, and management of containerized applications. Originally developed by Google, Kubernetes is commonly used for managing containerized workloads across various environments, including public clouds, private clouds, and on-premises infrastructures.) in excess of defined thresholds of normal operation (traffic load) for one or more defined golden signal metric datasets; (Mapping above + [0206] Alert generator 184 is a microservice that may be responsible for generating alerts. Alert generator 158 may examine observations (e.g., produced by GBM 168) in aggregate, deduplicate them, and score them. Alerts may be generated for observations with a score exceeding a threshold. [0070] In some embodiments, the job controller may be configured to dynamically determine whether to run a job as an ephemeral or persistent job.
The determination may be made based on any suitable criteria, such as attributes of jobs (e.g., job type, latency requirement, frequency of use, etc.) and/or as thresholds associated with resource usage and/or state of the managed service and/or jobs being processed by the managed service. [0206] Alert generator 184 may also compute (or retrieve, as applicable) data that a customer (e.g., user A or user B) might need when reviewing the alert. Examples of events that can be detected by data platform 110 (and alerted on by alert generator 184) include, but are not limited to the following: [0207] new user: This event may be created the first time a user (e.g., of node 116) is first observed by an agent within a datacenter.) an optimizer component optimizing processing of streaming (data ingesting streams of data) time-series ([0251]: time series data) telemetry data ([0254]: telemetry metrics) with respect to data storage and network bandwidth usage; ([0079] Data ingestion resources 112 may be configured to ingest data from cloud environment 102 into data platform 12. This may be performed in various ways, some of which are described in detail herein. For example, as illustrated by arrow 116, data ingestion resources 112 may be configured to receive the data from one or more agents deployed within cloud environment 102, utilize an event streaming platform (e.g., Kafka) to obtain the data, and/or pull data (e.g., configuration data) from cloud environment 102. In some examples, data ingestion resources 112 may obtain the data using one or more agentless configurations.
[0244] In some embodiments, ephemeral job controller 702 is configured to provide one or more of the following features/functions. In an example, ephemeral job controller 702 monitors jobs using a Kubernetes informer (see above), allowing the data platform system to serve requests for the current state of a job without a round-trip to Kubernetes. In an example, ephemeral job controller 702 reaps completed jobs from Kubernetes, and persists the relevant state information to system database 720. This allows clients (e.g., orchestrator 722, enterprise gateway 724, Spark clients 726, other services 728) to request the state of past jobs, while freeing up the bandwidth for other jobs. In an example, ephemeral job controller 702 applies concurrency limits to the number of jobs created by each client, protecting the Kubernetes backplane.) and a pipeline transmitting the telemetry datasets to validated consumers through selected transport mechanisms.(Mapping above + [0174] In the following examples, suppose that user B, an administrator of a datacenter, is interacting with a data platform to view visualizations of polygraphs in a web browser (e.g., as served to user B via a web app). One type of polygraph user B can view is an application-communication polygraph, which indicates, for a given one-hour window (or any other suitable time interval), which applications communicated with which other applications. Another type of polygraph user B can view is an application launch polygraph. User B can also view graphs related to user behavior, such as an insider behavior graph which tracks user connections (e.g., to internal and external applications, including chains of such behavior), a privilege change graph which tracks how privileges change between processes, and a user login graph, which tracks which (logical) machines a user logs into.[0175] FIG. 2F illustrates an example of a polygraph. FIG. 
2F illustrates an example of an application-communication polygraph for a datacenter for the one-hour period of 9 am-10 am on June 5. The time slice currently being viewed is indicated in region 240. If user B clicks his mouse in region 241, user B will be shown a representation of the application-communication polygraph as generated for the following hour (10 am-11 am on June 5).) Chao does not explicitly teach an access control component restricting access to the telemetry data based on role-based access controls (RBAC); a security control component restrict production and use based on compliance regulations regarding security and restricted use of the telemetry data; and a datastore storing a telemetry catalog accessed by the telemetry collector and storing schema definitions for new and existing telemetry data for registering the producers, consumers, and telemetry datasets; In an analogous art, Gupta teaches an access control component restricting access to the telemetry data based on role-based access controls (RBAC); ([0042] The entitlement service 622 may derive datasets with entitlement contracts based on the querying with the entitlement filters. The entitlement filters may determine the operation and data access (e.g., read only, administrative privileges, etc.) the user can perform. For example, if the user is logged in with read-only permissions on a derived dataset, the entitlement filter determines the user can access, but not update or delete the data.
[0016] Various embodiments may provide one or more of the following technological improvements: 1) normalization between the tables provides the benefits of reusability, modularity, while creating the balance with the performance; 2) attributes are tagged at any level in any table providing a hybrid-model between Role-Based Access Control (RBAC) and Attribute-Based Access Control (ABAC); 3) roles are platform-level or application specific to meet the business needs where the same user coming in from different applications can have different roles/permissions for the same data; 4) roles are reused for multiple organizations/regions/modules within the same application (or platform for the platform-level roles); 5) a user at the same time represents multiple organizations/regions with independent multiple roles within the same application or platform; 6) a user has multiple roles within the same organization/region/module within an application or platform; 7) the entitlements for a user are combined using union of all the entitlements while maintaining the application and organizational/regional boundaries; 8) a user has different roles (independently) in each entitlement (e.g., organization or region) the user is representing; 9) the organizations are divided into further smaller sections using attributes (e.g., geographic regions) and/or hierarchal structures (e.g., division/departments); 10) the roles are at the platform level and application specific; 11) the roles are module specific within the application; and 12) a user is assigned different roles across entitlements. If a user has roles that have conflicting access to a data domain, the user can receive access to all or individual data domains. The access may provide the union of the permission by each data-domain and action, while maintaining organization, region boundaries, and granularity.)
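The Gupta passages above map the claimed RBAC element to entitlement filters that determine which operations a user may perform, with entitlements from multiple roles combined as a union. The sketch below illustrates only that union-of-role-entitlements idea; role names, permission sets, and function names are illustrative assumptions, not Gupta's actual API.

```python
# Hypothetical role-to-permission table; real systems would scope this
# per application, organization, and region as Gupta describes.
ROLE_PERMISSIONS = {
    "viewer": {"read"},
    "operator": {"read", "update"},
    "admin": {"read", "update", "delete", "create"},
}

def entitlement_filter(roles):
    """Combine entitlements from multiple roles as a union, mirroring
    Gupta's 'union of all the entitlements'."""
    allowed = set()
    for role in roles:
        allowed |= ROLE_PERMISSIONS.get(role, set())
    return allowed

def authorize(roles, action: str) -> bool:
    # Enforcement point: an operation on a telemetry dataset is allowed
    # only if it appears in the combined entitlement filter.
    return action in entitlement_filter(roles)

print(authorize(["viewer"], "delete"))           # False: read-only role
print(authorize(["viewer", "admin"], "delete"))  # True: union of roles
```

Gupta's row- and column-level granular controls would extend the filter beyond action names to per-attribute predicates, but the enforcement shape (derive filter from roles, then test the requested operation) is the same.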
and a datastore (storage devices) storing a telemetry catalog (telemetry data) accessed by the telemetry collector (user based on entitlements) and storing schema definitions (schema catalog) ([0002]: enforcing the user entitlements to the derived datasets based on commands from the entitlement management API, receiving a request containing the authentication token, accessing a derived dataset schema catalog to interpret the entitlement filter, retrieving the user entitlements based on information in the authentication token, and separating entitlement user tables and entitlement role tables within the entitlement management API, wherein the entitlement management API and the entitlement retrieval API are separated within an entitlement service to enforce the user entitlements.) for new and existing telemetry data for registering the producers, consumers, and telemetry datasets; ([0040] FIG. 6 is a conceptual diagram illustrating an example 600 of an entitlement service dataflow. In some implementations, example 600 can illustrate integration points with the other components of a server platform (e.g., Helios™ platform), or a corporate directory service. In some cases, the entitlement service 622 may leverage the service registry and dataset schema catalog to provide expandability and scalability while minimizing the changes in the entitlement service code (e.g., when a new dataset and/or data API/service is added to the platform). Applications 602 can connect to API gateway 604. Applications 602 and API gateway 604 can access a global directory 614. API gateway 604 can transmit a token to the entitlement service 622. The entitlement service 622 can include entitlement retrieval and management APIs 612 (e.g., connected to database 618 and/or database 620 with policy information, role information, service registry, and/or derived dataset(s)).
[0041] In some implementations, the entitlement service 622 sends entitlement filters to the API gateway 604 to provide querying with the entitlement filters. The API gateway 604 can send the entitlement filters to the data APIs 606, which retrieve results from derived datasets 616. The data APIs 606 can use a derived dataset(s) schema catalog to interpret entitlement filters.

[0042] The entitlement service 622 may derive datasets with entitlement contracts based on the querying with the entitlement filters. The entitlement filters may determine the operation and data access (e.g., read only, administrative privileges, etc.) the user can perform. For example, if the user is logged in with read-only permissions on a derived dataset, the entitlement filter determines the user can access, but not update or delete, the data.

[0052] In some implementations, within a data-domain and action pair, granular control (e.g., based on the row and column level filters) is put on for each role independently (e.g., for each data-domain and action pair separately) using any column/attribute value (e.g., separate policies for read, create, update, and delete for the asset core data for each role independently, and additionally for the telemetry data, workorder data, and user data). In some cases, the granular control and scalability is at multiple levels.

[0017] In some implementations, separation between entitlement core services (e.g., policy management and policy decision) and entitlement enforcement (policy enforcement) provides the benefits of using the same entitlement core services (technology agnostic) for enforcement on multiple types of data APIs and storage (e.g., database) technologies.
In some cases, the service registry and data schema catalog serve as a bridge between enforcement on multiple types of data APIs and storage technology.) It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the application to modify the teachings of [Chao] to include [using schemas to manage entitlements and access rights of users for accessing telemetry data and storage] as is taught by [Gupta]. The suggestion/motivation for doing so is to [control data access [0001-0005]].

Chao in view of Gupta does not explicitly teach a security control component restricting production and use based on compliance regulations regarding security and restricted use of the telemetry data. In an analogous art, Ko teaches a security control component ([0046]; control unit) restricting production (transmission by IoT device) and use (receiving by the application server) based on compliance regulations (authority of the devices) regarding security and restricted use of the telemetry data (IoT data); ([0046] The control unit 320 is a configuration for controlling overall IoT platform services, which may register a plurality of IoT devices 110 according to a service algorithm, and may manage and process data transmitted from the registered IoT devices 110. In addition, at a request of the application server 150, the control unit 320 searches and provides data, or it controls the shadow device manager 330 to transmit/receive data so that the received command can be transmitted to the relevant IoT device 110.
For the purpose of security, the control unit 320 checks the authority of the IoT device 110 and the application server 150 for the authentication and request, and according to the confirmed authority, the control unit 320 processes requests or data sent from the IoT device 110 and the application server 150. [0119] The basic function of the IoT platform is to receive the data of the IoT device 710 in good condition and pass it to the application server 750. The IoT platform is adapted to first provide the function for transmitting a control command issued from an application program or “app” of the application server 750 to the IoT device 710 and further provide various featured functions such as an API function, a UI editing function, etc. required for making various IoT services.) It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the application to modify the teachings of [Chao in view of Gupta] to include [a control unit for managing produced data and accessed data by managing device authority for security reasons] as is taught by [Ko]. The suggestion/motivation for doing so is to [improve IoT platforms and services [0001-0004]].

Conclusion

Claims 4, 15, and 17 are not rejected under any prior art rejections.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ABDERRAHMEN H CHOUAT, whose telephone number is (571) 431-0695. The examiner can normally be reached Mon-Fri from 9 AM to 5 PM. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Christopher Parry, can be reached at 571-272-8328. The fax number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from Patent Center. Status information for published applications may be obtained from Patent Center.
Status information for unpublished applications is available through Patent Center to authorized users only. Should you have questions about access to the USPTO patent electronic filing system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). Examiner interviews are available via a variety of formats. See MPEP § 713.01. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/InterviewPractice.

Abderrahmen Chouat
Examiner, Art Unit 2451

/Chris Parry/
Supervisory Patent Examiner, Art Unit 2451
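The entitlement-filter behavior cited from Gupta in the Office Action above (a read-only filter permits access but not update or delete, with row/column-level granular control applied per data-domain and action pair) can be sketched as follows. Every identifier here is a hypothetical stand-in, not a name from the application or the cited references.

```python
# Hypothetical sketch: per-(data-domain, action) entitlement enforcement
# with column-level filtering. The policy maps (domain, action) to the
# set of columns a role may see; a missing entry means the action is
# denied outright (e.g., update/delete for a read-only role).

def enforce(policy, domain, action, row):
    """Return the visible columns of `row`, or raise PermissionError
    when the (domain, action) pair is not granted by the policy."""
    allowed_cols = policy.get((domain, action))
    if allowed_cols is None:
        raise PermissionError(f"{action} not permitted on {domain}")
    return {k: v for k, v in row.items() if k in allowed_cols}

# A read-only role on the telemetry domain: reads are granted on two
# columns; update/delete have no policy entry and are therefore denied.
policy = {("telemetry", "read"): {"device_id", "metric"}}
row = {"device_id": "d1", "metric": 42, "location": "lab"}
visible = enforce(policy, "telemetry", "read", row)  # column filter applied
```

This mirrors the cited scheme at a toy scale: the filter both gates the operation (read vs. update/delete) and narrows what data is returned, independently for each data-domain and action pair.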

Prosecution Timeline

Jul 31, 2024
Application Filed
Mar 06, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596793
SYSTEM AND METHOD FOR PATTERN-BASED DETECTION AND MITIGATION OF COMPROMISED CREDENTIALS
2y 5m to grant Granted Apr 07, 2026
Patent 12592919
RE-AUTHENTICATION KEY GENERATION
2y 5m to grant Granted Mar 31, 2026
Patent 12593197
APPLICATION REQUIREMENTS FOR VEHICLE-TO-EVERYTHING APPLICATIONS
2y 5m to grant Granted Mar 31, 2026
Patent 12547911
CHARACTERIZING A COMPUTERIZED SYSTEM BASED ON CLUSTERS OF KEY PERFORMANCE INDICATORS
2y 5m to grant Granted Feb 10, 2026
Patent 12549643
PUSH NOTIFICATION DISTRIBUTION SYSTEM
2y 5m to grant Granted Feb 10, 2026
Based on the 5 most recent grants by this examiner.


Prosecution Projections

1-2
Expected OA Rounds
73%
Grant Probability
77%
With Interview (+4.0%)
2y 8m
Median Time to Grant
Low
PTA Risk
Based on 267 resolved cases by this examiner. Grant probability derived from career allow rate.
