DETAILED ACTION
Status of the Application
Claims 1-20 have been examined in this application. This communication is the first action on the merits. The information disclosure statement (IDS) submitted on 04/15/2024 was filed with this application. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
This action is a Non-Final Action on the merits in response to the application filed on 01/04/2024.
Claims 1-20 remain pending in this application.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-10 are directed towards a method, claims 11-19 are directed towards a system, and claim 20 is directed towards a computer-readable medium, all of which are among the statutory categories of invention.
Step 1: This part of the eligibility analysis evaluates whether the claim falls within any statutory category. See MPEP 2106.03. The claim recites at least one step or act, including applying data to a model. Thus, the claim is to a process, which is one of the statutory categories of invention. (Step 1: YES).
Step 2A, Prong One: This part of the eligibility analysis evaluates whether the claim recites a judicial exception. As explained in MPEP 2106.04, subsection II, a claim “recites” a judicial exception when the judicial exception is “set forth” or “described” in the claim.
With respect to claims 1-20, the independent claims (claims 1, 11, and 20) are directed to managing user data. In independent claim 1, the limitations emphasized below correspond to the abstract ideas of the claimed invention:
Claim 1: A method comprising:
establishing, by an institution computing system, a connection with an embedded service of the institution computing system within an enterprise resource of a first entity;
authenticating, by the institution computing system, a user of the first entity accessing the embedded service via the enterprise resource;
retrieving, by the institution computing system, first data from one or more data sources, wherein the first data comprises information relating to other entities having one or more attributes corresponding to attributes of the first entity, geographic data corresponding to the first entity, and metrics associated with an entity category corresponding to the first entity and the other entities;
forecasting, by a first artificial intelligence (AI) model of the institution computing system, throughput analytics for a time window based on the first data;
determining, by the institution computing system, a count of second entities which satisfy a selection criteria associated with the first entity, the selection criteria corresponding to the entity category of the first entity and the geographic data corresponding to the first entity;
determining, by a second AI model of the institution computing system, a predicted individual throughput for the first entity, according to the count of second entities and the throughput analytics for the time window;
receiving, by the institution computing system, second data corresponding to a current input corresponding to the throughput analytics, and historical inputs;
generating, by the institution computing system, a graphical user interface for rendering via the embedded service within a user interface of the enterprise resource, the graphical user interface comprising a recommendation corresponding to a current throughput based on the predicted individual throughput and the current input.
These steps fall within the certain methods of organizing human activity grouping of abstract ideas, specifically commercial interactions including business relations (see MPEP 2106.04(a)(2), subsection II).
Regarding the steps of:
establishing, by an institution computing system, a connection with an embedded service of the institution computing system within an enterprise resource of a first entity;
authenticating, by the institution computing system, a user of the first entity accessing the embedded service via the enterprise resource;
retrieving, by the institution computing system, first data from one or more data sources, wherein the first data comprises information relating to other entities having one or more attributes corresponding to attributes of the first entity, geographic data corresponding to the first entity, and metrics associated with an entity category corresponding to the first entity and the other entities;
forecasting, by a first artificial intelligence (AI) model of the institution computing system, throughput analytics for a time window based on the first data;
determining, by the institution computing system, a count of second entities which satisfy a selection criteria associated with the first entity, the selection criteria corresponding to the entity category of the first entity and the geographic data corresponding to the first entity;
determining, by a second AI model of the institution computing system, a predicted individual throughput for the first entity, according to the count of second entities and the throughput analytics for the time window;
receiving, by the institution computing system, second data corresponding to a current input corresponding to the throughput analytics, and historical inputs;
generating, by the institution computing system, a graphical user interface for rendering via the embedded service within a user interface of the enterprise resource, the graphical user interface comprising a recommendation corresponding to a current throughput based on the predicted individual throughput and the current input.
The claim does not impose any limits on how the data is output or require any particular components that are used to output the data. (Step 2A, Prong One: YES).
Step 2A, Prong Two: This part of the eligibility analysis evaluates whether the claim as a whole integrates the recited judicial exception into a practical application of the exception or whether the claim is “directed to” the judicial exception. This evaluation is performed by (1) identifying whether there are any additional elements recited in the claim beyond the judicial exception, and (2) evaluating those additional elements individually and in combination to determine whether the claim as a whole integrates the exception into a practical application. See MPEP 2106.04(d). The claim recites the additional elements of a system, an enterprise resource, an artificial intelligence model, a graphical user interface, a user interface, a processor, memory, a computer-readable medium, and a circuit, and recites that the steps are performed by these additional elements.
The limitations of
establishing, by an institution computing system, a connection with an embedded service of the institution computing system within an enterprise resource of a first entity;
authenticating, by the institution computing system, a user of the first entity accessing the embedded service via the enterprise resource;
retrieving, by the institution computing system, first data from one or more data sources, wherein the first data comprises information relating to other entities having one or more attributes corresponding to attributes of the first entity, geographic data corresponding to the first entity, and metrics associated with an entity category corresponding to the first entity and the other entities;
forecasting, by a first artificial intelligence (AI) model of the institution computing system, throughput analytics for a time window based on the first data;
determining, by the institution computing system, a count of second entities which satisfy a selection criteria associated with the first entity, the selection criteria corresponding to the entity category of the first entity and the geographic data corresponding to the first entity;
determining, by a second AI model of the institution computing system, a predicted individual throughput for the first entity, according to the count of second entities and the throughput analytics for the time window;
receiving, by the institution computing system, second data corresponding to a current input corresponding to the throughput analytics, and historical inputs;
generating, by the institution computing system, a graphical user interface for rendering via the embedded service within a user interface of the enterprise resource, the graphical user interface comprising a recommendation corresponding to a current throughput based on the predicted individual throughput and the current input.
are mere data gathering and output recited at a high level of generality, and thus are insignificant extra-solution activity. See MPEP 2106.05(g) (“whether the limitation is significant”). In addition, all uses of the recited judicial exceptions require such data gathering and output, and, as such, these limitations do not impose any meaningful limits on the claim. These limitations amount to necessary data gathering and outputting. See MPEP 2106.05.
Further, the limitations are recited as being performed by the system, enterprise resource, artificial intelligence model, graphical user interface, user interface, processor, memory, computer-readable medium, and circuit, each of which is recited at a high level of generality. The AI model is used as a tool to perform the generic computer function of receiving data. See MPEP 2106.05(f). The AI model is used to perform an abstract idea, as discussed above in Step 2A, Prong One, such that it amounts to no more than mere instructions to apply the exception using a generic computer. See MPEP 2106.05(f). Additionally, claim 1 recites an AI model; the general use of a machine learning technique does not provide a meaningful limitation to transform the abstract idea into a practical application.
Even when viewed in combination, these additional elements do not integrate the recited judicial exception into a practical application (Step 2A, Prong Two: NO), and the claim is directed to the judicial exception. (Step 2A: YES).
Step 2B: This part of the eligibility analysis evaluates whether the claim as a whole amounts to significantly more than the recited exception, i.e., whether any additional element, or combination of additional elements, adds an inventive concept to the claim. See MPEP 2106.05. As explained with respect to Step 2A, Prong Two, the additional elements are the system, enterprise resource, artificial intelligence model, graphical user interface, user interface, processor, memory, computer-readable medium, and circuit. These additional elements were found to be insignificant extra-solution activity in Step 2A, Prong Two, because they were determined to amount to necessary data gathering and outputting. Further, the machine learning techniques recited in the claim are disclosed at a high level of generality (see at least Specification ¶ [0021]: “A machine learning model 104 may be trained on known input-output pairs such that the machine learning model 104 can learn how to predict known outputs given known inputs. Once the machine learning model 104 has learned how to predict known input-output pairs, the machine learning model 104 can operate on unknown inputs to predict an output.”) and do not amount to significantly more than the abstract idea.
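For illustration only, the specification passage quoted above describes supervised machine learning at a generic level: fit a model on known input-output pairs, then operate on unknown inputs to predict outputs. The following hypothetical Python sketch (not drawn from the application or the cited art; an ordinary least-squares line stands in for any “AI model”) shows that level of generality:

```python
# Hypothetical sketch of generic supervised learning: learn from known
# (input, output) pairs, then predict an output for an unknown input.

def fit_linear(pairs):
    """Learn slope and intercept from known (input, output) pairs."""
    n = len(pairs)
    mean_x = sum(x for x, _ in pairs) / n
    mean_y = sum(y for _, y in pairs) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in pairs)
    var = sum((x - mean_x) ** 2 for x, _ in pairs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

def predict(model, x):
    """Operate on an unknown input to predict an output."""
    slope, intercept = model
    return slope * x + intercept

model = fit_linear([(1, 2), (2, 4), (3, 6)])  # known input-output pairs
print(predict(model, 10))                     # prints 20.0
```

Any fit/predict pairing of this shape satisfies the quoted description, which is the sense in which the recited machine learning is disclosed at a high level of generality.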
However, a conclusion that an additional element is insignificant extra solution activity in Step 2A, Prong Two should be re-evaluated in Step 2B. See MPEP 2106.05, subsection I.A. At Step 2B, the evaluation of the insignificant extra-solution activity consideration takes into account whether or not the extra-solution activity is well understood, routine, and conventional in the field. See MPEP 2106.05(g). As discussed in Step 2A, Prong Two above, the recitations of
establishing, by an institution computing system, a connection with an embedded service of the institution computing system within an enterprise resource of a first entity;
authenticating, by the institution computing system, a user of the first entity accessing the embedded service via the enterprise resource;
retrieving, by the institution computing system, first data from one or more data sources, wherein the first data comprises information relating to other entities having one or more attributes corresponding to attributes of the first entity, geographic data corresponding to the first entity, and metrics associated with an entity category corresponding to the first entity and the other entities;
forecasting, by a first artificial intelligence (AI) model of the institution computing system, throughput analytics for a time window based on the first data;
determining, by the institution computing system, a count of second entities which satisfy a selection criteria associated with the first entity, the selection criteria corresponding to the entity category of the first entity and the geographic data corresponding to the first entity;
determining, by a second AI model of the institution computing system, a predicted individual throughput for the first entity, according to the count of second entities and the throughput analytics for the time window;
receiving, by the institution computing system, second data corresponding to a current input corresponding to the throughput analytics, and historical inputs;
generating, by the institution computing system, a graphical user interface for rendering via the embedded service within a user interface of the enterprise resource, the graphical user interface comprising a recommendation corresponding to a current throughput based on the predicted individual throughput and the current input.
are recited at a high level of generality. These elements amount to transmitting data and are well-understood, routine, and conventional activity. See MPEP 2106.05(d), subsection II. As discussed in Step 2A, Prong Two above, the recitation of a processor to perform the limitations amounts to no more than mere instructions to apply the exception using a generic computer component. Even when considered in combination, these additional elements represent mere instructions to implement an abstract idea or other exception on a computer and insignificant extra-solution activity, which do not provide an inventive concept. (Step 2B: NO).
Dependent claims 2-10 and 12-19 are not directed to any additional claim elements. Rather, these claims offer further descriptive limitations of elements found in the independent claims. In this case, the claims are rejected for the same reasons at Step 2A, Prong One; Step 2A, Prong Two; and Step 2B. Thus, the claims are not patent eligible.
Regarding the dependent claims: claims 2 and 12 recite the computing system enrolling, tagging, and assigning; claims 6 and 16 recite the AI model forecasting; claims 7, 8, and 17 recite a GUI comprising a range and a heat map; claims 9 and 18 recite the AI model generating outputs; and claims 10 and 19 recite receiving input from the enterprise resource. Dependent claims 2-10 and 12-19 recite limitations that are not technological in nature and merely limit the abstract idea to a particular environment. Claims 2-10 and 12-19 recite the system, enterprise resource, artificial intelligence model, graphical user interface, user interface, processor, memory, computer-readable medium, and circuit, which are considered insignificant extra-solution activities of collecting and analyzing data (see MPEP 2106.05(g)) and which amount to no more than an instruction to apply the abstract idea using a generic computer component (see MPEP 2106.05(f)). Additionally, claims 2-10 and 12-19 recite steps that further narrow the abstract idea. No additional elements are disclosed in the dependent claims that were not considered in independent claims 1, 11, and 20. Therefore, claims 2-10 and 12-19 do not provide meaningful limitations to transform the abstract idea into a patent-eligible application of the abstract idea such that the claims amount to significantly more than the abstract idea itself.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by U.S. Patent Application Publication US 2018/0191867 to Siebel et al. (“Siebel”).
Referring to Claim 1, Siebel teaches a method comprising:
establishing, by an institution computing system, a connection with an embedded service of the institution computing system within an enterprise resource of a first entity (
Siebel: Sec. 0041, The applications apply advanced data aggregation methods, data persistence methods, data analytics, and machine learning methods, embedded in a unique model driven architecture type system embodiment to recommend actions based on real-time and near real-time analysis of petabyte-scale data sets, numerous enterprise and extraprise data sources, and telemetry data from millions to billions of endpoints.
Siebel: Sec. 0398, It may be used for creation and planning of resources including labor, material, and equipment needed to address equipment failure, or complete preventative maintenance to ensure optimal equipment functioning.
Siebel: Sec. 0540, an embedded operating system, and/or a user interface. The sensors or meters can be any type of sensor or meter capable of detecting, measuring, sensing, recording, or otherwise observing any type of phenomenon or activity. A smart, connected device also may include communication components that allow the device to share data relating to operations of an enterprise with one or more entities, such as the enterprise Internet-of-Things application development platform 3002, a manufacturer of the device, other smart, connected devices, other entities, etc. Such communication can allow the enterprise Internet-of-Things application development platform 3002 to perform, for example, rigorous predictive analytics, data exploration, machine learning, and complex data visualization requiring responsive design. The external data sources 3004 a-n also may include other types of data sources.);
authenticating, by the institution computing system, a user of the first entity accessing the embedded service via the enterprise resource (
Siebel: Sec. 0258, The platform services component 1006 provides a plurality of services built-in to an enterprise Internet-of-Things application development platform, such as the system 200 of FIG. 2. The services provided by the platform services component 1006 may include one or more of analytics, application logic, APIs, authentication, authorization, auto-scaling, data, deployment, logging, monitoring, multi-tenancy for smart grid or other applications, profiling, performance, system, management, scheduler, and/or other services, such as those discussed herein.
Siebel: Sec. 0301, The monitoring tools also allow users to centrally manage applications, users, and access rules for their enterprise cloud and easily authenticate existing users from directory services.);
retrieving, by the institution computing system, first data from one or more data sources, wherein the first data comprises information relating to other entities having one or more attributes corresponding to attributes of the first entity, geographic data corresponding to the first entity, and metrics associated with an entity category corresponding to the first entity and the other entities (
Siebel: Sec. 0535, The application can be trained to detect or predict events using a range of attributes for each individual customer, a comparison of that customer to other customers with similar profiles, and the network performance characteristics that would have affected the customer's experience, and can include the following: specific point of customer acquisition; device and plan purchase history; transaction and offer history; web, call center, and mobile app usage history and resulting actions; prior disconnect and payment delinquency scores from similar customers; revenue-related actions taken by similar customers; and/or network quality in a customer's most frequented locations.);
forecasting, by a first artificial intelligence (AI) model of the institution computing system, throughput analytics for a time window based on the first data (
Siebel: Sec. 0373, At 2020, the data is sent to an external system. For example, the data may be sent right after processing, at a scheduling time, and/or in response to a specific request. The data may be sent in raw or aggregated (or processed) format to an external or third party system. A request from an external system may include a meter ID or concentrator ID, time window, and/or measurement type.
Siebel: Sec. 0493, through the use of machine learning models that detect unusual patterns of behavior; automatically increase the long-term accuracy of loss detection with detection algorithms that use machine learning to incorporate verified results into future opportunity identification; forecast and confirm the financial impact of investigation efforts through detailed information regarding the benefits of identified and verified opportunities.
Siebel: Sec. 0494, The loss detection application may provide advanced pipeline management to identify and prioritizes high value and high likelihood leads, using machine learning algorithms and leveraging historical, confirmed instances of fraud or malfunction. The loss detection application may provide investigation management and feedback that automatically tracks identified loss cases, work orders, resolution confirmations, and investigation results. The loss detection application may provide revenue reporting and monitoring that delivers pre-built reports and dashboards, provides ad hoc reporting tools for opportunity reporting and monitoring, analysis of revenue recovery performance against targets (historical and forecasted), revenue tracking, and investigation results.
Siebel: Sec. 0519, Having correlated all of these data inputs, supply network risk analytics employs machine learning algorithms to identify the most significant, potential production delays and delivery risks associated with each unique product and production line, at any current point in time. The algorithms calculate the associated impacts to customer delivery on a product-byproduct basis, allowing supply chain professionals to identify the granular and geographically-specific effects of forecasted delays, and resulting cost to customers and their own internal operations.);
Siebel describes the use of machine learning for forecasting, which the Examiner interprets as artificial intelligence.
determining, by the institution computing system, a count of second entities which satisfy a selection criteria associated with the first entity, the selection criteria corresponding to the entity category of the first entity and the geographic data corresponding to the first entity (
Siebel: Sec. 0533, The telecommunications services and analytics may draw on and unify all available data about individual customers. Sophisticated machine learning algorithms are applied to these data to create actionable insights and recommended actions for each customer. These recommendations are able to help operators cost-effectively and efficiently target new customers, and increase the lifetime value and customer satisfaction of existing customers. The data sources and types used by the telecommunications services and analytics may include: customer, account, and line characteristics; prior purchase history by customer of products and services from the operator; detailed call and usage records including caller graphs, call quality, and geo-location information; customer service and marketing interactions from call center logs, website logs, and marketing activity; network quality data by geolocation station; and/or third-party demographic data.
Siebel: Sec. 0591, The user can prepare and send a work order to a work order system to investigate some (e.g., cases satisfying a threshold value) (or all) of the cases to determine whether any of the cases involve actual revenue theft. In some embodiments, the enterprise Internet-of-Things application development platform 3002 can provide a work order system for the user, or can be integrated with a work order system utilized by the user. The machine learning and predictions module 3217 can receive information relating to results of the investigation of the cases to determine whether each case is a true positive case involving actual revenue theft or a false positive. The information can be used to train and retrain the machine learning model of the machine learning and predictions module 3217.);
determining, by a second AI model of the institution computing system, a predicted individual throughput for the first entity, according to the count of second entities and the throughput analytics for the time window (
Siebel: Sec. 0346, At 1810, a data persistence node performs a compliance check and processes the data. At 1814, 1816, and 1818, the sensor/device data may be persisted in a high throughput distributed key-value data store. It is often necessary to perform various actions at different stages of a persistent type's lifecycle. Data persistence processes include a variety of callbacks methods for monitoring changes in the lifecycle of persistent types.
Siebel: Sec. 0349, The data may be published to an external or third party system, or be capable of providing them upon request with response times compatible with interactive web applications. The system 1600 may provide a set of REST APIs that enable third party applications to query and access data by sensor, concentrator, time window, and data/measurement type. The REST API may support advanced modes or authentication such as OAuth 2.0 and token based authentication.
Siebel: Sec. 0354, There may be some limits to stream processing, such as a limited window of data available and limited data from other sources systems (e.g., from data that has already been persisted or abstracted by a data services component 204).
Siebel: Sec. 0404, The energy benchmark platform provided the foundation for smart grid analytics applications including AMI, head-end system, and smart meter data applications that capture, validate, process, and analyze large volumes of data from numerous sources including interval meter data and SCADA and meter events. The system architecture was designed as a highly distributed system to process real-time data with high throughput, and high reliability. The system securely processed real-time data from 380,000 concentrators in secondary substations.
Siebel: Sec. 0406, The system demonstrated robust performance, scalability, and reliability characteristics: concentrators manage reliable two-way data communication between the head-end systems and smart meters and other distribution grid devices; data from the concentrators are transferred to the head-end system and processed using lightweight, elastic multi-threaded listeners capable of processing high throughput message decoding/parsing; a distributed queue is used to ensure guaranteed message receipt and persistence to a distributed key-value data store for subsequent processing by the meter data management and analytics systems; data are analyzed in real-time to detect meter and grid events.);
receiving, by the institution computing system, second data corresponding to a current input corresponding to the throughput analytics, and historical inputs (
Siebel: Sec. 0086, data visualization and analysis products may offer visualization and exploration tools, which may be useful for an enterprise, but generally lack complex analytic design and customizability with regard to their data. For example, existing data exploration tools may be capable of processing or displaying snapshots of historical statistical data, but lack offerings that can trigger analytics on real-time or streaming events or deal with complex time-series calculations.
Siebel: Sec. 0235, the continuous data processing component 1004 may analyze large data sets including current and historical data to create reports and new insights. In one embodiment, the continuous data processing component 1004 provides different processing services to process stored or streaming data according to different processing paradigms. In one embodiment, the continuous data processing component 1004 is configured to process data using one or more of Map reduce services, stream services, continuous analytics processing, and iterative processing. In one embodiment, at least some analytical calculations or operations may be performed at a network edge, such as within a sensor, smart device, or system located between a sensor/device and integration component 202.
Siebel: Sec. 0426, The “raw” input to the machine learning process of revenue protection may, according to one embodiment, consist of 38 separate meter signals, including electricity consumption, meter events, work order history, anomalies, etc.
Siebel: Sec. 0499, a system may predict and prioritize high-impact home events (e.g., failure of a refrigerator) across all residential customers based on machine learning analysis of historical event data and near real-time signals. In one embodiment, connected home analytics may provide lead generation for value-added services profile. For example, a system may target, and prioritize customers for value-added services based on the unique home asset risk profile, like the sale of a more efficient hot water heater to customers whose existing hot water heaters are likely to fail.
Siebel: Sec. 0510, Health care analytics may include a suite of applications built on top of a data storage and abstraction system or layer and that predict addressable risk across multiple facets of the patient care lifecycle. The health care analytics may calculate readmission risk to enable healthcare providers to predict hospital readmissions, both for current patients and discharged patients, by applying machine learning analysis to individual electronic medical record data and historical trends to calculate the probability that a patient will be readmitted to the hospital.
Siebel: Sec. 0586, The batch parallel processing analytic services module 3212 may analyze large data sets comprised of current and historical data to create reports and analyses. As just one example, with respect to the energy industry and the utilities sector in particular, such reports and analyses can include periodic Key Performance Indicator (KPI) reporting, historical electricity use analysis, forecasts, outlier analysis, energy efficiency project financial impact analysis, etc. );
generating, by the institution computing system, a graphical user interface for rendering via the embedded service within a user interface of the enterprise resource, the graphical user interface comprising a recommendation corresponding to a current throughput based on the predicted individual throughput and the current input (
Siebel: Sec. 0041, The applications apply advanced data aggregation methods, data persistence methods, data analytics, and machine learning methods, embedded in a unique model driven architecture type system embodiment to recommend actions based on real-time and near real-time analysis of petabyte-scale data sets, numerous enterprise and extraprise data sources, and telemetry data from millions to billions of endpoints.
Siebel: Sec. 0517, Based on data-driven analytics that predict the potential for disruption to parts, labor, and shipments, the supply network risk analytics generate recommendations and options for management teams to mitigate high risk areas of the supply chain and improve supplier planning and supplier portfolio management to create appropriate redundancy, backup, and recovery options where precisely needed.
Siebel: Sec. 0530, identify current customers with high probability to churn, receive recommendations on the products and services an individual customer is most likely to purchase, and pro-actively intercept customers likely to contact customer service. By generating and prioritizing the key actions to take for each customer, telecommunications services and analytics enables operators to cost-effectively and efficiently improve customer satisfaction and lifetime customer value.).
Referring to Claim 2, Siebel teaches the method of claim 1, further comprising:
enrolling, by the institution computing system, the first entity with the embedded service (
Siebel: Sec. 0346, Application developers can register event handlers to persistent events through annotations to specify the classes and lifecycle events of interest. Before the message contents are persisted, the system may verify an ID of a sensor or device, or a status word.);
tagging, by the institution computing system, a profile associated with the first entity with one or more tags, based on the attributes of the first entity; assigning, by the institution computing system, the first entity to the entity category based on the one or more tags applied to the profile (
Siebel: Sec. 0181, the type system (e.g., in a C3 IoT Platform) may group metadata for types or type definitions into customer specific partitions, which may be referred to herein as tenants. The customer specific partitions may be further divided into sub partitions called tags. For example, a system may include a general or root partition that includes one a system partition (system tenant). The system tenant may include one or more tags. The system tenant and/or the tags of the system tenant may include a master partition for system data and/or platform metadata. As another example, the system may include a customer partition with one or more customer specific partitions (tenant for specific customer) for respective customer's companies or organizations. The tenant for the specific customer may also include one or more tags (sub partitions for the tenant). As yet a further example, a customer partition may include one or more customer tenants and the customer tenants may include one or more tags. The tags or customer tenants may correspond to data partitions to keep data and metadata for different customers. For example, the tenants and tags (with their corresponding partitions) may be used to keep metadata or data for the system or different customers separate for security and/or for access control. In one embodiment, all requests for data or types or request to write data include an identifier that identifies a tenant and/or tag to specify the partition corresponding to the request.
Siebel: Sec. 0182, In one embodiment, each tenant or tag can have separate versions of the same types. For example, database tables may be created and/or altered to include metadata or data for types specific to a tenant or tag. A database table may be shared across all tenants or tags within a same environment. The tables may include a union of all columns needed by all versions from all tenants/tags. In one embodiment, upon creation/addition of a type or function to a table within a tenant or tag, data operations immediately available upon provisioning for types and function are immediately callable.
Siebel: Sec. 0535, The application can be trained to detect or predict events using a range of attributes for each individual customer, a comparison of that customer to other customers with similar profiles, and the network performance characteristics that would have affected the customer's experience, and can include the following: specific point of customer acquisition; device and plan purchase history; transaction and offer history; web, call center, and mobile app usage history and resulting actions; prior disconnect and payment delinquency scores from similar customers; revenue-related actions taken by similar customers; and/or network quality in a customer's most frequented locations.).
Siebel thus describes applying tags to customer-specific partitions, corresponding to the claimed tagging of a profile associated with the first entity.
Referring to Claim 3, Siebel teaches the method of claim 1, wherein the throughput analytics comprise a regional demand associated with a resource provided by the first entity and the second entities selected which satisfy the selection criteria (
Siebel: Sec. 0548, The dispatcher 3022 accordingly may instruct the cluster manager 3026 to dynamically provision new nodes or release existing nodes based on demand for computing resources. The nodes may be computing nodes or storage nodes in connection with the applications servers 3012, the relational databases 3014, and the key/value stores 3016.
Siebel: Sec. 0550, The cluster manager 3026 may dynamically provision new nodes or release existing nodes based on demand for computing resources. The cluster manager 3026 may implement a group membership services protocol. The cluster manager 3026 also may perform a task monitoring function. The task monitoring function may involve tracking resource usage, such as CPU utilization, the amount of data read/written, storage size, etc.).
Referring to Claim 4, Siebel teaches the method of claim 1, wherein the one or more data sources comprises a first data source of the institution computing system and a second data source of a third-party computing system (
Siebel: Sec. 0214, Data may be formatted or stored based on a canonical data model 702. A first data handler 704 a, a second data handler 704 b, a third data handler 704 c, and a fourth data handler 704 d may use or provide data corresponding to the canonical model 702, but may store, process, or provide the data in a format different than the canonical data model 702. A first data model 706 a, a second data model 706 b, a third data model 706 c, and a fourth data model 706 d represent data formats used by respective data handlers 704 a-704 d. A first transformation rule 708 a defines how to transform data between the first data model 706 a and the canonical data model 702.
Siebel: Sec. 0313, the system 200 of FIG. 2 may be built on infrastructure provided by a third party. For example, some embodiments may use Amazon Web Services™ (AWS) or other infrastructure that provides a collection of remote computing services, also called web services, providing a highly scalable cloud-computing platform.
Siebel: Sec. 0342, At 1720, the data is sent to an external system. For example, the data may be sent right after processing, at a scheduling time, and/or in response to a specific request. The data may be sent in raw or aggregated (or processed) format to an external or third party system.).
Referring to Claim 5, Siebel teaches the method of claim 4, wherein the first data source stores at least some of the first data associated with the first entity, and second data corresponding to at least some of the second entities (
Siebel: Sec. 0215, if a first application needs to provide data to a second application, the first application only needs to transform data according to the canonical data model and let the second application or a corresponding transformation place the data in the format needed for processing by the second application. As another example, each transformation rule 708 a-708 d may be defined by a transformation of a canonical type definition, discussed previously. The canonical data model 702 provides an additional level of indirection between application's individual data formats. If a new application is added to the integration solution only transformation between the canonical data model has to created, independent from the number of applications/data handlers that already participate.
Siebel: Sec. 0568, The integration services module 3204 serves as a second layer of data validation or proofing, ensuring that data is error-free before it is loaded into a database or store. The integration services module 3204 receives data from the data integrator module 3202, monitors the data as it flows in, performs a second round of data checks, and passes data to the data services module 3206 to be stored.
Siebel: Sec. 0599, For second types of data, the process 3300 proceeds from block 3306 to block 3316. At block 3316, the data services module 3206 provides the second types of data to the key/value store 3016. As just one example, with respect to the energy industry and the utilities sector in particular, one example type of the second types of data is “raw” meter data relating to energy usage. With respect to the utilities sector, other examples of data stored in the key/value store 3016 may include meter readings, meter events, weather measurements such as temperature, relative humidity, dew point, downward infrared irradiance, and asset state changes. At block 3318, the key/value store 3016 stores the second types of data. At block 3320, the normalization module 3214 normalizes the second types of data. Normalization may involve, for example, filling in gaps or addressing outliers in the data. The normalization algorithms may be provided by the enterprise Internet-of-Things application development platform 3002 or the user. At block 3322, the key/value store 3016 stores the normalized second types of data.
Siebel: Sec. 0622, transforming based on a plurality of transformation rules configured to convert data from a source from a first type to a second type.).
Referring to Claim 6, Siebel teaches the method of claim 5, wherein the first AI model is trained on data from a plurality of entities, at least some of which are assigned to the entity category of the first entity, and wherein the first AI model forecasts throughput analytics using the first data retrieved from the first data source and the second data source (
Siebel: Sec. 0324, Industry data can be modeled and forecasted across various locations and scenarios. Industry data can be benchmarked against industry standards as well as internal benchmarks of the enterprise. The performance of one component or aspect of operations of an enterprise can be compared to identify outliers for potential responsive measures (e.g., improvements).
Siebel: Sec. 0493, that use machine learning to incorporate verified results into future opportunity identification; forecast and confirm the financial impact of investigation efforts through detailed information regarding the benefits of identified and verified opportunities.
Siebel: Sec. 0519, Having correlated all of these data inputs, supply network risk analytics employs machine learning algorithms to identify the most significant, potential production delays and delivery risks associated with each unique product and production line, at any current point in time. The algorithms calculate the associated impacts to customer delivery on a product-byproduct basis, allowing supply chain professionals to identify the granular and geographically-specific effects of forecasted delays, and resulting cost to customers and their own internal operations.
Siebel: Sec. 0521, comprehensive data aggregation and multiple scenario analysis of the historic likelihood of internal and external disruptions, with associated impacts and costs of potentially incurred disruptions; reduced costs of implementing a resilient supply network through data-driven sourcing options, appropriately sized and appropriately located based on accurate risk-adjusted supply forecasts; and/or increased flexibility of the supply chain through predictive identification of specific portions of the supply chain with extra capacity or available redundancy.).
Referring to Claim 7, Siebel teaches the method of claim 1, wherein the graphical user interface comprises a range including the recommendation (
Siebel: Sec. 0263, The application types may include user interface components 1308, application logic, historical/stream 1310, and platform types 1312. A user interface layer 1302 may include graphical user interface type definitions or components 1308 that define the visual experience a user has in a web browser or on a mobile device. User interface types may hold the UI page layout and style for a variety of visual components such as grids, forms, pie charts, histograms, tabs, filters, and more. In an analytics layer 1304, application logic functions, historical batch analytics, and streaming analytics 1310 may be triggered by data flow events. The analytics layer 1304 may provide a connection between the user interface components 1308 and data types residing in the physical data stores. In one embodiment, application logic, calculated expressions, and analytic processing all occur and are manipulated in the analytics layer 1304.
Siebel: Sec. 0517, supply network risk analytics provide managers of enterprise supply chain organizations with comprehensive information and visibility into the risks and impacts of disruption throughout their sourcing, manufacturing, and distribution operations. Supply network risk analytics may identify vulnerable sources of raw materials and components and highlight weakness in hubs and aggregation points, manufacturing facilities, distribution centers, and transportation modes. Based on data-driven analytics that predict the potential for disruption to parts, labor, and shipments, the supply network risk analytics generate recommendations and options for management teams to mitigate high risk areas of the supply chain and improve supplier planning and supplier portfolio management to create appropriate redundancy, backup, and recovery options where precisely needed
Siebel: Sec. 0522, The supply network risk analytics may provide supplier risk recommendations and gap analyses to enable users to quickly identify and characterize unmitigated high risk areas within a supply chain, offering potential redundancy options to speed the assembly of backup option portfolios. The supply network risk analytics may provide dynamic user feedback and live data integration continuously update and improve accuracy of the machine learning risk predictions, by requesting and incorporating user knowledge on specific supplier performance history, known supply bottlenecks, specialized geographical limitations, external events.).
Referring to Claim 8, Siebel teaches the method of claim 1, wherein the graphical user interface comprises a heat map associated with the geographic data corresponding to the first entity (
Siebel: Sec. 0263, The application types may include user interface components 1308, application logic, historical/stream 1310, and platform types 1312. A user interface layer 1302 may include graphical user interface type definitions or components 1308 that define the visual experience a user has in a web browser or on a mobile device. User interface types may hold the UI page layout and style for a variety of visual components such as grids, forms, pie charts, histograms, tabs, filters, and more. In an analytics layer 1304, application logic functions, historical batch analytics, and streaming analytics 1310 may be triggered by data flow events. The analytics layer 1304 may provide a connection between the user interface components 1308 and data types residing in the physical data stores. In one embodiment, application logic, calculated expressions, and analytic processing all occur and are manipulated in the analytics layer 1304.
Siebel: Sec. 0488, The sensor and network health application may provide visualization of health indices across multiple user-defined dimensions—support for heat maps at the system level to enable effective prioritization).
Referring to Claim 9, Siebel teaches the method of claim 1, wherein the second AI model generates an output corresponding to the graphical user interface for rendering via the embedded service within the user interface of the enterprise resource (
Siebel: Sec. 0098, and data science including big-data analytics and machine learning to process the volume, velocity, and variety of big-data streams.
Siebel: Sec. 0099, One or more of the technologies of the computing platforms disclosed herein enable capabilities and applications not previously possible, including precise predictive analytics, massively parallel computing at the edge of a network, and fully connected sensor networks at the core of a business value chain. The number of addressable business processes will grow exponentially and require a new platform for the design, development, deployment, and operation of new generation, real-time, smart and connected applications. Data are strategic resources at the heart of the emerging digital enterprise. The IoT infrastructure software stack will be the nerve center that connects and enables collaboration among previously separate business functions, including product development, market, sales, service support, manufacturing, finance, and human capital management.
Siebel: Sec. 0106, The types in the type system may include defined view configuration types used for rendering type data on a screen in a graphical, text, or other format.
Siebel: Sec. 0595, After analyses are completed by the stream analytic services module 3210 or the batch parallel processing analytic services module 3212, they may be graphically rendered by the UI services module 3224, provided to the appropriate application of the enterprise Internet-of-Things application development platform 3002, and ultimately presented on a computer system (e.g., machine) of the user. This delivers data insights to users in an intuitive and easy-to-understand format.).
Referring to Claim 10, Siebel teaches the method of claim 1, wherein the second data corresponding to the current input is received from at least one of the enterprise resource or from a data source of the one or more data sources maintained by the institution computing system (
Siebel: Sec. 0235, The continuous data processing component 1004 is configured to provide processing services and algorithms to perform calculations and analytics against persisted or received data. For example, the continuous data processing component 1004 may analyze large data sets including current and historical data to create reports and new insights.
Siebel: Sec. 0248, A plurality of processing nodes (processors 1206) may process the messages according to a processing analytic or requirement and produce output 1208, which may represent current trends or events identified in the data streams 1202. In one embodiment, each stream service is a function of a data flow event argument that encapsulates a stream of data coming from a sensor data or some other measurement device. A stream service may have an optional “category”, which can be used to group them into related products (e.g., “AssetMgmt”, “Outage”). Analytics may also pre-calculate values using a method or component that determine or loads a current context. Determining the current context may be performed once per analytic or source type and the value may be passed as an argument to a processor 1206 that is performing a function. This provides a way to optimize processing based on state across multiple different analytics executions for different time ranges.).
Claims 11-19 recite limitations substantially similar to those of claims 1-10 and are rejected based on the same art citations and rationale applied above. Regarding a processing circuit comprising one or more processors and memory, the memory storing instructions that, when executed, cause the processing circuit to (
Siebel: Sec. 0609, The machine 3600 includes a processor 3602 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), or both), a main memory 3604, and a nonvolatile memory 3606 (e.g., volatile RAM and non-volatile RAM), which communicate with each other via a bus 3608. In some embodiments, the machine 3400 can be a desktop computer, a laptop computer, personal digital assistant (PDA), or mobile phone, for example. In one embodiment, the machine 3400 also includes a video display 3610, an alphanumeric input device 3612 (e.g., a keyboard), a cursor control device 3614 (e.g., a mouse), a drive unit 3616, a signal generation device 3618 (e.g., a speaker) and a network interface device 3620.
Siebel: Sec. 0610, In one embodiment, the video display 3610 includes a touch sensitive screen for user input. In one embodiment, the touch sensitive screen is used instead of a keyboard and mouse. The disk drive unit 3616 includes a machine-readable medium 3622 on which is stored one or more sets of instructions 3624 (e.g., software) embodying any one or more of the methodologies or functions described herein. The instructions 3624 can also reside, completely or at least partially, within the main memory 3604 and/or within the processor 3602 during execution thereof by the computer system 3400. The instructions 3624 can further be transmitted or received over a network 3640 via the network interface device 3620. In some embodiments, the machine-readable medium 3622 also includes a database 3625.
Siebel: Sec. 0684, the computing device may include a processor, a storage medium readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. The volatile and non-volatile memory and/or storage elements may be a RAM, an EPROM, a flash drive, an optical drive, a magnetic hard drive, or another medium for storing electronic data.
Siebel: Sec. 0685, a component, system, module, or layer may be implemented as a hardware circuit comprising custom very large scale integration (VLSI) circuits or gate arrays, off-the-shelf semiconductors such as logic chips, transistors, or other discrete components. A component, system, module, or layer may also be implemented in programmable hardware devices such as field programmable gate arrays, programmable array logic, programmable logic devices, or the like.).
Claim 20 recites limitations substantially similar to those of claim 1 and is rejected based on the same art citations and rationale applied above. Regarding a non-transitory computer-readable medium storing instructions that, when executed by one or more processors of a processing circuit, cause the processing circuit to (
Siebel: Sec. 0610, The disk drive unit 3616 includes a machine-readable medium 3622 on which is stored one or more sets of instructions 3624 (e.g., software) embodying any one or more of the methodologies or functions described herein. The instructions 3624 can also reside, completely or at least partially, within the main memory 3604 and/or within the processor 3602 during execution thereof by the computer system 3400. The instructions 3624 can further be transmitted or received over a network 3640 via the network interface device 3620. In some embodiments, the machine-readable medium 3622 also includes a database 3625.
Siebel: Sec. 0684, Various techniques, or certain aspects or portions thereof, may take the form of program code (i.e., instructions) embodied in tangible media, such as floppy diskettes, CD-ROMs, hard drives, a non-transitory computer readable storage medium, or any other machine readable storage medium wherein, when the program code is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the various techniques. In the case of program code execution on programmable computers, the computing device may include a processor, a storage medium readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. The volatile and non-volatile memory and/or storage elements may be a RAM, an EPROM, a flash drive, an optical drive, a magnetic hard drive, or another medium for storing electronic data.).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Bjonerud et al., U.S. Pub. 2019/0102835 (discussing the use of artificial intelligence for determining trends and patterns).
Cella et al., WO Pub. 2020/092426 A2 (discussing the managing of resources).
Bapat et al., Revolutionizing Market Analysis Using Machine Intelligence, Trend Prediction, And Large-Scale Data Processing, https://wjarr.com/content/revolutionizing-market-analysis-using-machine-intelligence-trend-prediction-and-large-scale, World Journal of Advanced Research and Reviews, 2023 (discussing the use of artificial intelligence to determine trends).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to UCHE BYRD whose telephone number is (571) 272-3113. The examiner can normally be reached Mon.-Fri.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Patricia Munson can be reached at (571) 270-5396. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/UCHE BYRD/Examiner, Art Unit 3624