Prosecution Insights
Last updated: April 19, 2026
Application No. 18/051,751

SYSTEM AND METHOD FOR CENTRALIZED OPERATIONS MANAGEMENT

Non-Final OA: §101, §103
Filed
Nov 01, 2022
Examiner
HOLZMACHER, DERICK J
Art Unit
3625
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
Honeywell International Inc.
OA Round
3 (Non-Final)
Grant Probability
44% (Moderate)
OA Rounds
3-4
To Grant
3y 3m
With Interview
73%

Examiner Intelligence

Career Allow Rate
44% (120 granted / 270 resolved; -7.6% vs TC avg)
Interview Lift
+28.4% (strong lift; resolved cases with interview)
Avg Prosecution
3y 3m (typical timeline; 33 currently pending)
Total Applications
303 (career history; across all art units)
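The headline allow-rate figure can be reproduced directly from the card's raw counts. A minimal sketch, assuming the dashboard defines allow rate as grants divided by resolved cases and states the Tech Center delta in percentage points (the report does not publish its formulas):

```python
# Reproduce the examiner card's allow-rate stat from its raw counts.
# Assumptions: allow rate = granted / resolved, and "-7.6% vs TC avg"
# is a percentage-point delta against the Tech Center average.
granted, resolved = 120, 270

allow_rate = granted / resolved          # career allow rate
tc_delta = -0.076                        # "-7.6% vs TC avg"
implied_tc_avg = allow_rate - tc_delta   # baseline the card compares against

print(f"Career allow rate: {allow_rate:.1%}")        # 44.4%, shown rounded as 44%
print(f"Implied TC 3600 average: {implied_tc_avg:.1%}")  # about 52.0%
```

The same percentage-point reading applies to the statute-level deltas in the next section.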

Statute-Specific Performance

§101: 42.6% (+2.6% vs TC avg)
§103: 28.9% (-11.1% vs TC avg)
§102: 6.0% (-34.0% vs TC avg)
§112: 16.1% (-23.9% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 270 resolved cases

Office Action

§101, §103
DETAILED ACTION

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

2. Claims 1-4, 6-11, 13-18 and 20 are currently pending. Claims 1-2, 8-10 and 15-16 have been amended. Claims 1-4, 6-11, 13-18 and 20 have been rejected.

Status of the Application

3. Claims 1-4, 6-11, 13-18 and 20 are currently pending and have been examined in this application. This communication is the first action on the merits.

Response to Amendments

4. Applicant’s amendment filed on 02/17/2026 necessitated new grounds of rejection in this office action.

Continued Examination under 37 CFR 1.114

5. A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant’s submission filed on 03/11/2026 has been entered.

Response to Arguments

6. Applicant’s arguments, see pages 15-16 of 16 filed on 02/17/2026, with respect to the 35 U.S.C. § 103 rejections of Claims 1-3, 8-10 and 15-17, have been fully considered but are not persuasive. Applicant’s arguments with respect to Claims 1-4, 6-11, 13-18 and 20 have been considered, but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Response to 35 U.S.C. § 101 Arguments

7. Applicant’s 35 U.S.C. § 101 arguments with respect to Claims 1-4, 6-11, 13-18 and 20 have been fully considered, but they are found not persuasive (see Applicant Remarks, Pages 13-16, dated 02/17/2026). Examiner respectfully disagrees.

Argument #1: (A).
Applicant argues that Claims 1-4, 6-11, 13-18 and 20 do not recite an abstract idea, law of nature, or natural phenomenon under revised Step 2A Prong One of the 35 U.S.C. § 101 analysis (see Applicant Remarks, Page 14 of 17, dated 02/17/2026). Examiner respectfully disagrees.

Specifically, Applicant argues that the amended limitations of Independent Claims 1, 8 and 15 cannot be grouped as a mental process because the claimed invention cannot be performed in the human mind, noting that the MPEP indicates a claim limitation is not a mental process when it “cannot practically be performed in the mind” (see Applicant Remarks, Page 14, dated 02/17/2026). Examiner respectfully disagrees.

In response to Applicant’s remarks for Step 2A Prong One, Examiner notes that despite the use of APIs and controllers, the substance of these claims is a high-level logic flow that a human could theoretically follow. Applicant’s argument that the invention cannot be a mental process because a human cannot “provide an API” or “dynamically adjust parameters” at computer speeds is unpersuasive under the 2019 PEG and MPEP § 2106.04(a)(2) for the following reasons.

Reason #1: Speed and Accuracy Do Not Confer Eligibility. The USPTO and the courts (e.g., Electric Power Group v. Alstom) have repeatedly held that the mere fact that a process is performed more quickly or accurately on a computer does not transform an abstract idea into a non-abstract one. The steps of “obtaining data”, “translating” it from one format to another, “comparing” it to a target, and “providing a command” are the fundamental steps of a logic-based decision-making process. A human can perform these exact steps (e.g., a manager looking at a report, translating the units, comparing them to a goal, and issuing an order). Performing these steps via an “API” is simply high-speed automation of a mental task.

Reason #2: The Steps are Functional and Result-Oriented.
The claim language uses broad, functional terms like “determining”, “comparing” and “adjusting”. The claims do not describe a specific technical improvement to how an API functions or how a controller processes signals at a circuit level. Instead, they describe what the system does (the mental logic of monitoring and adjusting) rather than how the computer’s internal functionality is improved.

Reason #3: Use of Generic Computer Components. The “site management controller” and “APIs” are recited as generic computer components used as tools to execute the logic. Just as using a calculator to perform complex math does not stop the math from being a “mathematical concept”, using an API to “translate data” does not stop the act of translation from being a “mental process” or “method of organizing human activity”.

Reason #4: Judicial Precedent. In Synopsys, Inc. v. Mentor Graphics Corp., the court found that a process that can be performed mentally, even if it would take a human a very long time, is still an abstract idea. The steps of “translating” and “comparing” data across sites are essentially logical correlations that a human mind is capable of performing, regardless of the physical medium (API) used to transport that data.

In conclusion: Claims 1-4, 6-11, 13-18 and 20 are directed to the concept of data-driven feedback loops. Because the heart of Independent Claims 1, 8 and 15 is the logical “comparison” and “adjustment” rather than a specific technical architectural improvement, the claims remain grouped as a “Mental Process” under Step 2A, Prong One.
Specifically, Applicant argues that the amended limitations of Independent Claims 1, 8 and 15 cannot be grouped as certain methods of organizing human activity as alleged in the Office Action, because the claimed features do not recite “fundamental economic principles or practices”, “commercial or legal interactions” or “managing personal behavior or relationships or interactions between people” (see Applicant Remarks, Page 14, dated 02/17/2026). Examiner respectfully disagrees.

Even without explicit mention of “money” or “people”, the logic of monitoring and managing distributed operations is a long-standing organizational practice. Applicant’s contention that the claims avoid this grouping because they lack “commercial” or “legal” language is misplaced. Under MPEP § 2106.04(a)(2), “Certain Methods of Organizing Human Activity” includes high-level concepts of management and coordination that have historically been performed by people, regardless of the technological “tools” (APIs) used to execute them.

While Applicant argues the claims are “machine-centric”, the “plurality of sites” and “management controllers” describe a relationship or interaction between entities. Coordinating activities across different “sites” is a fundamental method of organizing a business or project. Whether the “sites” are factories, offices, or remote nodes, the act of collecting data from one location to influence an action at another is a basic organizational interaction. The “site management controller” is simply a digital proxy for a human supervisor who traditionally “obtains data”, “compares it to a target” and “issues a command”. The claims recite “commercial or legal interactions” (in the broadest sense). The USPTO and courts often view “managing a business process” as an abstract idea. “Site management” and “operational parameters” are terms of art in business logistics.
The requirement that data from a first site be “compatible” with a second site to reach a “target” describes a workflow or supply-chain logic. Even if the “tools” are technical, the overarching purpose of ensuring multiple locations meet a specific operational goal is a commercial management practice.

The features are not “technical” improvements to computing. To move out of the “Organizing Human Activity” grouping, the claim must do more than automate a known management process. The “specified API” and “translation” steps do not improve how a computer routes data at the hardware level; they facilitate the organization of information for the purpose of oversight. As in Alice Corp., taking a standard method of coordination (comparing data to a goal) and requiring its performance via computer APIs/controllers does not change the fundamental nature of the activity from an organizational one to a technical one.

In conclusion: Claims 1-4, 6-11, 13-18 and 20 describe a management framework for distributed operations. Because the “interaction” between the sites and the “management” of their parameters are the core of Independent Claims 1, 8 and 15, the invention is properly grouped, alternatively, as “Certain Methods of Organizing Human Activity”.

Argument #2: (B).

Applicant argues that Claims 1-4, 6-11, 13-18 and 20 recite additional elements that integrate the judicial exception into a practical application under revised Step 2A Prong Two of the 35 U.S.C. § 101 analysis (see Applicant Remarks, Pages 14-15, dated 02/17/2026). Examiner respectfully disagrees. Specifically, Applicant argues that the claimed invention, as a whole, integrates the elements of the Independent Claims into a practical application under the second prong of the Step 2A analysis, because those elements reflect a specific improvement to the technical field of site-specific control.
In particular, the elements of the Independent Claims recite identifying site-specific problems and then, based on these identified problems, causing operational adjustments to be executed at the site to address them (see Applicant Remarks, Pages 14-15, dated 02/17/2026). Examiner respectfully disagrees.

In response to Applicant’s remarks under Step 2A Prong Two, Examiner notes that the asserted “improvements” are merely results-oriented and do not provide a specific technical solution. The Office disagrees with Applicant’s contention that the claims are integrated into a practical application through “specific improvements to the technical field of site-specific control.” Under MPEP § 2106.04(d) and the 2019 PEG, a claim that merely recites an abstract idea and adds “apply it” or “do it on a computer” does not provide a practical application.

The “improvement” is functional, not technical. Applicant argues that “identifying site-specific problems” and “causing operational adjustments” constitute a technical improvement. However, these are functional results, not a technical description of how the system achieves those results. Describing the goal of a system (e.g., “addressing a problem” or “adjusting a parameter”) is not the same as describing a technical solution. Per Electric Power Group, simply collecting data from multiple sources and displaying or using it to monitor a system is a “general-purpose” use of computer technology, not an improvement to the technology itself.

The claims recite only well-understood, routine, and conventional (WURC) steps. The integration of a “site management controller”, “APIs” and “data translation” represents the use of generic computer components for their intended purposes. Moving data from one API to another is a standard function of modern software architecture. These claims do not recite a new type of API or a novel protocol for data translation. Instead, they use these tools as a “conduit” for the abstract idea of monitoring and adjusting.
As in Alice Corp., simply adding a computer-aided “adjustment” to a known logic loop does not create a practical application. The claims lack “specific limitations” for technical control. The “operational adjustments” are not tied to a specific technological process or a unique physical transformation. The claim language is so broad (e.g., “executing a first operational adjustment”) that it could encompass almost any activity, from turning off a light to adjusting a financial ledger. Without a specific technical constraint on how the “adjustment” physically alters the site tools to solve a concrete engineering problem, the claims remain “directed to” the abstract idea of a feedback loop rather than a “practical application” of that loop.

Lastly, there is no improvement to the functioning of the computer itself. The claims do not improve the speed, memory, or security of the controller or the API. The “translation” to a “specified API” is a logic step for data compatibility, not a technical improvement to the data-processing hardware. Because the “improvement” is to the information being managed (the site data) rather than the underlying technology, it fails to meet the threshold for integration under Prong Two.

In conclusion: Because Independent Claims 1, 8 and 15 merely automate the abstract process of “monitoring and responding” using generic tools to achieve a functional result, they do not integrate the abstract idea into a practical application. Therefore, the claims, when viewed as a whole, are “directed to” the abstract idea of standardizing and managing remote operations. The claim limitations, including the data translation and dynamic adjustment, do not integrate this abstract idea into a specific, non-generic practical application or provide a technical solution to a technical problem that improves computer technology or another technical field. Thus, Claims 1-4, 6-11, 13-18 and 20 are patent ineligible at Step 2A, Prong Two.

Argument #3: (C).
Applicant argues that Claims 1-4, 6-11, 13-18 and 20 recite additional elements that amount to significantly more than the recited judicial exceptions under revised Step 2B of the 35 U.S.C. § 101 analysis (see Applicant Remarks, Page 15, dated 02/17/2026). Examiner respectfully disagrees.

Specifically, Applicant argues that the amended limitations of Independent Claims 1, 8 and 15 provide an inventive concept and amount to significantly more than the judicial exception itself, for example by dynamically adjusting operational parameters based on real-time telemetry data collected from distributed site tools. The subject matter of amended Independent Claim 1 ensures that corrective actions and operational adjustments are implemented automatically and in real time, without human intervention, thereby enhancing the facility’s adaptability, resource utilization, and overall operational efficiency (see Applicant Remarks, Page 11 of 16, dated 08/19/2025). Examiner respectfully disagrees.

In response, Examiner refers Applicant to the 35 U.S.C. § 101 analysis for Step 2B shown below (see the Claim Rejections - 35 U.S.C. § 101 section), particularly for Independent Claims 1, 8 and 15. The claims do not recite additional elements that amount to significantly more than the recited judicial exceptions, because they are merely directed to the particulars of the abstract idea and likewise do not add significantly more to the above-identified judicial exceptions. The limitations are of the kinds referenced in MPEP § 2106.05(I)(A) that are not enough to qualify as significantly more when recited with the abstract idea, which include: (1) adding the words “apply it” (or an equivalent) to the judicial exception, (2) mere instructions to implement an abstract idea on a computer and provide the results to the user on a computer, and (3) generally linking the use of the judicial exception to a particular technological environment or field of use.
For Applicant’s remarks for Step 2B for Independent Claims 1, 8 and 15, Examiner notes that the elements perform only “generic computer functions”. The individual components (controllers, APIs and data translation) are used according to their ordinary functions. The “site management controller” is simply a generic processor or computer configured to perform standard data handling. Providing an API to “obtain data” and “export data” are quintessential generic computer functions. Under Alice Corp. v. CLS Bank, merely using a computer to perform an abstract idea more efficiently does not constitute an inventive concept.

The “ordered combination” adds no technical improvement. Even when viewed as a whole, the steps follow a standard logical progression: Collect -> Translate -> Export -> Adjust. The “ordered combination” of these steps reflects a conventional workflow for data integration. There is no “technical bypass” or unique synchronization described; the steps are performed in their natural, expected order. Unlike Bascom, where the specific location of the filter created a technical benefit, this arrangement simply automates the manual process of gathering information from different sources and standardizing it. API translation and parameter adjustment were well established in the field of distributed systems and industrial automation long before the priority date. Applicant has failed to identify a specific technical problem (e.g., a memory limitation or network protocol failure) that this specific architecture solves. Instead, the claims solve a “business” or “administrative” problem of managing multiple sites, which is not a technical improvement to the computer itself.

Functional Language and “Mental Process” Equivalency. The claims describe what is done (translate, adjust) rather than how it is done at a technical level. The claims rely on purely functional language (e.g., “translate the first site data … to the specified API”).
They do not recite a specific algorithm or a new data structure that enables this translation. Because the steps can be performed conceptually by a human (e.g., translating data from one format to another using a lookup table), the implementation on a computer remains “routine” and lacks the “significantly more” required by Step 2B.

Preemption of Fundamental Data Management. If these claims were found eligible, they would preempt the basic concept of using a standardized API to manage multiple remote sites. Because the claims use generic components to perform the fundamental task of data standardization, they effectively grant a monopoly over the abstract idea of “multi-site translation.” This is exactly what the Alice test seeks to prevent: the patenting of the “building blocks” of human ingenuity.

Moreover, with respect to Independent Claims 1, 8 and 15, certain limitations recite (1) “mere data gathering” (e.g., “obtain first site data from the first site APIs” and “obtain second site data from the second site APIs”) and (2) “mere data outputting” (e.g., “export the first site data” and “export the second site data” (see Independent Claims 1, 8 and 15)), each of which reflects mere insignificant extra-solution activity (see MPEP § 2106.05(g)). Furthermore, these claim limitations, as demonstrated above for Independent Claims 1, 8 and 15, reflect Well-Understood, Routine and Conventional (WURC) activities under MPEP § 2106.05(d)(II): see receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir.
2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network).

Furthermore, site management controllers are standard distributed controllers used for decades in industrial automation and network management. APIs for data extraction: providing an API to “obtain data” is the fundamental purpose of an API. Under Alice, using a computer (or API) to perform an abstract task faster or more accurately does not constitute an inventive concept. Data translation: converting data from one format to another to ensure compatibility with a “specified API” is a core function of middleware, which is routine in the art of software engineering.

Applicant may argue that the “ordered combination” is unique, but the sequence follows a standard logical data pipeline: Collect -> Standardize -> Export -> Act. This sequence is a WURC method for integrating heterogeneous systems. Modern software-defined networking and Industrial Internet of Things architectures utilize exactly this flow: localized controllers aggregate data from various tools, translate it into a northbound API format, and adjust network parameters dynamically based on that data.

The claims are directed to a “business” or “administrative” improvement (managing multiple sites more easily) rather than a technical one. The “specified API” does not make the controller run more efficiently; it simply allows it to handle specific types of data. This is a “content-based” or “informational” improvement, which courts have consistently found ineligible. Because the claims use generic functional language (e.g., “translate,” “export”, “adjust”) without reciting a specific, non-conventional algorithm for that translation, they preempt the basic “building blocks” of system integration.
The use of “site management controllers” and “API translation” is so ubiquitous in modern cloud and industrial infrastructure that it qualifies as “widely prevalent” in the relevant industry. Applicant’s “dynamic adjustment” is a standard closed-loop feedback system, a concept as old as the thermostat and a staple of control theory.

In conclusion, the combination of the additional elements, considered individually and as an ordered combination as a whole, does not impose a “meaningful limit” on the abstract idea, and thus fails to provide “significantly more” at Step 2B of the Alice test. Claims 1-4, 6-11, 13-18 and 20 are therefore not patent eligible under 35 U.S.C. § 101.

Claim Rejections - 35 USC § 101

8. 35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

9. Claims 1-4, 6-11, 13-18 and 20 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: Claims 1-4, 6-11, 13-18 and 20 are each directed to a statutory category, namely a “process” or “method” (Claims 1-4 and 6-7), a “system” or “apparatus” (Claims 8-11 and 13-14), and a “non-transitory computer-readable medium” or “article of manufacture” (Claims 15-18 and 20). We proceed to analyze the claims under Step 2A Prong One below.
Step 2A Prong One: Independent Claims 1, 8 and 15 recite limitations that set forth the abstract idea(s), namely (shown in bold except where strikethrough):

“” (see Independent Claim 8);
“” (see Independent Claim 8);
“configuring a first site of a first site of the plurality of sites, the first site including a plurality of first site tools controlled by the first site ” (see Independent Claims 1, 8 and 15);
“provide to the first site to obtain data from the plurality of first site tools” (see Independent Claims 1, 8 and 15);
“obtain first site data ” (see Independent Claims 1, 8 and 15);
“translate the first site data ” (see Independent Claims 1, 8 and 15);
“export the first site data” (see Independent Claims 1, 8 and 15);
“configuring a second site of a second site of the plurality of sites, the second site including a plurality of second site tools controlled by the second site” (see Independent Claims 1, 8 and 15);
“provide to the second site to obtain data from the plurality of second site tools” (see Independent Claims 1, 8 and 15);
“obtain second site data ” (see Independent Claims 1, 8 and 15);
“translate the second site data ” (see Independent Claims 1, 8 and 15);
“export the second site data” (see Independent Claims 1, 8 and 15);
“selected such that the first site data and the second site data are converted to a format compatible ” (see Independent Claims 1, 8 and 15);
“dynamically adjusting a first operational parameter associated with the first site based on the first site data that has been translated from , wherein dynamically adjusting the first operational parameter comprises” (see Independent Claims 1, 8 and 15);
“determining a first operational condition state of the first site based on the first site data” (see Independent Claims 1, 8 and 15);
“comparing the first operational condition state of the first site to a first operational condition target” (see Independent Claims 1, 8 and 15);
“providing a first operational adjustment command based on the comparison of the first operational condition state of the first site to the first operational condition target, wherein the first operational adjustment command is configured to cause execution of a first operational adjustment at the first site” (see Independent Claims 1, 8 and 15);
“dynamically adjusting a second operational parameter associated with the second site based on the second site data that has been translated from , wherein dynamically adjusting the second operational parameter comprises” (see Independent Claims 1, 8 and 15);
“determining a second operational condition state of the second site based on the second site data” (see Independent Claims 1, 8 and 15);
“comparing the second operational condition state of the second site to a second operational condition target” (see Independent Claims 1, 8 and 15);
“providing a second operational adjustment command based on the comparison of the second operational condition state of the second site to the second operational condition target, wherein the second operational adjustment command is configured to cause execution of a second operational adjustment at the second site, wherein the second operational adjustment command is different than the first operational adjustment command” (see Independent Claims 1, 8 and 15).

Here, for Independent Claims 1, 8 and 15, these steps are directed to the abstract idea of “automated cross-platform data normalization and control”: comparing operational site data to targets and using that comparison to issue control commands. The claims involve collecting disparate data, translating it into a standardized format, and using that information to adjust settings, which is a method of managing information and organizing operational activities, not a physical invention itself. The core is data normalization and conditional automated adjustment (collect data -> translate to standard -> act).
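The control flow the Office Action distills from the independent claims (collect data -> translate to standard -> compare to target -> act) can be sketched in a few lines. This is an editorial illustration only; every function and field name below is invented for the sketch and appears nowhere in the claims or the application:

```python
from typing import Optional

def translate_to_specified_api(raw: dict) -> dict:
    """Normalize site-local formats into a common ("specified API") schema.
    The field names here are hypothetical."""
    return {"throughput": raw.get("units_per_hour", raw.get("uph", 0.0))}

def dynamically_adjust(site_data: dict, target: float) -> Optional[str]:
    """Determine the operational condition state, compare it to the target,
    and provide an adjustment command if the target is not met."""
    state = site_data["throughput"]                   # determine condition state
    if state < target:                                # compare state to target
        return f"increase_rate:{target - state:.1f}"  # adjustment command
    return None                                       # no adjustment needed

# Two sites export data in different local formats
first_site = translate_to_specified_api({"units_per_hour": 90.0})
second_site = translate_to_specified_api({"uph": 120.0})

cmd_1 = dynamically_adjust(first_site, target=100.0)   # "increase_rate:10.0"
cmd_2 = dynamically_adjust(second_site, target=100.0)  # None: target already met
```

Even in this toy form, the examiner's framing is visible: each step maps onto an observe/compare/decide action a human supervisor could perform, which is the crux of the Mental Process grouping that follows.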
Mental Process: The steps of analyzing data to “dynamically adjust” parameters resemble intellectual activities (assessing data and deciding to change a setting) that could be performed mentally, despite being automated. The translation and data analysis portions can be performed mentally or through simple, standard computation, meeting the “mental process” exception.

Method of Organizing Human Activity: Configuring site controllers to “provide APIs” and “export data” in a common format is a method of organizing business or operational data, similar to data processing or bookkeeping. The overall scheme of using controllers to manage multi-site tools and their data is fundamentally a system for managing business/tool operational activities.

Configuring Controllers (First/Second Site): This step aligns with “managing relationships and legal obligations”, as it organizes how site management controllers and tools are structured, classifying it as a method of organizing human activity.

Providing and Obtaining Site Data via APIs: This constitutes “collecting, analyzing, classifying, and storing data”. It is an abstract idea because it is a routine, high-level function of managing data gathering, often analogous to human observation and manual data gathering.

Translating Site Data to the Specified API: This is a mental process. Translating between data formats can be performed in the human mind, or using paper and pencil, and is thus categorized as an abstract concept.

Exporting Data: This is a “collecting, analyzing, classifying, and storing data” activity.

Dynamically Adjusting Parameters: While automated, the core act of adjusting a parameter based on analysis can be described as a “method of organizing human activity” or a “mental process”, specifically the decision-making step that follows data evaluation.

“Determining a first operational condition state...” & “Comparing... to a first operational condition target”: This represents a mental process or fundamental cognitive act (comparing current vs. target state). It is akin to manual data analysis, or checking whether a machine is running “too hot” compared to a set limit.

“Providing a first operational adjustment command...” & “...cause execution of a first operational adjustment”: This is a method of organizing human activity, specifically an administrative act of initiating a change in business operations based on the comparison. It is a command decision.

“Dynamically adjusting a second operational parameter... translated from the second site APIs”: This is a method of organizing human activity utilizing automation to manage data. It focuses on translating information to a standard format (API integration) and automating the administrative task of managing a separate, second, potentially different, operation.

These steps are classified as abstract because they can be performed in the human mind (collecting data, comparing, deciding on an adjustment) and cover fundamental economic practices or commercial interactions (managing operational sites for optimal performance). Therefore, these abstract idea limitations (as identified above in bold), under their broadest reasonable interpretation of the claims as a whole, cover performance of their limitations as “Certain Methods of Organizing Human Activity”, which pertains to (1) managing personal behavior or relationships or interactions between people (including teachings or following rules or instructions) or (2) fundamental economic principles or practices (including mitigating risk).
Additionally, or alternatively, these abstract idea limitations (as identified above in bold), under their broadest reasonable interpretation of the claims as a whole, cover performance of their limitations as “Mental Processes”, which pertain to (1) concepts performed in the human mind (including observations, evaluations or judgments) or (2) using pen and paper as a physical aid; using such an aid to help perform these mental steps does not negate the mental nature of these limitations. The use of “physical aids” in implementing the abstract mental process does not preclude the claim from reciting an abstract idea. See MPEP § 2106.04(a)(III)(C).

That is, other than reciting “first site APIs”, “a first site management controller”, “a second site management controller”, “one or more processors”, “a user device”, “a memory”, “second site APIs” and “specified API”, nothing in the claim elements precludes the steps from being performed as “Certain Methods of Organizing Human Activity”, which pertains to managing personal behavior or relationships or interactions between people (including teachings or following rules or instructions), and additionally, or alternatively, as “Mental Processes”, which pertain to concepts performed in the human mind (including observations, evaluations or judgments) or using pen and paper as a physical aid.

Therefore, at Step 2A Prong One, yes, Claims 1-4, 6-11, 13-18 and 20 recite an abstract idea. We proceed to analyze the claims at Step 2A Prong Two.

Step 2A Prong Two: With respect to Step 2A Prong Two of the eligibility inquiry (as explained in MPEP § 2106.04(d)), the judicial exception is not integrated into a practical application.
Independent Claim 8 recites additional elements directed to: (e.g., "first site APIs", "a first site management controller", "a second site management controller", "one or more processors", "a user device", "a memory", "second site APIs" & "specified API"). Independent Claims 1 and 15 recite additional elements directed to: (e.g., "first site APIs", "a first site management controller", "a second site management controller", "a user device", "second site APIs" & "specified API"). These additional elements have been considered individually and in combination, but fail to integrate the abstract idea into a practical application because they amount to using computing elements or instructions (software) to perform the abstract idea, similar to adding the words "apply it" (or an equivalent), which merely serves to link the use of the judicial exception to a particular technological environment. See MPEP § 2106.05(f) and MPEP § 2106.05(h). In addition, these limitations fail to provide an improvement to the functioning of a computer or to any other technology or technical field, fail to apply the exception with a particular machine, fail to apply the judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition, fail to effect a transformation of a particular article to a different state or thing, and fail to apply/use the abstract idea in a meaningful way beyond generally linking the use of the judicial exception to a particular technological environment. Accordingly, because the Step 2A Prong One and Prong Two analysis resulted in the conclusion that the claims are directed to an abstract idea, additional analysis under Step 2B of the eligibility inquiry must be conducted in order to determine whether any claim element or combination of elements amounts to significantly more than the judicial exception.
Therefore, at Step 2A Prong Two, Claims 1-4, 6-11, 13-18 and 20 are directed to the abstract idea and do not recite additional elements that integrate it into a practical application. Step 2B: (As explained in MPEP § 2106.05), it has been determined that the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. Independent Claim 8 recites additional elements directed to: (e.g., "a first site management controller" & "a second site management controller" & "one or more processors" & "a user device" & "a memory"). Independent Claims 1 and 15 recite additional elements directed to: (e.g., "a first site management controller" & "a second site management controller" & "a user device"). These elements have been considered individually and in combination, but fail to add significantly more to the claims because they amount to using computing elements or instructions (software) to perform the abstract idea, similar to adding the words "apply it" (or an equivalent), which merely serves to link the use of the judicial exception to a particular technological environment (computing environment) and does not amount to significantly more than the abstract idea itself. See MPEP § 2106.05(f) and MPEP § 2106.05(h). Notably, Applicant's Specification suggests that the claimed invention relies on nothing more than a general-purpose computer executing the instructions to implement the invention (e.g., see Applicant's Specification ¶ [0066-0067]: "CPU 920 may be any type of processor device including, for example, any type of special purpose or a general-purpose microprocessor device. As will be appreciated by persons skilled in the relevant art, CPU 920 also may be a single processor in a multi-core/multiprocessor system, such system operating alone, or in a cluster of computing devices operating in a cluster or server farm.").
Independent Claims 1, 8 and 15: Examiner notes that the additional elements of (e.g., "first site APIs" & "second site APIs" & "a specified API"), when considered individually and as an ordered combination (as a whole), do not integrate the abstract idea into a practical application under Step 2A Prong Two and do not amount to significantly more than the judicial exceptions under Step 2B due to: (1) limiting a particular field of use or technological environment pertaining to monitoring and analyzing operational metrics of a plurality of facilities for centralized operations management using a computer in a field service operations management environment (see MPEP § 2106.05(h)) or (2) reciting mere instructions to implement an abstract idea on a computer or using a computer as a tool to "apply" the recited judicial exceptions (see MPEP § 2106.05(f)). Moreover, with respect to Independent Claims 1, 8 and 15, certain/particular limitations recite (1) "mere data gathering" (e.g., "obtain first site data from the first site APIs" & "obtain second site data from the second site APIs") & (2) "mere data outputting" (e.g., "export the first site data" & "export the second site data" (see Independent Claims 1, 8 and 15)), wherein each of these claim limitations reflects mere insignificant extra-solution activity (see MPEP § 2106.05(g)). Furthermore, these claim limitations as demonstrated above for Independent Claims 1, 8 and 15 reflect Well-Understood, Routine and Conventional Activities (WURC) under MPEP § 2106.05(d)(II): See Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc. v.
Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network). In addition, when taken as an ordered combination, the ordered combination adds nothing that is not already present when the elements are taken individually. There is no indication that the combination of elements integrates the abstract idea into a practical application. Therefore, when viewed as a whole, these additional claim elements do not provide meaningful limitations to transform the abstract idea into a practical application of the abstract idea or that, as an ordered combination, amount to significantly more than the abstract idea itself. Dependent Claims 2-4, 6-7, 9-11, 13-14, 16-18 and 20 recite substantially the same or similar additional elements as addressed above and, when considered individually and as an ordered combination (as a whole), recite the same abstract idea(s) as shown in Independent Claims 1, 8 and 15, along with further steps/details pertaining to "Certain Methods of Organizing Human Activities," which pertains to (1) managing personal behavior or relationships or interactions between people (including teaching and following rules or instructions) or (2) fundamental economic principles or practices (including mitigating risk), and additionally or alternatively "Mental Processes," which pertains to (3) concepts performed in the human mind (including observations or evaluations or judgments) or (4) using pen and paper as a physical aid. Dependent Claims 3-4, 6-7, 10-11, 13-14, 17-18 and 20 further narrow the abstract ideas, and are therefore still ineligible for the reasons previously provided in Step 2A Prong Two and Step 2B for Independent Claims 1, 8 and 15.
Moreover, with respect to Dependent Claims 2, 9 and 16, certain/particular limitations recite (1) "mere data gathering" (e.g., "obtain third site data from the third site APIs" (see Dependent Claims 2, 9 and 16)) & (2) "mere data outputting/displaying" (e.g., "export the third site data to the user device" (see Dependent Claims 2, 9 and 16)), wherein each of these claim limitations reflects mere insignificant extra-solution activity (see MPEP § 2106.05(g)). Furthermore, these claim limitations as demonstrated above for Dependent Claims 2, 9 and 16 reflect Well-Understood, Routine and Conventional Activities (WURC) under MPEP § 2106.05(d)(II): See Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network).
Dependent Claims 2, 9 and 16: With respect to reliance on (e.g., "third site management controller" & "third site APIs" & "third APIs") as additional elements shown in Dependent Claims 2, 9 and 16, these additional elements, when considered individually and as an ordered combination (as a whole) in view of these claim limitations, do not provide limitations that are indicative of integration into a practical application under Step 2A Prong Two and do not amount to significantly more than the recited judicial exceptions under Step 2B due to: (1) reciting mere instructions to implement an abstract idea on a computer or using a computer as a tool to "apply" the recited judicial exceptions by providing the results to the user on a computer (see MPEP § 2106.05(f)) or (2) limiting a particular field of use or technological environment pertaining to monitoring and analyzing operational metrics of a plurality of facilities for centralized operations management using a computer in a field service operations management environment (see MPEP § 2106.05(h)). The additional element of "application programming interfaces (APIs)" in Claims 1-2, 8-9 and 15-16 does not amount to significantly more than the judicial exceptions under Step 2B due to being expressly recognized as Well-Understood, Routine and Conventional (WURC) in the art. See also US PG Pub (US 2016/0358116 A1), hereinafter Cline et al. Cline at ¶ [0045]: "A standard API or standard format 165 (FIG. 3) is an import/export services layer used by the third-party content providers to publish and output their updated guidelines into a format that can automatically be uploaded by external rules source 160 and stores in rule warehouse 140.". The ordered combination of elements in the Dependent Claims (including the limitations inherited from the parent claim(s)) adds nothing that is not already present when the elements are taken individually.
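As a hypothetical sketch of the "standard API or standard format" concept for which Cline is cited, and of the claims' translation of site data from site-specific APIs into one specified API format, a normalization step might look like the following. The site names, field names, and unit conversion are invented for illustration and appear nowhere in the record:

```python
# Hypothetical sketch: records obtained from site-specific APIs arrive in
# differing formats and are translated into one common ("specified") format.
def translate_to_specified_api(site_record: dict, site: str) -> dict:
    if site == "site1":
        # First site's API reports temperature directly in Celsius.
        return {"site": site, "metric": site_record["name"],
                "value_c": site_record["celsius"]}
    elif site == "site2":
        # Second site's API reports Fahrenheit; convert to the common unit.
        return {"site": site, "metric": site_record["label"],
                "value_c": (site_record["fahrenheit"] - 32) * 5 / 9}
    raise ValueError(f"unknown site: {site}")
```

Once both sites' records share the common `value_c` field, downstream comparison or dashboard logic can treat them uniformly regardless of the originating site's format.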
There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Accordingly, the subject matter encompassed by the dependent claims fails to amount to a practical application or significantly more than the abstract idea itself. Therefore, under Step 2B, Claims 1-4, 6-11, 13-18 and 20 do not include additional elements that are sufficient to amount to significantly more than the recited judicial exceptions. Thus, Claims 1-4, 6-11, 13-18 and 20 are ineligible under 35 U.S.C. § 101. Claim Rejections - 35 USC § 103 10. This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention. 11. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. 12. Claims 1-3, 8-10 and 15-17 are rejected under 35 U.S.C.
103 as being unpatentable over US PG Pub (US 2020/0365274 A1), hereinafter Karl et al., in view of Foreign Patent Application (WO 2024/086015 A1), hereinafter Nixon et al. Regarding Independent Claims 1 and 15, Karl's method / non-transitory computer-readable medium for centralized operations management for a plurality of sites teaches the following: - configuring a first site management controller of a first site of the plurality of sites, the first site including a plurality of first site tools controlled by the first site management controller to (see at least Karl: Fig. 3 & ¶ [0050-0053] & ¶ [0079]. Karl teaches that the system may generate a different dashboard and mini-card for each lab metric of the user and its facilities/sites. See also Karl at ¶ [0050]: The scorecard may include a key performance indicator (KPI) portion 302 that lists each metric and a select facility portion 310 that allows the user to change facilities/sites (for any facilities whose data the user is authorized to access), and regenerate the scorecard with the KPIs for the selected facility, wherein the selected facility/site may be (for example) a specified hospital. See also Karl at ¶ [0079]: Some other possibilities for implementing aspects include: memory devices, microcontrollers with memory (such as EEPROM), embedded microprocessors, firmware, software, etc.); - provide first site application programming interfaces (APIs) to the first site (see at least Karl: ¶ [0038-0039]. Karl teaches that the acquire module may include, for example, an application programming interface ("API") that allows the acquire module to gather the information from each of the healthcare data sources. The acquire module may have other mechanisms to gather/poll each healthcare data source and ingest the healthcare data for that particular healthcare data source (with its possibly unique healthcare data format) into the system for processing and analysis.
See also Karl at ¶ [0039]: As new healthcare data sources are going to be ingested into the system, the acquire module 104A may be supplemented with new APIs or other mechanism to be able to retrieve the healthcare data for the new healthcare data source with its own unique data format.) to obtain data from the plurality of first site tools (see at least Karl: ¶ [0050-0053] & (Claim 1 of Karl). Karl notes that the scorecard may provide users with a high-level easy-to-read overview of system or individual facility performance. Color-coding the results provides an immediate visual about which metrics the organization is achieving or falling short of performance expectations in any area. The user will accordingly know which areas require additional attention, and the user can use the dashboard to drill into more detail to further investigate issues and conduct root-cause analyses. From the scorecard shown in FIG. 3, the system may provide the user with a navigation menu that allows the user to navigate to the various performance results of the lab or labs that are owned/controlled/managed by the particular user. See also Claim 1 of Karl reference noting: “Generating, using the plurality of healthcare data metrics, a plurality of dashboards wherein each dashboard displays a particular one of the calculated healthcare data metrics in a color that indicates whether the target for the particular one of the calculated healthcare data metric set forth in the rule was met and each dashboard is active and permits a drill down of the conformed healthcare data that is used to generate the dashboard.”); - obtain first site data (see at least Karl: ¶ [0050-0053] & (Claim 1 of Karl). Karl notes that the scorecard may provide users with a high-level easy-to-read overview of system or individual facility performance. Color-coding the results provides an immediate visual about which metrics the organization is achieving or falling short of performance expectations in any area. 
The user will accordingly know which areas require additional attention, and the user can use the dashboard to drill into more detail to further investigate issues and conduct root-cause analyses. From the scorecard shown in FIG. 3, the system may provide the user with a navigation menu that allows the user to navigate to the various performance results of the lab or labs that are owned/controlled/managed by the particular user. See also Claim 1 of Karl reference noting: “Generating, using the plurality of healthcare data metrics, a plurality of dashboards wherein each dashboard displays a particular one of the calculated healthcare data metrics in a color that indicates whether the target for the particular one of the calculated healthcare data metric set forth in the rule was met and each dashboard is active and permits a drill down of the conformed healthcare data that is used to generate the dashboard.”) from the first site APIs (see at least Karl: ¶ [0038-0039]. Karl teaches that the acquire module may include, for example, an application programming interface (“API”) that allows the acquire module to gather the information from each of the healthcare data sources. The acquire module may have other mechanisms to gather/poll each healthcare data source and ingest the healthcare data for that particular healthcare data source (with its possibly unique healthcare data format) into the system for processing and analysis. See also Karl at ¶ [0039]: As new healthcare data sources are going to be ingested into the system, the acquire module 104A may be supplemented with new APIs or other mechanism to be able to retrieve the healthcare data for the new healthcare data source with its own unique data format.) - translate the first site data (see at least Karl: ¶ [0039]: Karl teaches that the acquire module 104A may also translate the data from each data source into a common data format used by the store 104C of the system. 
The acquire module 104A may also acquire a direct copy of the source data in tabular format. The system may have separate database for the storage of the data from each data source. The acquire module 104A may also process data in the well-known format into a common format, such as by using a commercially available tool from Carista (www.caristaapp.com).) from the first site APIs to a specified API (see at least Karl: ¶ [0038-0039]. Karl teaches that the acquire module may include, for example, an application programming interface (“API”) that allows the acquire module to gather the information from each of the healthcare data sources. The acquire module may have other mechanisms to gather/poll each healthcare data source and ingest the healthcare data for that particular healthcare data source (with its possibly unique healthcare data format) into the system for processing and analysis. See also Karl at ¶ [0039]: As new healthcare data sources are going to be ingested into the system, the acquire module 104A may be supplemented with new APIs or other mechanism to be able to retrieve the healthcare data for the new healthcare data source with its own unique data format.) - export the first site data (see at least Karl: ¶ [0037-0040] & ¶ [0046] & ¶ [0050-0051]. Karl teaches that each of the modules 104A, 104B and 104D may be alternatively implemented in hardware and each module may be a hardware device that performs the operations and functions of each module as described below. A store 104C of the backend component 104 may be storage for system data, business logic that is part of the conform module 104B that is used to process the incoming healthcare data, the logic used by the analytics module that performs the analysis of the healthcare data and generates the outputs screens and the dashboards, the incoming healthcare data and the conformed healthcare data stored by the system. 
See also Karl at ¶ [0046]: “The output screens and dashboards generated by the system may be downloaded and displayed on one or more computing devices 106 that can access the backend 104 over a communications path such as a computer network, the Internet, Ethernet, a wireless network and the like. In one implementation, each computing device may execute a well-known browser application and communicate/exchange data with the backend 104 using a known HTML protocol and the backend 104 may have a web server that generates the HTML code that may be sent to each computing device.); - configuring a second site management controller of a second site of the plurality of sites, the second site including a plurality of second site tools controlled by the second site management controller to (see at least Karl: Fig. 3 & ¶ [0050-0053] & ¶ [0079]. Karl teaches that the system may generate a different dashboard and mini-card for each lab metric of the user and its facilities/sites. See also Karl at ¶ [0050]: The scorecard may include a key performance indicator (KPI) portion 302 that lists each metric and a select facility portion 310 that allows the user to change facilities/sites (for any facilities whose data the user is authorized to access), and regenerate the scorecard with the KPIs for the selected facility wherein the selected facility/site may be (for example, a specified hospital). See also Karl at ¶ [0079]: Some other possibilities for implementing aspects include: memory devices, microcontrollers with memory (such as EEPROM), embedded microprocessors, firmware, software, etc.) - provide second site APIs to the second site (see at least Karl: ¶ [0038-0039]. Karl teaches that the acquire module may include, for example, an application programming interface (“API”) that allows the acquire module to gather the information from each of the healthcare data sources. 
The acquire module may have other mechanisms to gather/poll each healthcare data source and ingest the healthcare data for that particular healthcare data source (with its possibly unique healthcare data format) into the system for processing and analysis. See also Karl at ¶ [0039]: As new healthcare data sources are going to be ingested into the system, the acquire module 104A may be supplemented with new APIs or other mechanism to be able to retrieve the healthcare data for the new healthcare data source with its own unique data format.) to obtain data from the plurality of second site tools (see at least Karl: ¶ [0050-0053] & (Claim 1 of Karl). Karl notes that the scorecard may provide users with a high-level easy-to-read overview of system or individual facility performance. Color-coding the results provides an immediate visual about which metrics the organization is achieving or falling short of performance expectations in any area. The user will accordingly know which areas require additional attention, and the user can use the dashboard to drill into more detail to further investigate issues and conduct root-cause analyses. From the scorecard shown in FIG. 3, the system may provide the user with a navigation menu that allows the user to navigate to the various performance results of the lab or labs that are owned/controlled/managed by the particular user. See also Claim 1 of Karl reference noting: “Generating, using the plurality of healthcare data metrics, a plurality of dashboards wherein each dashboard displays a particular one of the calculated healthcare data metrics in a color that indicates whether the target for the particular one of the calculated healthcare data metric set forth in the rule was met and each dashboard is active and permits a drill down of the conformed healthcare data that is used to generate the dashboard.”); - obtain second site data (see at least Karl: ¶ [0050-0053] & (Claim 1 of Karl). 
Karl notes that the scorecard may provide users with a high-level easy-to-read overview of system or individual facility performance. Color-coding the results provides an immediate visual about which metrics the organization is achieving or falling short of performance expectations in any area. The user will accordingly know which areas require additional attention, and the user can use the dashboard to drill into more detail to further investigate issues and conduct root-cause analyses. From the scorecard shown in FIG. 3, the system may provide the user with a navigation menu that allows the user to navigate to the various performance results of the lab or labs that are owned/controlled/managed by the particular user. See also Claim 1 of Karl reference noting: “Generating, using the plurality of healthcare data metrics, a plurality of dashboards wherein each dashboard displays a particular one of the calculated healthcare data metrics in a color that indicates whether the target for the particular one of the calculated healthcare data metric set forth in the rule was met and each dashboard is active and permits a drill down of the conformed healthcare data that is used to generate the dashboard.”) from the second site APIs (see at least Karl: ¶ [0038-0039]. Karl teaches that the acquire module may include, for example, an application programming interface (“API”) that allows the acquire module to gather the information from each of the healthcare data sources. The acquire module may have other mechanisms to gather/poll each healthcare data source and ingest the healthcare data for that particular healthcare data source (with its possibly unique healthcare data format) into the system for processing and analysis. 
See also Karl at ¶ [0039]: As new healthcare data sources are going to be ingested into the system, the acquire module 104A may be supplemented with new APIs or other mechanism to be able to retrieve the healthcare data for the new healthcare data source with its own unique data format.) - translate the second site data (see at least Karl: ¶ [0039]: Karl teaches that the acquire module 104A may also translate the data from each data source into a common data format used by the store 104C of the system. The acquire module 104A may also acquire a direct copy of the source data in tabular format. The system may have separate database for the storage of the data from each data source. The acquire module 104A may also process data in the well-known format into a common format, such as by using a commercially available tool from Carista (www.caristaapp.com).) from the second site APIs to the specified API (see at least Karl: ¶ [0038-0039]. Karl teaches that the acquire module may include, for example, an application programming interface (“API”) that allows the acquire module to gather the information from each of the healthcare data sources. The acquire module may have other mechanisms to gather/poll each healthcare data source and ingest the healthcare data for that particular healthcare data source (with its possibly unique healthcare data format) into the system for processing and analysis. See also Karl at ¶ [0039]: As new healthcare data sources are going to be ingested into the system, the acquire module 104A may be supplemented with new APIs or other mechanism to be able to retrieve the healthcare data for the new healthcare data source with its own unique data format.) - export the second site data (see at least Karl: ¶ [0037-0040] & ¶ [0046] & ¶ [0050-0051]. 
Karl teaches that each of the modules 104A, 104B and 104D may be alternatively implemented in hardware and each module may be a hardware device that performs the operations and functions of each module as described below. A store 104C of the backend component 104 may be storage for system data, business logic that is part of the conform module 104B that is used to process the incoming healthcare data, the logic used by the analytics module that performs the analysis of the healthcare data and generates the outputs screens and the dashboards, the incoming healthcare data and the conformed healthcare data stored by the system. See also Karl at ¶ [0046]: “The output screens and dashboards generated by the system may be downloaded and displayed on one or more computing devices 106 that can access the backend 104 over a communications path such as a computer network, the Internet, Ethernet, a wireless network and the like. In one implementation, each computing device may execute a well-known browser application and communicate/exchange data with the backend 104 using a known HTML protocol and the backend 104 may have a web server that generates the HTML code that may be sent to each computing device.); - wherein the first and second APIs are selected such that (see at least Karl: ¶ [0038-0039].) the first site data and the second site data (see at least Karl: ¶ [0050-0053] & (Claim 1 of Karl). Karl notes that the scorecard may provide users with a high-level easy-to-read overview of system or individual facility performance. Color-coding the results provides an immediate visual about which metrics the organization is achieving or falling short of performance expectations in any area. The user will accordingly know which areas require additional attention, and the user can use the dashboard to drill into more detail to further investigate issues and conduct root-cause analyses. From the scorecard shown in FIG. 
3, the system may provide the user with a navigation menu that allows the user to navigate to the various performance results of the lab or labs that are owned/controlled/managed by the particular user. See also Claim 1 of Karl reference noting: “Generating, using the plurality of healthcare data metrics, a plurality of dashboards wherein each dashboard displays a particular one of the calculated healthcare data metrics in a color that indicates whether the target for the particular one of the calculated healthcare data metric set forth in the rule was met and each dashboard is active and permits a drill down of the conformed healthcare data that is used to generate the dashboard.”) from the first site APIs (see at least Karl: ¶ [0038-0039]. Karl teaches that the acquire module may include, for example, an application programming interface (“API”) that allows the acquire module to gather the information from each of the healthcare data sources. The acquire module may have other mechanisms to gather/poll each healthcare data source and ingest the healthcare data for that particular healthcare data source (with its possibly unique healthcare data format) into the system for processing and analysis. See also Karl at ¶ [0039]: As new healthcare data sources are going to be ingested into the system, the acquire module 104A may be supplemented with new APIs or other mechanism to be able to retrieve the healthcare data for the new healthcare data source with its own unique data format.) are converted to a format compatible with the specified API (see at least Karl: ¶ [0038-0039]. Karl teaches that the acquire module may include, for example, an application programming interface (“API”) that allows the acquire module to gather the information from each of the healthcare data sources. 
The acquire module may have other mechanisms to gather/poll each healthcare data source and ingest the healthcare data for that particular healthcare data source (with its possibly unique healthcare data format) into the system for processing and analysis. See also Karl at ¶ [0039]: As new healthcare data sources are going to be ingested into the system, the acquire module 104A may be supplemented with new APIs or other mechanism to be able to retrieve the healthcare data for the new healthcare data source with its own unique data format.). Karl's method / non-transitory computer-readable medium for centralized operations management for a plurality of sites does not explicitly disclose, but Nixon, in the analogous art of centralized operations management for a plurality of sites, does disclose the following: - dynamically adjusting a first operational parameter associated with the first site management controller based on the first site data that has been translated from the first site APIs to the specified API, wherein dynamically adjusting the first operational parameter comprises (see at least Nixon: ¶ [0038] & ¶ [0154] & ¶ [0174] & ¶ [0338]. Nixon teaches a first one or more application programming interfaces (APIs) via which the first one or more MEEEs are accessed to obtain a first set of operational parameter values regarding a first one or more operations of the one or more process control or automation systems of the enterprise at a first one or more physical sites. Nixon also teaches a fleet management application executing in the compute fabric, the fleet management application configured to utilize the first and the second APIs to thereby obtain the first and the second sets of operational parameter values.
See also Nixon at ¶ [0154]: The operator or technician may, for example, monitor the run-time operations of an industrial process by receiving, at the operator workstation via the compute fabric, various information associated with operation of the process (e.g., device configuration information, operator displays, process status information, equipment status information and operational parameters, process measurements, setpoints, alerts, and/or other information associated with components 135, 138 of FIG. 1A). The operator or technician may then modify or adjust various aspects of the operations of the process by sending commands from the workstation to modify the aspects of the process (e.g., to advance a step in the process or to modify a control loop in the process, with the control loop potentially itself implemented in the form of a containerized component(s) at a same or different location via the compute fabric). See also Nixon at ¶ [0174]: The HCI Adaptor 430 transforms or translates the APIs 428 utilized by the compute fabric application layer 412 into a set of APIs 432 that are understood or otherwise known to or compatible with the customized or adapted general purpose HC operating system 410 of the compute fabric 400. See also Nixon at ¶ [0338]: The GUI 1010 may enable the user to perform other actions associated with the depicted process (e.g., view additional operational parameters, shut down or recommission the process, perform diagnostics on depicted process equipment, etc.). - determining a first operational condition state of the first site based on the first site data (see at least Nixon: Figs. 10F-10G & ¶ [0332] & ¶ [0411]. 
Nixon notes that the user may specify or define thresholds against which the health of a corresponding entity is to be evaluated, e.g., to indicate the entity as being in good health (i.e., adequate operational state) when a value is above (or below) a threshold, or provide a notification or alert regarding the health of the corresponding entity when the value is below (or above) the threshold. Such health indicators may be configured for any of the physical or logical entities described in the foregoing sections, in various embodiments, and/or for any other physical or logical entity monitored via the NGPCAS using any standard (e.g., DeltaV, APL, HART, WirelessHART, Foundation Fieldbus, OPC UA, etc.). See the diagnostic conditions shown at 1060 of Fig. 10F and the site conditions shown in Fig. 10G for site 1 with system integrity of good and networking of good and site 2 with system integrity of uncertain and networking of good. See also Nixon at ¶ [0332]: “An enterprise-level diagnostics functionality to provide a real-time view of high-level diagnostics of two or more physical sites operated by the enterprise (e.g., indicating whether processes at each site are online, whether site networks are in normal operational states, etc.).”). - comparing the first operational condition state of the first site to a first operational condition target (see at least Nixon: Figs. 10F-10G & ¶ [0332-0334]. Nixon teaches a site comparison functionality to enable a user to compare configurations and performance parameters of two or more sites of the enterprise (e.g., to compare operation of similar-process equipment in multiple sites to determine whether differences in site configurations may be associated with different observed performance parameters of the multiple sites). See also Nixon at ¶ [0410]: “Fleet management application(s) may include dashboards or other visualizations to compare the operation of processes implemented via two or more other systems and/or sites, e.g.
to compare product production quantity and/or quality, process safety or efficiency metrics, or other process information among the two or more systems/sites.”). - providing a first operational adjustment command to the first site management controller based on the comparison of the first operational condition state of the first site to the first operational condition target, wherein the first operational adjustment command is configured to cause execution of a first operational adjustment at the first site (see at least Nixon: ¶ [0132-0133] & ¶ [0154] & ¶ [0223] & Fig. 10F-10G. Nixon teaches that the operator or technician may then modify or adjust various aspects of the operations of the process by sending commands from the workstation to modify the aspects of the process (e.g., to advance a step in the process or to modify a control loop in the process, with the control loop potentially itself implemented in the form of a containerized component(s) at a same or different location via the compute fabric). See also Nixon at ¶ [0132-0133] & Fig. 1B noting “the locations 115A and 118A may have distributed controllers.” See also Nixon at ¶ [0223]: The software defined networking layer 410 automatically, dynamically, and responsively determines, initiates, and performs changes to the allocation of hardware and software resources of the nodes Ny of the computing platform 405 to different application layer software components 412 based on detected conditions, such as improvement in performance of individual logical and/or physical components or groups thereof, degradation of performance of individual logical and/or physical components or groups thereof, fault occurrences, failures of logical and/or physical components, configuration changes (e.g., due to user commands or due to automatic re-configuration by services of the compute fabric 400).) 
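As an illustrative sketch only (not part of the record, and with all identifiers hypothetical), the threshold-based evaluation Nixon describes at ¶¶ [0332] and [0411] amounts to classifying a site metric as healthy when its value falls on the configured side of a user-defined threshold, and then comparing the resulting condition state against an operational condition target:

```python
# Hypothetical sketch of Nixon-style threshold health evaluation; the
# function and variable names are illustrative, not from the reference.

def evaluate_condition(value, threshold, higher_is_better=True):
    """Return 'good' or 'alert' per a user-defined threshold (cf. Nixon ¶ [0411])."""
    ok = value >= threshold if higher_is_better else value <= threshold
    return "good" if ok else "alert"

def compare_to_target(state, target):
    """Compare a site's operational condition state to its condition target."""
    return state == target

# Mirroring Fig. 10G: site 1 meets its integrity target, site 2 does not.
site1_state = evaluate_condition(0.97, threshold=0.95)
site2_state = evaluate_condition(0.80, threshold=0.95)
print(compare_to_target(site1_state, "good"))  # True  (target met)
print(compare_to_target(site2_state, "good"))  # False (adjustment warranted)
```

The same comparison runs per site, which is how a per-site adjustment command can be conditioned on each site's own state.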
- dynamically adjusting a second operational parameter associated with the second site management controller based on the second site data that has been translated from the second site APIs to the specified API, wherein dynamically adjusting the second operational parameter comprises (see at least Nixon: ¶ [0037] & ¶ [0154] & ¶ [0174] & ¶ [0338]. Nixon teaches that the operator or technician may, for example, monitor the run-time operations of an industrial process by receiving, at the operator workstation via the compute fabric, various information associated with operation of the process (e.g., device configuration information, operator displays, process status information, equipment status information and operational parameters, process measurements, setpoints, alerts, and/or other information associated with components 135, 138 of FIG. 1A). The operator or technician may then modify or adjust various aspects of the operations of the process by sending commands from the workstation to modify the aspects of the process (e.g., to advance a step in the process or to modify a control loop in the process, with the control loop potentially itself implemented in the form of a containerized component(s) at a same or different location via the compute fabric). See also Nixon at ¶ [0174]: The HCI Adaptor 430 transforms or translates the APIs 428 utilized by the compute fabric application layer 412 into a set of APIs 432 that are understood or otherwise known to or compatible with the customized or adapted general purpose HC operating system 410 of the compute fabric 400. See also Nixon at ¶ [0338]: The GUI 1010 may enable the user to perform other actions associated with the depicted process (e.g., view additional operational parameters, shut down or recommission the process, perform diagnostics on depicted process equipment, etc.).
Nixon teaches via the second accessing, obtaining, from the second one or more MEEEs, a second set of operational parameter values regarding a second one or more operations of the one or more process control or automation systems of the enterprise at a second one or more physical sites. The method may include additional, fewer, and/or alternate actions, including various actions described in this disclosure. See also Nixon at ¶ [0422-0425]: The first or the second set of operational parameter values comprise one or more health indicators associated with a health of an industrial or automation process associated with the enterprise. See also Nixon at ¶ [0430-0435]: The display of the at least the portion of the first or the second set of operational values is updated, in real-time, at the configured dashboard GUI as the at least the portion of the first or the second set of operational values change within the one or more process control or automation systems. A fourth one or more APIs, a fourth one or more MEEEs executing in the compute fabric to thereby provide a user interface via which users manually specify or define the first or the second set of operational values to be obtained from the first or the second one or more physical sites. A fifth one or more MEEEs executing in the compute fabric to thereby provide a user interface via which users graphically configure a dashboard graphical user interface (GUI) to display at least a portion of the first or the second set of operational values. See also Nixon at ¶ [0435]: A first one or more application programming interfaces (APIs) via which the first one or more MEEEs are accessed to obtain a first set of operational parameter values regarding a first one or more operations of the one or more process control or automation systems of the enterprise at a first one or more physical sites. 
A fleet management application executing in the compute fabric, the fleet management application configured to utilize the first and the second APIs to thereby obtain the first and the second sets of operational parameter values.) - determining a second operational condition state of the second site based on the second site data (see at least Nixon: Figs. 10F-10G & ¶ [0332] & ¶ [0411]. Nixon notes that the user may specify or define thresholds against which the health of a corresponding entity is to be evaluated, e.g., to indicate the entity as being in good health (i.e., adequate operational state) when a value is above (or below) a threshold, or provide a notification or alert regarding the health of the corresponding entity when the value is below (or above) the threshold. Such health indicators may be configured for any of the physical or logical entities described in the foregoing sections, in various embodiments, and/or for any other physical or logical entity monitored via the NGPCAS using any standard (e.g., DeltaV, APL, HART, WirelessHART, Foundation Fieldbus, OPC UA, etc.). See the diagnostic conditions shown at 1060 of Fig. 10F and the site conditions shown in Fig. 10G for site 1 with system integrity of good and networking of good and site 2 with system integrity of uncertain and networking of good. See also Nixon at ¶ [0332]: “An enterprise-level diagnostics functionality to provide a real-time view of high-level diagnostics of two or more physical sites operated by the enterprise (e.g., indicating whether processes at each site are online, whether site networks are in normal operational states, etc.).”). - comparing the second operational condition state of the second site to a second operational condition target (see at least Nixon: Figs. 10F-10G & ¶ [0332-0334].
Nixon teaches a site comparison functionality to enable a user to compare configurations and performance parameters of two or more sites of the enterprise (e.g., to compare operation of similar-process equipment in multiple sites to determine whether differences in site configurations may be associated with different observed performance parameters of the multiple sites). See also Nixon at ¶ [0410]: “Fleet management application(s) may include dashboards or other visualizations to compare the operation of processes implemented via two or more other systems and/or sites, e.g. to compare product production quantity and/or quality, process safety or efficiency metrics, or other process information among the two or more systems/sites.”). - providing a second operational adjustment command to the second site management controller based on the comparison of the second operational condition state of the second site to the second operational condition target, wherein the second operational adjustment command is configured to cause execution of a second operational adjustment at the second site (see at least Nixon: ¶ [0132-0133] & ¶ [0154] & ¶ [0223] & Fig. 10F-10G. Nixon teaches that the operator or technician may then modify or adjust various aspects of the operations of the process by sending commands from the workstation to modify the aspects of the process (e.g., to advance a step in the process or to modify a control loop in the process, with the control loop potentially itself implemented in the form of a containerized component(s) at a same or different location via the compute fabric). See also Nixon at ¶ [0132-0133] & Fig.
1B noting “the locations 115A and 118A may have distributed controllers.” See also Nixon at ¶ [0223]: The software defined networking layer 410 automatically, dynamically, and responsively determines, initiates, and performs changes to the allocation of hardware and software resources of the nodes Ny of the computing platform 405 to different application layer software components 412 based on detected conditions, such as improvement in performance of individual logical and/or physical components or groups thereof, degradation of performance of individual logical and/or physical components or groups thereof, fault occurrences, failures of logical and/or physical components, configuration changes (e.g., due to user commands or due to automatic re-configuration by services of the compute fabric 400).) - wherein the second operational adjustment command is different than the first operational adjustment command (see at least Nixon: Fig. 2B & ¶ [0154] & ¶ [0223]. Nixon notes that, as performance, resource needs, and configurations of the various application layer services, subsystems, and other software components of the application layer 412 dynamically change (and/or are dynamically predicted, by services within the application layer 412, to change), the operating system 410 may automatically and responsively adjust and/or manage the usage of hardware and/or software resources of the physical layer 405 to support the needs and the requirements of the application layer 412 for computing, storage, and networking, as well as for other functionalities related to industrial process control and automation.)
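Taken together, the limitations mapped above amount to: translating each site's data from its native API format into the specified common format (Karl ¶¶ [0038-0039]), comparing each site's condition state to its target, and issuing a per-site adjustment command (Nixon ¶¶ [0154], [0332]). A minimal sketch of that flow, with every name and field hypothetical rather than drawn from either reference:

```python
# Hypothetical end-to-end sketch of the claimed flow; all identifiers
# are illustrative, not from Karl or Nixon.

def translate(site_record, field_map):
    """Translate site-specific API fields into the specified common format."""
    return {common: site_record[native] for native, common in field_map.items()}

def adjustment_command(site_id, state, target):
    """Return a site-specific adjustment command, or None if the target is met."""
    if state == target:
        return None
    return f"ADJUST {site_id}: move state '{state}' toward '{target}'"

# The two sites report the same fact under different native field names.
site1 = translate({"sysIntegrity": "good"}, {"sysIntegrity": "state"})
site2 = translate({"integrity_level": "uncertain"}, {"integrity_level": "state"})

cmd1 = adjustment_command("site-1", site1["state"], target="good")  # None
cmd2 = adjustment_command("site-2", site2["state"], target="good")
# cmd1 and cmd2 differ, as the "different than" limitation requires.
```

Because the field maps are per site while the command logic is shared, the first and second adjustment commands naturally differ whenever the two sites' translated states differ.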
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined / modified the teachings of the Karl method / non-transitory computer-readable medium for centralized operations management for a plurality of sites with the aforementioned teachings of each of the steps shown above, in further view of Nixon, whereby operators or technicians at operator workstations located at first, second, third, etc. physical device locations, or even at other locations that are unassociated with any physical device location, may monitor and perform operations associated with the run-time of portions of an industrial process (e.g., devices, groups of devices, control loops, etc.) executed at different ones of the first, second, third, etc. physical device locations. The operator or technician may then modify or adjust various aspects of the operations of the process by sending commands from the workstation to modify the aspects of the process (e.g., to advance a step in the process or to modify a control loop in the process, with the control loop potentially itself implemented in the form of a containerized component(s) at a same or different location via the compute fabric) (see at least Nixon: ¶ [0154].). Regarding the capabilities of the Next Generation Process Control and Automation System (NGPCAS), the system provider of the NGPCAS may implement further system monitoring and administrative management functionalities with respect to the respective NGPCASs operated by each of one, two, three, four or more enterprises (independently from the enterprises themselves).
Generally speaking, centralized monitoring (and/or optimization) functionalities described herein are provided not only for process control functionalities (e.g., operations directly participating in implementation of a process), but also for quality of service and resource management of NGPCAS resources across an entire enterprise (see at least Nixon: ¶ [0373].). Further, the claimed invention is merely a combination of old elements in a similar field for centralized operations management for a plurality of sites, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Nixon, the results of the combination were predictable. Regarding Independent Claim 8, the Karl system for centralized operations management for a plurality of sites teaches the following: - a memory having processor-readable instructions stored therein (see at least Karl: ¶ [0037] & Fig. 1.) and one or more processors configured to access the memory and execute the processor-readable instructions, which when executed by the one or more processors configures the one or more processors to perform a plurality of functions, including functions for (see at least Karl: ¶ [0037] & Fig. 1.): - configuring a first site management controller of a first site of the plurality of sites, the first site including a plurality of first site tools controlled by the first site management controller to (see at least Karl: Fig. 3 & ¶ [0050-0053] & ¶ [0079]. Karl teaches that the system may generate a different dashboard and mini-card for each lab metric of the user and its facilities/sites.
See also Karl at ¶ [0050]: The scorecard may include a key performance indicator (KPI) portion 302 that lists each metric and a select facility portion 310 that allows the user to change facilities/sites (for any facilities whose data the user is authorized to access), and regenerate the scorecard with the KPIs for the selected facility wherein the selected facility/site may be (for example, a specified hospital). See also Karl at ¶ [0079]: Some other possibilities for implementing aspects include: memory devices, microcontrollers with memory (such as EEPROM), embedded microprocessors, firmware, software, etc.) - provide first site application programming interfaces (APIs) to the first site (see at least Karl: ¶ [0038-0039]. Karl teaches that the acquire module may include, for example, an application programming interface (“API”) that allows the acquire module to gather the information from each of the healthcare data sources. The acquire module may have other mechanisms to gather/poll each healthcare data source and ingest the healthcare data for that particular healthcare data source (with its possibly unique healthcare data format) into the system for processing and analysis. See also Karl at ¶ [0039]: As new healthcare data sources are going to be ingested into the system, the acquire module 104A may be supplemented with new APIs or other mechanism to be able to retrieve the healthcare data for the new healthcare data source with its own unique data format.) to obtain data from the plurality of first site tools (see at least Karl: ¶ [0050-0053] & (Claim 1 of Karl). Karl notes that the scorecard may provide users with a high-level easy-to-read overview of system or individual facility performance. Color-coding the results provides an immediate visual about which metrics the organization is achieving or falling short of performance expectations in any area. 
The user will accordingly know which areas require additional attention, and the user can use the dashboard to drill into more detail to further investigate issues and conduct root-cause analyses. From the scorecard shown in FIG. 3, the system may provide the user with a navigation menu that allows the user to navigate to the various performance results of the lab or labs that are owned/controlled/managed by the particular user. See also Claim 1 of Karl reference noting: “Generating, using the plurality of healthcare data metrics, a plurality of dashboards wherein each dashboard displays a particular one of the calculated healthcare data metrics in a color that indicates whether the target for the particular one of the calculated healthcare data metric set forth in the rule was met and each dashboard is active and permits a drill down of the conformed healthcare data that is used to generate the dashboard.”); - obtain first site data (see at least Karl: ¶ [0050-0053] & (Claim 1 of Karl). Karl notes that the scorecard may provide users with a high-level easy-to-read overview of system or individual facility performance. Color-coding the results provides an immediate visual about which metrics the organization is achieving or falling short of performance expectations in any area. The user will accordingly know which areas require additional attention, and the user can use the dashboard to drill into more detail to further investigate issues and conduct root-cause analyses. From the scorecard shown in FIG. 3, the system may provide the user with a navigation menu that allows the user to navigate to the various performance results of the lab or labs that are owned/controlled/managed by the particular user. 
See also Claim 1 of Karl reference noting: “Generating, using the plurality of healthcare data metrics, a plurality of dashboards wherein each dashboard displays a particular one of the calculated healthcare data metrics in a color that indicates whether the target for the particular one of the calculated healthcare data metric set forth in the rule was met and each dashboard is active and permits a drill down of the conformed healthcare data that is used to generate the dashboard.”) from the first site APIs (see at least Karl: ¶ [0038-0039]. Karl teaches that the acquire module may include, for example, an application programming interface (“API”) that allows the acquire module to gather the information from each of the healthcare data sources. The acquire module may have other mechanisms to gather/poll each healthcare data source and ingest the healthcare data for that particular healthcare data source (with its possibly unique healthcare data format) into the system for processing and analysis. See also Karl at ¶ [0039]: As new healthcare data sources are going to be ingested into the system, the acquire module 104A may be supplemented with new APIs or other mechanism to be able to retrieve the healthcare data for the new healthcare data source with its own unique data format.) - translate the first site data (see at least Karl: ¶ [0039]. Karl teaches that the acquire module 104A may also translate the data from each data source into a common data format used by the store 104C of the system. The acquire module 104A may also acquire a direct copy of the source data in tabular format. The system may have a separate database for the storage of the data from each data source. The acquire module 104A may also process data in the well-known format into a common format, such as by using a commercially available tool from Carista (www.caristaapp.com).) from the first site APIs to a specified API (see at least Karl: ¶ [0038-0039].
Karl teaches that the acquire module may include, for example, an application programming interface (“API”) that allows the acquire module to gather the information from each of the healthcare data sources. The acquire module may have other mechanisms to gather/poll each healthcare data source and ingest the healthcare data for that particular healthcare data source (with its possibly unique healthcare data format) into the system for processing and analysis. See also Karl at ¶ [0039]: As new healthcare data sources are going to be ingested into the system, the acquire module 104A may be supplemented with new APIs or other mechanism to be able to retrieve the healthcare data for the new healthcare data source with its own unique data format.) - export the first site data (see at least Karl: ¶ [0037-0040] & ¶ [0046] & ¶ [0050-0051]. Karl teaches that each of the modules 104A, 104B and 104D may be alternatively implemented in hardware and each module may be a hardware device that performs the operations and functions of each module as described below. A store 104C of the backend component 104 may be storage for system data, business logic that is part of the conform module 104B that is used to process the incoming healthcare data, the logic used by the analytics module that performs the analysis of the healthcare data and generates the outputs screens and the dashboards, the incoming healthcare data and the conformed healthcare data stored by the system. See also Karl at ¶ [0046]: “The output screens and dashboards generated by the system may be downloaded and displayed on one or more computing devices 106 that can access the backend 104 over a communications path such as a computer network, the Internet, Ethernet, a wireless network and the like. 
In one implementation, each computing device may execute a well-known browser application and communicate/exchange data with the backend 104 using a known HTML protocol and the backend 104 may have a web server that generates the HTML code that may be sent to each computing device.); - configuring a second site management controller of a second site of the plurality of sites, the second site including a plurality of second site tools controlled by the second site management controller to (see at least Karl: Fig. 3 & ¶ [0050-0053] & ¶ [0079]. Karl teaches that the system may generate a different dashboard and mini-card for each lab metric of the user and its facilities/sites. See also Karl at ¶ [0050]: The scorecard may include a key performance indicator (KPI) portion 302 that lists each metric and a select facility portion 310 that allows the user to change facilities/sites (for any facilities whose data the user is authorized to access), and regenerate the scorecard with the KPIs for the selected facility wherein the selected facility/site may be (for example, a specified hospital). See also Karl at ¶ [0079]: Some other possibilities for implementing aspects include: memory devices, microcontrollers with memory (such as EEPROM), embedded microprocessors, firmware, software, etc.) - provide second site APIs to the second site (see at least Karl: ¶ [0038-0039]. Karl teaches that the acquire module may include, for example, an application programming interface (“API”) that allows the acquire module to gather the information from each of the healthcare data sources. The acquire module may have other mechanisms to gather/poll each healthcare data source and ingest the healthcare data for that particular healthcare data source (with its possibly unique healthcare data format) into the system for processing and analysis. 
See also Karl at ¶ [0039]: As new healthcare data sources are going to be ingested into the system, the acquire module 104A may be supplemented with new APIs or other mechanism to be able to retrieve the healthcare data for the new healthcare data source with its own unique data format.) to obtain data from the plurality of second site tools (see at least Karl: ¶ [0050-0053] & (Claim 1 of Karl). Karl notes that the scorecard may provide users with a high-level easy-to-read overview of system or individual facility performance. Color-coding the results provides an immediate visual about which metrics the organization is achieving or falling short of performance expectations in any area. The user will accordingly know which areas require additional attention, and the user can use the dashboard to drill into more detail to further investigate issues and conduct root-cause analyses. From the scorecard shown in FIG. 3, the system may provide the user with a navigation menu that allows the user to navigate to the various performance results of the lab or labs that are owned/controlled/managed by the particular user. See also Claim 1 of Karl reference noting: “Generating, using the plurality of healthcare data metrics, a plurality of dashboards wherein each dashboard displays a particular one of the calculated healthcare data metrics in a color that indicates whether the target for the particular one of the calculated healthcare data metric set forth in the rule was met and each dashboard is active and permits a drill down of the conformed healthcare data that is used to generate the dashboard.”); - obtain second site data (see at least Karl: ¶ [0050-0053] & (Claim 1 of Karl). Karl notes that the scorecard may provide users with a high-level easy-to-read overview of system or individual facility performance. Color-coding the results provides an immediate visual about which metrics the organization is achieving or falling short of performance expectations in any area. 
The user will accordingly know which areas require additional attention, and the user can use the dashboard to drill into more detail to further investigate issues and conduct root-cause analyses. From the scorecard shown in FIG. 3, the system may provide the user with a navigation menu that allows the user to navigate to the various performance results of the lab or labs that are owned/controlled/managed by the particular user. See also Claim 1 of Karl reference noting: “Generating, using the plurality of healthcare data metrics, a plurality of dashboards wherein each dashboard displays a particular one of the calculated healthcare data metrics in a color that indicates whether the target for the particular one of the calculated healthcare data metric set forth in the rule was met and each dashboard is active and permits a drill down of the conformed healthcare data that is used to generate the dashboard.”) from the second site APIs (see at least Karl: ¶ [0038-0039]. Karl teaches that the acquire module may include, for example, an application programming interface (“API”) that allows the acquire module to gather the information from each of the healthcare data sources. The acquire module may have other mechanisms to gather/poll each healthcare data source and ingest the healthcare data for that particular healthcare data source (with its possibly unique healthcare data format) into the system for processing and analysis. See also Karl at ¶ [0039]: As new healthcare data sources are going to be ingested into the system, the acquire module 104A may be supplemented with new APIs or other mechanism to be able to retrieve the healthcare data for the new healthcare data source with its own unique data format.) - translate the second site data (see at least Karl: ¶ [0039]: Karl teaches that the acquire module 104A may also translate the data from each data source into a common data format used by the store 104C of the system. 
The acquire module 104A may also acquire a direct copy of the source data in tabular format. The system may have a separate database for the storage of the data from each data source. The acquire module 104A may also process data in the well-known format into a common format, such as by using a commercially available tool from Carista (www.caristaapp.com).) from the second site APIs to the specified API (see at least Karl: ¶ [0038-0039]. Karl teaches that the acquire module may include, for example, an application programming interface (“API”) that allows the acquire module to gather the information from each of the healthcare data sources. The acquire module may have other mechanisms to gather/poll each healthcare data source and ingest the healthcare data for that particular healthcare data source (with its possibly unique healthcare data format) into the system for processing and analysis. See also Karl at ¶ [0039]: As new healthcare data sources are going to be ingested into the system, the acquire module 104A may be supplemented with new APIs or other mechanism to be able to retrieve the healthcare data for the new healthcare data source with its own unique data format.) - export the second site data (see at least Karl: ¶ [0037-0040] & ¶ [0046] & ¶ [0050-0051]. Karl teaches that each of the modules 104A, 104B and 104D may be alternatively implemented in hardware and each module may be a hardware device that performs the operations and functions of each module as described below. A store 104C of the backend component 104 may be storage for system data, business logic that is part of the conform module 104B that is used to process the incoming healthcare data, the logic used by the analytics module that performs the analysis of the healthcare data and generates the outputs screens and the dashboards, the incoming healthcare data and the conformed healthcare data stored by the system.
See also Karl at ¶ [0046]: “The output screens and dashboards generated by the system may be downloaded and displayed on one or more computing devices 106 that can access the backend 104 over a communications path such as a computer network, the Internet, Ethernet, a wireless network and the like. In one implementation, each computing device may execute a well-known browser application and communicate/exchange data with the backend 104 using a known HTML protocol and the backend 104 may have a web server that generates the HTML code that may be sent to each computing device.); - wherein the first and second APIs are selected such that (see at least Karl: ¶ [0038-0039].) the first site data and the second site data (see at least Karl: ¶ [0050-0053] & (Claim 1 of Karl). Karl notes that the scorecard may provide users with a high-level easy-to-read overview of system or individual facility performance. Color-coding the results provides an immediate visual about which metrics the organization is achieving or falling short of performance expectations in any area. The user will accordingly know which areas require additional attention, and the user can use the dashboard to drill into more detail to further investigate issues and conduct root-cause analyses. From the scorecard shown in FIG. 3, the system may provide the user with a navigation menu that allows the user to navigate to the various performance results of the lab or labs that are owned/controlled/managed by the particular user. 
See also Claim 1 of Karl reference noting: “Generating, using the plurality of healthcare data metrics, a plurality of dashboards wherein each dashboard displays a particular one of the calculated healthcare data metrics in a color that indicates whether the target for the particular one of the calculated healthcare data metric set forth in the rule was met and each dashboard is active and permits a drill down of the conformed healthcare data that is used to generate the dashboard.”) from the first site APIs (see at least Karl: ¶ [0038-0039]. Karl teaches that the acquire module may include, for example, an application programming interface (“API”) that allows the acquire module to gather the information from each of the healthcare data sources. The acquire module may have other mechanisms to gather/poll each healthcare data source and ingest the healthcare data for that particular healthcare data source (with its possibly unique healthcare data format) into the system for processing and analysis. See also Karl at ¶ [0039]: As new healthcare data sources are going to be ingested into the system, the acquire module 104A may be supplemented with new APIs or other mechanism to be able to retrieve the healthcare data for the new healthcare data source with its own unique data format.) are converted to a format compatible with the specified API (see at least Karl: ¶ [0038-0039]. Karl teaches that the acquire module may include, for example, an application programming interface (“API”) that allows the acquire module to gather the information from each of the healthcare data sources. The acquire module may have other mechanisms to gather/poll each healthcare data source and ingest the healthcare data for that particular healthcare data source (with its possibly unique healthcare data format) into the system for processing and analysis. 
See also Karl at ¶ [0039]: As new healthcare data sources are going to be ingested into the system, the acquire module 104A may be supplemented with new APIs or other mechanism to be able to retrieve the healthcare data for the new healthcare data source with its own unique data format.). The Karl system for centralized operations management for a plurality of sites does not explicitly disclose, but Nixon, in the analogous art of centralized operations management for a plurality of sites, does disclose the following: - dynamically adjusting a first operational parameter associated with the first site management controller based on the first site data that has been translated from the first site APIs to the specified API, wherein dynamically adjusting the first operational parameter comprises (see at least Nixon: ¶ [0038] & ¶ [0154] & ¶ [0174] & ¶ [0338]. Nixon teaches a first one or more application programming interfaces (APIs) via which the first one or more MEEEs are accessed to obtain a first set of operational parameter values regarding a first one or more operations of the one or more process control or automation systems of the enterprise at a first one or more physical sites. A fleet management application executing in the compute fabric, the fleet management application configured to utilize the first and the second APIs to thereby obtain the first and the second sets of operational parameter values. See also Nixon at ¶ [0154]: The operator or technician may, for example, monitor the run-time operations of an industrial process by receiving, at the operator workstation via the compute fabric, various information associated with operation of the process (e.g., device configuration information, operator displays, process status information, equipment status information and operational parameters, process measurements, setpoints, alerts, and/or other information associated with components 135, 138 of FIG. 1A).
The operator or technician may then modify or adjust various aspects of the operations of the process by sending commands from the workstation to modify the aspects of the process (e.g., to advance a step in the process or to modify a control loop in the process, with the control loop potentially itself implemented in the form of a containerized component(s) at a same or different location via the compute fabric). See also Nixon at ¶ [0174]: The HCI Adaptor 430 transforms or translates the APIs 428 utilized by the compute fabric application layer 412 into a set of APIs 432 that are understood or otherwise known to or compatible with the customized or adapted general purpose HC operating system 410 of the compute fabric 400. See also Nixon at ¶ [0338]: The GUI 1010 may enable the user to perform other actions associated with the depicted process (e.g., view additional operational parameters, shut down or recommission the process, perform diagnostics on depicted process equipment, etc.). - determining a first operational condition state of the first site based on the first site data (see at least Nixon: Figs. 10F-10G & ¶ [0332] & ¶ [0411]. Nixon notes that the user may specify or define thresholds against which the health of a corresponding entity is to be evaluated, e.g., to indicate the entity as being in good health (i.e., adequate operational state) when a value is above (or below) a threshold, or provide a notification or alert regarding the health of the corresponding entity when the value is below (or above) the threshold. Such health indicators may be configured for any of the physical or logical entities described in the foregoing sections, in various embodiments, and/or for any other physical or logical entity monitored via the NGPCAS using any standard (e.g., DeltaV, APL, HART, WirelessHART, Foundation Fieldbus, OPC UA, etc.). See the diagnostic conditions shown at 1060 of Fig. 10F and site conditions shown in Fig.
10G for Site 1 (system integrity: Good; networking: Good) and Site 2 (system integrity: Uncertain; networking: Good). See also Nixon at ¶ [0332]: “An enterprise-level diagnostics functionality to provide a real-time view of high-level diagnostics of two or more physical sites operated by the enterprise (e.g., indicating whether processes at each site are online, whether site networks are in normal operational states, etc.).”). - comparing the first operational condition state of the first site to a first operational condition target (see at least Nixon: Figs. 10F-10G & ¶ [0332-0334]. Nixon teaches a site comparison functionality that enables a user to compare configurations and performance parameters of two or more sites of the enterprise (e.g., to compare operation of similar-process equipment in multiple sites to determine whether differences in site configurations may be associated with different observed performance parameters of the multiple sites). See also Nixon at ¶ [0410]: “Fleet management application(s) may include dashboards or other visualizations to compare the operation of processes implemented via two or more other systems and/or sites, e.g. to compare product production quantity and/or quality, process safety or efficiency metrics, or other process information among the two or more systems/sites.”). - providing a first operational adjustment command to the first site management controller based on the comparison of the first operational condition state of the first site to the first operational condition target, wherein the first operational adjustment command is configured to cause execution of a first operational adjustment at the first site (see at least Nixon: ¶ [0132-0133] & ¶ [0154] & ¶ [0223] & Fig. 10F-10G.
Nixon teaches that the operator or technician may then modify or adjust various aspects of the operations of the process by sending commands from the workstation to modify the aspects of the process (e.g., to advance a step in the process or to modify a control loop in the process, with the control loop potentially itself implemented in the form of a containerized component(s) at a same or different location via the compute fabric). See also Nixon at ¶ [0132-0133] & Fig. 1B noting “the locations 115A and 118A may have distributed controllers.” See also Nixon at ¶ [0223]: The software defined networking layer 410 automatically, dynamically, and responsively determines, initiates, and performs changes to the allocation of hardware and software resources of the nodes Ny of the computing platform 405 to different application layer software components 412 based on detected conditions, such as improvement in performance of individual logical and/or physical components or groups thereof, degradation of performance of individual logical and/or physical components or groups thereof, fault occurrences, failures of logical and/or physical components, configuration changes (e.g., due to user commands or due to automatic re-configuration by services of the compute fabric 400).) - dynamically adjusting a second operational parameter associated with the second site management controller based on the second site data that has been translated from the second site APIs to the specified API, wherein dynamically adjusting the second operational parameter comprises (see at least Nixon: ¶ [0037] & ¶ [0154] & ¶ [0174] & ¶ [0338]. 
Nixon teaches that the operator or technician may, for example, monitor the run-time operations of an industrial process by receiving, at the operator workstation via the compute fabric, various information associated with operation of the process (e.g., device configuration information, operator displays, process status information, equipment status information and operational parameters, process measurements, setpoints, alerts, and/or other information associated with components 135, 138 of FIG. 1A). The operator or technician may then modify or adjust various aspects of the operations of the process by sending commands from the workstation to modify the aspects of the process (e.g., to advance a step in the process or to modify a control loop in the process, with the control loop potentially itself implemented in the form of a containerized component(s) at a same or different location via the compute fabric). See also Nixon at ¶ [0174]: The HCI Adaptor 430 transforms or translates the APIs 428 utilized by the compute fabric application layer 412 into a set of APIs 432 that are understood or otherwise known to or compatible with the customized or adapted general purpose HC operating system 410 of the compute fabric 400. See also Nixon at ¶ [0338]: The GUI 1010 may enable the user to perform other actions associated with the depicted process (e.g., view additional operational parameters, shut down or recommission the process, perform diagnostics on depicted process equipment, etc.). Nixon teaches, via the second accessing, obtaining, from the second one or more MEEEs, a second set of operational parameter values regarding a second one or more operations of the one or more process control or automation systems of the enterprise at a second one or more physical sites. The method may include additional, fewer, and/or alternate actions, including various actions described in this disclosure.
See also Nixon at ¶ [0422-0425]: The first or the second set of operational parameter values comprise one or more health indicators associated with a health of an industrial or automation process associated with the enterprise. See also Nixon at ¶ [0430-0435]: The display of the at least the portion of the first or the second set of operational values is updated, in real-time, at the configured dashboard GUI as the at least the portion of the first or the second set of operational values change within the one or more process control or automation systems. A fourth one or more APIs, a fourth one or more MEEEs executing in the compute fabric to thereby provide a user interface via which users manually specify or define the first or the second set of operational values to be obtained from the first or the second one or more physical sites. A fifth one or more MEEEs executing in the compute fabric to thereby provide a user interface via which users graphically configure a dashboard graphical user interface (GUI) to display at least a portion of the first or the second set of operational values. See also Nixon at ¶ [0435]: A first one or more application programming interfaces (APIs) via which the first one or more MEEEs are accessed to obtain a first set of operational parameter values regarding a first one or more operations of the one or more process control or automation systems of the enterprise at a first one or more physical sites. A fleet management application executing in the compute fabric, the fleet management application configured to utilize the first and the second APIs to thereby obtain the first and the second sets of operational parameter values.) - determining a second operational condition state of the second site based on the second site data (see at least Nixon: Figs. 10F-10G & ¶ [0332] & ¶ [0411]. 
Nixon notes that the user may specify or define thresholds against which the health of a corresponding entity is to be evaluated, e.g., to indicate the entity as being in good health (i.e., adequate operational state) when a value is above (or below) a threshold, or provide a notification or alert regarding the health of the corresponding entity when the value is below (or above) the threshold. Such health indicators may be configured for any of the physical or logical entities described in the foregoing sections, in various embodiments, and/or for any other physical or logical entity monitored via the NGPCAS using any standard (e.g., DeltaV, APL, HART, WirelessHART, Foundation Fieldbus, OPC UA, etc.). See the diagnostic conditions shown at 1060 of Fig. 10F and site conditions shown in Fig. 10G for Site 1 (system integrity: Good; networking: Good) and Site 2 (system integrity: Uncertain; networking: Good). See also Nixon at ¶ [0332]: “An enterprise-level diagnostics functionality to provide a real-time view of high-level diagnostics of two or more physical sites operated by the enterprise (e.g., indicating whether processes at each site are online, whether site networks are in normal operational states, etc.).”). - comparing the second operational condition state of the second site to a second operational condition target (see at least Nixon: Figs. 10F-10G & ¶ [0332-0334]. Nixon teaches a site comparison functionality that enables a user to compare configurations and performance parameters of two or more sites of the enterprise (e.g., to compare operation of similar-process equipment in multiple sites to determine whether differences in site configurations may be associated with different observed performance parameters of the multiple sites). See also Nixon at ¶ [0410]: “Fleet management application(s) may include dashboards or other visualizations to compare the operation of processes implemented via two or more other systems and/or sites, e.g.
to compare product production quantity and/or quality, process safety or efficiency metrics, or other process information among the two or more systems/sites.”). - providing a second operational adjustment command to the second site management controller based on the comparison of the second operational condition state of the second site to the second operational condition target, wherein the second operational adjustment command is configured to cause execution of a second operational adjustment at the second site (see at least Nixon: ¶ [0132-0133] & ¶ [0154] & ¶ [0223] & Fig. 10F-10G. Nixon teaches that the operator or technician may then modify or adjust various aspects of the operations of the process by sending commands from the workstation to modify the aspects of the process (e.g., to advance a step in the process or to modify a control loop in the process, with the control loop potentially itself implemented in the form of a containerized component(s) at a same or different location via the compute fabric). See also Nixon at ¶ [0132-0133] & Fig. 1B noting “the locations 115A and 118A may have distributed controllers.” See also Nixon at ¶ [0223]: The software defined networking layer 410 automatically, dynamically, and responsively determines, initiates, and performs changes to the allocation of hardware and software resources of the nodes Ny of the computing platform 405 to different application layer software components 412 based on detected conditions, such as improvement in performance of individual logical and/or physical components or groups thereof, degradation of performance of individual logical and/or physical components or groups thereof, fault occurrences, failures of logical and/or physical components, configuration changes (e.g., due to user commands or due to automatic re-configuration by services of the compute fabric 400).)
- wherein the second operational adjustment command is different than the first operational adjustment command (see at least Nixon: Fig. 2B & ¶ [0154] & ¶ [0223]. Nixon notes that, as performance, resource needs, and configurations of the various application layer services, subsystems, and other software components of the application layer 412 dynamically change (and/or are dynamically predicted, by services within the application layer 412, to change), the operating system 410 may automatically and responsively adjust and/or manage the usage of hardware and/or software resources of the physical layer 405 to support the needs and the requirements of the application layer 412 for computing, storage, and networking, as well as for other functionalities related to industrial process control and automation.) It would have been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined / modified the teachings of the Karl system for centralized operations management for a plurality of sites with the aforementioned teachings of Nixon as to each of the steps shown above, whereby operators or technicians at operator workstations located at first, second, third, etc. physical device locations or even at other locations that are unassociated with any physical device location may monitor and perform operations associated with the run-time of portions of an industrial process (e.g., devices, groups of devices, control loops, etc.) executed at different ones of the first, second, third, etc. physical device locations.
The operator or technician may then modify or adjust various aspects of the operations of the process by sending commands from the workstation to modify the aspects of the process (e.g., to advance a step in the process or to modify a control loop in the process, with the control loop potentially itself implemented in the form of a containerized component(s) at a same or different location via the compute fabric) (see at least Nixon: ¶ [0154].). Regarding the capabilities of the Next Generation Process Control and Automation System (NGPCAS), the system provider of the NGPCAS may implement further system monitoring and administrative management functionalities with respect to respective NGPCASs operated by each of one, two, three, four or more enterprises (independently from the enterprises themselves). Generally speaking, centralized monitoring (and/or optimization) functionalities described herein are provided not only for process control functionalities (e.g., operations directly participating in implementation of a process), but also for quality of service and resource management of NGPCAS resources across an entire enterprise (see at least Nixon: ¶ [0373].). Further, the claimed invention is merely a combination of old elements in a similar field for centralized operations management for a plurality of sites and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Nixon, the results of the combination were predictable.
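For illustrative purposes only, the claimed flow as mapped above to Karl (site-specific API adapters translating heterogeneous site data into a specified/common format, ¶ [0038-0039]) and to Nixon (threshold-based operational condition states and adjustment commands, ¶ [0154] & ¶ [0411]) can be sketched in Python. All names (`SiteAdapter`, `condition_state`, `adjustment_command`) and all data values are hypothetical assumptions of the editor; neither reference discloses source code.

```python
# Hypothetical sketch of the mapped claim flow; not code from Karl or Nixon.
from dataclasses import dataclass

@dataclass
class CommonRecord:
    """A datum in the common/specified format (cf. Karl ¶ [0039])."""
    site_id: str
    metric: str
    value: float

class SiteAdapter:
    """Per-site API adapter (cf. Karl ¶ [0038-0039]): gathers data from a
    site's native API and translates it into the common format."""
    def __init__(self, site_id, fetch, translate):
        self.site_id = site_id
        self._fetch = fetch          # site-specific API call
        self._translate = translate  # native format -> common dict

    def obtain(self):
        return [CommonRecord(self.site_id, m, v)
                for m, v in self._translate(self._fetch()).items()]

def condition_state(records, metric, threshold):
    """Threshold-based health evaluation (cf. Nixon ¶ [0411])."""
    value = next(r.value for r in records if r.metric == metric)
    return "good" if value >= threshold else "degraded"

def adjustment_command(site_id, state, target="good"):
    """Issue an operational adjustment command when the state misses the
    target (cf. Nixon ¶ [0154]: commands sent to modify the process)."""
    if state != target:
        return {"site": site_id, "action": "adjust_control_loop"}
    return None

# Two sites with different native formats feeding one specified format.
site1 = SiteAdapter("site1", lambda: {"temp_f": 150.0},
                    lambda raw: {"throughput": raw["temp_f"]})
site2 = SiteAdapter("site2", lambda: [("throughput", 40.0)],
                    lambda raw: dict(raw))

for site in (site1, site2):
    data = site.obtain()
    state = condition_state(data, "throughput", threshold=50.0)
    cmd = adjustment_command(site.site_id, state)
```

In this sketch the two adapters produce different commands (none for the healthy site, an adjustment for the degraded one), mirroring the "different adjustment command" limitation mapped above.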
Regarding Dependent Claims 2, 9 and 16, Karl / Nixon method / system / non-transitory computer-readable medium for centralized operations management for a plurality of sites teaches the limitations of Independent Claims 1, 8 and 15 above, and Karl further teaches the method / system / non-transitory computer-readable medium for centralized operations management for a plurality of sites comprising: - configuring a third site management controller of a third site of the plurality of sites, the third site including a plurality of third site tools controlled by the third site management controller to (see at least Karl: Fig. 3 & ¶ [0050-0053] & ¶ [0079]. Karl teaches that the system may generate a different dashboard and mini-card for each lab metric of the user and its facilities/sites. See also Karl at ¶ [0050]: The scorecard may include a key performance indicator (KPI) portion 302 that lists each metric and a select facility portion 310 that allows the user to change facilities/sites (for any facilities whose data the user is authorized to access), and regenerate the scorecard with the KPIs for the selected facility wherein the selected facility/site may be (for example, a specified hospital). See also Karl at ¶ [0079]: Some other possibilities for implementing aspects include: memory devices, microcontrollers with memory (such as EEPROM), embedded microprocessors, firmware, software, etc.); - provide third site APIs to the third site (see at least Karl: ¶ [0038-0039]. Karl teaches that the acquire module may include, for example, an application programming interface (“API”) that allows the acquire module to gather the information from each of the healthcare data sources. The acquire module may have other mechanisms to gather/poll each healthcare data source and ingest the healthcare data for that particular healthcare data source (with its possibly unique healthcare data format) into the system for processing and analysis. 
See also Karl at ¶ [0039]: As new healthcare data sources are going to be ingested into the system, the acquire module 104A may be supplemented with new APIs or other mechanism to be able to retrieve the healthcare data for the new healthcare data source with its own unique data format.) to obtain data from the plurality of third site tools (see at least Karl: ¶ [0050-0053] & (Claim 1 of Karl). Karl notes that the scorecard may provide users with a high-level easy-to-read overview of system or individual facility performance. Color-coding the results provides an immediate visual about which metrics the organization is achieving or falling short of performance expectations in any area. The user will accordingly know which areas require additional attention, and the user can use the dashboard to drill into more detail to further investigate issues and conduct root-cause analyses. From the scorecard shown in FIG. 3, the system may provide the user with a navigation menu that allows the user to navigate to the various performance results of the lab or labs that are owned/controlled/managed by the particular user. See also Claim 1 of Karl reference noting: “Generating, using the plurality of healthcare data metrics, a plurality of dashboards wherein each dashboard displays a particular one of the calculated healthcare data metrics in a color that indicates whether the target for the particular one of the calculated healthcare data metric set forth in the rule was met and each dashboard is active and permits a drill down of the conformed healthcare data that is used to generate the dashboard.”); - obtain third site data (see at least Karl: ¶ [0050-0053] & (Claim 1 of Karl). Karl notes that the scorecard may provide users with a high-level easy-to-read overview of system or individual facility performance. Color-coding the results provides an immediate visual about which metrics the organization is achieving or falling short of performance expectations in any area. 
The user will accordingly know which areas require additional attention, and the user can use the dashboard to drill into more detail to further investigate issues and conduct root-cause analyses. From the scorecard shown in FIG. 3, the system may provide the user with a navigation menu that allows the user to navigate to the various performance results of the lab or labs that are owned/controlled/managed by the particular user. See also Claim 1 of Karl reference noting: “Generating, using the plurality of healthcare data metrics, a plurality of dashboards wherein each dashboard displays a particular one of the calculated healthcare data metrics in a color that indicates whether the target for the particular one of the calculated healthcare data metric set forth in the rule was met and each dashboard is active and permits a drill down of the conformed healthcare data that is used to generate the dashboard.”) from the third site APIs (see at least Karl: ¶ [0038-0039]. Karl teaches that the acquire module may include, for example, an application programming interface (“API”) that allows the acquire module to gather the information from each of the healthcare data sources. The acquire module may have other mechanisms to gather/poll each healthcare data source and ingest the healthcare data for that particular healthcare data source (with its possibly unique healthcare data format) into the system for processing and analysis. See also Karl at ¶ [0039]: As new healthcare data sources are going to be ingested into the system, the acquire module 104A may be supplemented with new APIs or other mechanism to be able to retrieve the healthcare data for the new healthcare data source with its own unique data format.) - translate the third site data (see at least Karl: ¶ [0039]: Karl teaches that the acquire module 104A may also translate the data from each data source into a common data format used by the store 104C of the system. 
The acquire module 104A may also acquire a direct copy of the source data in tabular format. The system may have a separate database for the storage of the data from each data source. The acquire module 104A may also process data in the well-known format into a common format, such as by using a commercially available tool from Carista (www.caristaapp.com).) from the third site APIs to the specified API (see at least Karl: ¶ [0038-0039]. Karl teaches that the acquire module may include, for example, an application programming interface (“API”) that allows the acquire module to gather the information from each of the healthcare data sources. The acquire module may have other mechanisms to gather/poll each healthcare data source and ingest the healthcare data for that particular healthcare data source (with its possibly unique healthcare data format) into the system for processing and analysis. See also Karl at ¶ [0039]: As new healthcare data sources are going to be ingested into the system, the acquire module 104A may be supplemented with new APIs or other mechanism to be able to retrieve the healthcare data for the new healthcare data source with its own unique data format.) - export the third site data (see at least Karl: ¶ [0037-0040] & ¶ [0046] & ¶ [0050-0051]. Karl teaches that each of the modules 104A, 104B and 104D may be alternatively implemented in hardware and each module may be a hardware device that performs the operations and functions of each module as described below. A store 104C of the backend component 104 may be storage for system data, business logic that is part of the conform module 104B that is used to process the incoming healthcare data, the logic used by the analytics module that performs the analysis of the healthcare data and generates the outputs screens and the dashboards, the incoming healthcare data and the conformed healthcare data stored by the system.
See also Karl at ¶ [0046]: “The output screens and dashboards generated by the system may be downloaded and displayed on one or more computing devices 106 that can access the backend 104 over a communications path such as a computer network, the Internet, Ethernet, a wireless network and the like. In one implementation, each computing device may execute a well-known browser application and communicate/exchange data with the backend 104 using a known HTML protocol and the backend 104 may have a web server that generates the HTML code that may be sent to each computing device.). - wherein the third site APIs are selected such that (see at least Karl: ¶ [0038-0039].) the third site data is converted to a format compatible with the specified API (see at least Karl: ¶ [0038-0039]. Karl teaches that the acquire module may include, for example, an application programming interface (“API”) that allows the acquire module to gather the information from each of the healthcare data sources. The acquire module may have other mechanisms to gather/poll each healthcare data source and ingest the healthcare data for that particular healthcare data source (with its possibly unique healthcare data format) into the system for processing and analysis. See also Karl at ¶ [0039]: As new healthcare data sources are going to be ingested into the system, the acquire module 104A may be supplemented with new APIs or other mechanism to be able to retrieve the healthcare data for the new healthcare data source with its own unique data format.) as the first site data and the second site data (see at least Karl: ¶ [0050-0053] & (Claim 1 of Karl). Karl notes that the scorecard may provide users with a high-level easy-to-read overview of system or individual facility performance. Color-coding the results provides an immediate visual about which metrics the organization is achieving or falling short of performance expectations in any area. The user will accordingly know which areas require additional attention, and the user can use the dashboard to drill into more detail to further investigate issues and conduct root-cause analyses. From the scorecard shown in FIG. 3, the system may provide the user with a navigation menu that allows the user to navigate to the various performance results of the lab or labs that are owned/controlled/managed by the particular user.
See also Claim 1 of the Karl reference, noting: “Generating, using the plurality of healthcare data metrics, a plurality of dashboards wherein each dashboard displays a particular one of the calculated healthcare data metrics in a color that indicates whether the target for the particular one of the calculated healthcare data metric set forth in the rule was met and each dashboard is active and permits a drill down of the conformed healthcare data that is used to generate the dashboard.”)

Examiner Note: Examiner points to MPEP § 2112: “Where applicant claims a composition in terms of a function, property or characteristic and the composition of the prior art is the same as that of the claim but the function is not explicitly disclosed by the reference, the examiner may make a rejection under both 35 U.S.C. 102 and 103. ‘There is nothing inconsistent in concurrent rejections for obviousness under 35 U.S.C. 103 and for anticipation under 35 U.S.C. 102.’ In re Best, 562 F.2d 1252, 1255 n.4, 195 USPQ 430, 433 n.4 (CCPA 1977). This same rationale should also apply to product, apparatus, and process claims claimed in terms of function, property or characteristic. Therefore, a 35 U.S.C. 102 and 103 rejection is appropriate for these types of claims as well as for composition claims.”

Alternatively, the analysis is viewed through the lens of a person of ordinary skill in the art at the time of the invention. If the description of the prior art (e.g., a “plurality of sites”) implicitly conveys the existence of “first,” “second,” and “third” sites in a way that is clear to this person, the limitations are inherent. Examiner notes that the Karl reference discloses or teaches multiple site selections, referring to a first site and a second site, although it does not explicitly recite “a third site.” Furthermore, the Karl reference teaches one or more application programming interfaces (e.g., one or more APIs) at each site, although it does not explicitly recite “a third API.”
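The format-conversion mechanism the examiner maps from Karl ¶ [0038-0039] — an acquire module holding one API adapter per data source, each converting that source's unique format into a common one — can be sketched as follows. This is a minimal illustration only; the class names, payload shapes, and metric names are hypothetical assumptions, not taken from Karl or the claims.

```python
# Minimal sketch of an "acquire module" pattern: one adapter per site/data
# source, each converting its native payload into a common format.
# All names and payload shapes are hypothetical illustrations.
from abc import ABC, abstractmethod


class SiteAdapter(ABC):
    """One adapter (API) per site, handling that site's unique data format."""

    @abstractmethod
    def fetch(self) -> dict:
        """Return raw site data in the site's native format."""

    @abstractmethod
    def to_common(self, raw: dict) -> dict:
        """Convert native-format data into the common (specified-API) format."""


class DelimitedSiteAdapter(SiteAdapter):
    """A site whose native format is a delimited key=value string."""

    def fetch(self) -> dict:
        return {"metrics": "tat_stat=42;quality=0.97"}

    def to_common(self, raw: dict) -> dict:
        pairs = dict(p.split("=") for p in raw["metrics"].split(";"))
        return {"site_metrics": {k: float(v) for k, v in pairs.items()}}


class JsonSiteAdapter(SiteAdapter):
    """A site whose native format already matches the common format."""

    def fetch(self) -> dict:
        return {"site_metrics": {"tat_stat": 40.0, "quality": 0.95}}

    def to_common(self, raw: dict) -> dict:
        return raw


def acquire(adapters):
    """Ingest every registered site's data in the common format."""
    return [a.to_common(a.fetch()) for a in adapters]
```

Under this pattern, supporting a third site amounts to registering a third adapter whose `to_common` targets the same specified format, which mirrors the examiner's reading of Karl ¶ [0039] (new sources are handled by supplementing the acquire module with new APIs).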
The Karl reference also teaches translating the first site data and the second site data (e.g., where the first and second site data are referred to as performance metrics or performance indicators/values) with a specified API into a common or specified format. Additionally, the Karl reference in view of the Nixon reference is in the analogous field of art or field of use of “benchmarking” analysis of a plurality of sites at different site/facility (e.g., in this case, hospital or clinic) locations. Therefore, it is reasonable to conclude that one of ordinary skill in the art would find that the Karl reference in view of the Nixon reference, under a 35 U.S.C. 103 rejection rationale, provides a reasonable factual basis and/or technical reasoning to support the determination that the allegedly inherent characteristics of Dependent Claims 2, 9 and 16 necessarily flow from the teachings of the applied prior art. Ex parte Levy, 17 USPQ2d 1461, 1464 (Bd. Pat. App. & Inter. 1990) (emphasis in original).

Regarding Dependent Claims 3, 10 and 17, the Karl / Nixon method / system / non-transitory computer-readable medium for centralized operations management for a plurality of sites teaches the limitations of Independent Claims 1, 8 and 15 above, and Karl further teaches the method / system / non-transitory computer-readable medium for centralized operations management for a plurality of sites comprising:

- wherein the first site data includes operational metrics of the first site and the second site data includes operational metrics of the second site, and the method further comprises generating a first site score for the first site and a second site score for the second site (see at least Karl: Fig. 3 & ¶ [0029] & ¶ [0050-0052].
Karl notes that select facility portion 310 allows the user to change facilities/sites (for any facilities whose data the user is authorized to access) and regenerate the scorecard with the KPIs for the selected facility, wherein the selected facility/site may be, for example, a specified hospital. In the scorecard, each KPI may have text below the KPI name indicating a target value for the particular KPI. In the scorecard, the color of the text for the YTD portion 304 and the last month portion 306 may change depending on the value indicated by the text (i.e., the performance on that metric). For example, the text may be green if the facility/site achieved the target for the particular KPI, the text may be yellow if the facility/site is within 15% of the target for the KPI, and the text may be red if the facility/site does not meet the target. The scorecard may provide users with a high-level, easy-to-read overview of system or individual facility performance. See also Karl at ¶ [0029]: The system and method may be used to generate insightful visuals of key performance metrics and indicators for stakeholders in various healthcare related entities, such as hospitals, health systems and independent laboratories, to drive efficiency and monitor outcomes of performance (including productivity, quality, service, and cost) and process improvement. See also Karl at ¶ [0052]: in this case the measure of the STAT TAT metric. Second, the mini-card acts as a navigation button to more detailed content. Note the blue highlight on the second mini-card, which indicates the active dashboard below it. The system may generate a different dashboard and mini-card for each lab metric of the user and its facilities/sites. See also Karl at Fig. 5 and Figs. 6A-6B.)

13. Claims 4, 6-7, 11, 13-14, 18 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over US PG Pub (US 2020/0365274 A1), hereinafter Karl, et al., in view of Foreign Patent Application (WO 2024/086015 A1), hereinafter Nixon, et al., and in further view of US PG Pub (US 2014/0236668 A1), hereinafter Young, et al.

Regarding Dependent Claims 4, 11 and 18, the Karl / Nixon method / system / non-transitory computer-readable medium for centralized operations management for a plurality of sites does not explicitly disclose, but Young, in the analogous art for a method / system / non-transitory computer-readable medium for centralized operations management for a plurality of sites, does disclose the following:

- further comprising generating a site score for each of the plurality of sites (see at least Young: ¶ [0023] & ¶ [0027-0028] & Figs. 2A-2D. Young notes the development of metrics concerning clinical trial conduct at the site level, and these metrics may be grouped, normalized, or aggregated in order to determine an associated risk level and to utilize such risk levels to determine associated site-level quality scores, in order to more quickly identify trends in the data and to identify areas, personnel, and/or organizations that may require more in-depth review. Other embodiments may aggregate site-level data geographically, such as at a city level, state level, regional level, country level, or worldwide study level (i.e., across all sites within a given study). See also Young at ¶ [0027-0028]: FIG. 2B shows how site-level quality scores and risk indicators may be determined, and FIGS. 2C and 2D show how quality scores and risk indicators above the site level may be determined. Site-level analysis block 201 may take study data 60 and historic data 70 and may determine site-level quality scores, site-metric risk indicators, and site quality risk indicators. Site-level analysis block 201 is partially schematically illustrated in FIG. 2B.
Multiple-site analysis block 202 may take site-metric risk indicators from site-level analysis block 201 and may determine multiple-site quality scores and multiple-site risk indicators.), aggregating the site scores of the plurality of sites (see at least Young: Figs. 4A-4C & ¶ [0023] & ¶ [0044]. Young notes that the development of metrics concerning clinical trial conduct at the site level, and these metrics may be grouped, normalized, or aggregated in order to determine an associated risk level and to utilize such risk levels to determine associated site-level quality scores, in order to more quickly identify trends in the data and to identify areas, personnel, and/or organizations that may require more in-depth review. Other embodiments may aggregate site-level data geographically, such as at a city level, state level, regional level, country level, or worldwide study level (i.e., across all sites within a given study). See also Young at ¶ [0044]: Referring to FIG. 2B, after all of the metrics for a site (e.g., site 112) have been grouped, normalized, or aggregated into site-metric risk indicators (e.g., R1-R5), an aggregator such as aggregator 241 aggregates site-metric risk indicators R1-R5 into a site-level quality score 281 using an aggregation algorithm such as aggregation algorithm 2401. It is desirable that all the site-metric risk indicators that are aggregated to calculate a site-level quality score be determined using the same number of risk regions, e.g., R1-R5 all have L, M, or H designations, so that the aggregation may be performed using comparable metrics. See also Young at Fig. 4A at step 435 noting: “Aggregating normalized site-metric risk indicators into site-level quality score.”) to provide a normalized benchmark score that is based on an average of the site scores of each of the plurality of sites (see at least Young: Figs. 3A-3C & ¶ [0034] & ¶ [0041] & ¶ [0048]. 
Young teaches that each metric risk profiler 231-235 receives a value associated with metrics M1-M5 and, using a metric risk profile which is typically specific to each metric risk profiler 231-235, may calculate or normalize or categorize the metric into a site-metric risk indicator, e.g., R1-R30 (metric risk indicators R6-R25 are not shown in FIG. 2B). See also Young at ¶ [0041]: The metric risk profiles shown in FIGS. 3A-3C comprise a benchmark and risk regions that can be determined in relation to deviations from the benchmark. (In this context, a region of no risk may be considered a “risk region.”) The benchmark may be a historic mean or median, an industry mean or median, a study mean or median, or combinations of these. See also Young at ¶ [0048]: The metric values are then available to metric risk profilers 231-235 to normalize the metrics into a site-metric risk indicator (e.g., low, medium, or high) R26-R30. See also Young at Figs. 3A-3C & Fig. 5. See also Young at Fig. 4B step 460 -> “Normalize metric values based on metric1 risk profile” and step 465 -> “Aggregating Normalized site-metric risk indicators into quality score for multiple sites for metric1.” See also Fig. 4C step 480 -> “Normalize metric values based on metric risk profiles” and Fig. 4C step 485 -> “aggregate normalized site-metric risk indicators into overall quality score for multiple sites”.) 
It would have been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined / modified the teachings of the Karl / Nixon method / system / non-transitory computer-readable medium for centralized operations management for a plurality of sites with the aforementioned teachings of: further comprising generating a site score for each of the plurality of sites, aggregating the site scores of the plurality of sites to provide a normalized benchmark score that is based on an average of the site scores of each of the plurality of sites, and in further view of Young, whereby the systems of Young may be used to normalize data site metrics and combine a plurality of the metrics to determine a site-level quality score. Normalization may be accomplished by applying metric risk profiles to the metrics. These methods allow clinical trial administrators to review data from multiple clinical trials and clinical trial sites and determine at a glance whether a trial site may be risky or may be providing bad data. Such problem sites can then be addressed as quickly and as efficiently as possible. These methods reduce the cost of monitoring a clinical trial because they focus the monitor's attention on those sites that may not be performing as well as needed (see at least Young: ¶ [0102]).

Further, the claimed invention is merely a combination of old elements in a similar field for centralized operations management for a plurality of sites and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Young, the results of the combination were predictable.
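The aggregation limitation mapped above (per-site scores averaged into a normalized benchmark) reduces to simple arithmetic that can be sketched as follows. The scoring rule (fraction of metrics meeting target) and all names here are hypothetical assumptions for illustration; Young's actual risk-profile normalization is more involved.

```python
# Hypothetical sketch of the claimed aggregation: generate a score per site,
# then average the per-site scores into a normalized benchmark score.
# Illustrates the limitation's arithmetic only, not Young's method.
def site_score(metrics: dict, targets: dict) -> float:
    """Score a site as the fraction of its metrics meeting their targets."""
    met = sum(1 for name, value in metrics.items() if value >= targets[name])
    return met / len(metrics)


def normalized_benchmark(all_site_metrics: list, targets: dict) -> float:
    """Average the per-site scores across the plurality of sites."""
    scores = [site_score(m, targets) for m in all_site_metrics]
    return sum(scores) / len(scores)
```

Each site can then be compared against the benchmark to flag under-performers, which is the use the examiner attributes to the combined references.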
Regarding Dependent Claims 6, 13 and 20, the Karl / Nixon / Young method / system / non-transitory computer-readable medium for centralized operations management for a plurality of sites teaches the limitations of Claims 1, 3-4, 8, 10-11, 15 and 17-18 above, and Young further teaches the method / system / non-transitory computer-readable medium for centralized operations management for a plurality of sites comprising:

- wherein the operational metrics include performance indicators based on operations, events, and/or tasks (see at least Young: Figs. 2A-2D & ¶ [0015] & ¶ [0026-0028] & ¶ [0031]. Young notes that the data may be associated with a clinical trial for a drug or medical device, or may be any other type of data that can assess operation at a site, including but not limited to performance of sales associates at different locations or performance of retail locations, including franchises. As will be described in more detail below, risk assessment apparatus 10 may calculate one or more quality scores 80 and risk indicators 90 based on study data 60, received from the various sites, and historic or industry or other data 70 received from any source of data. Quality scores 80 and risk indicators 90 may then be used to evaluate each site, groupings of sites, or a study as a whole. See also Young at ¶ [0002]: A number of similar sites geographically distributed that perform similar types of tasks. Examples of these systems are franchise systems, sales offices of a company, and clinical drug trials. It may be desirable to monitor data generated at these sites to ensure uniformity of operation and integrity of the data. Such quality monitoring may be performed on site or remotely.
See also Young at ¶ [0015]: The systems and methods disclosed herein may be used in or with clinical drug or device trials, monitoring of sales operations and associates, monitoring of retail services and locations, and other data-intensive applications in which users may desire to assess quickly the quality of data coming from a variety of sources. For example, it may be appreciated that the present invention could be utilized in sales, retail, or franchise organizations, wherein the quality of data generated by remote offices or individuals in compliance or conjunction with a centralized office or rules could be monitored or assessed. See also Young at ¶ [0028]: Multiple-site analysis block 202 may take site-metric risk indicators from site-level analysis block 201 and may determine multiple-site quality scores and multiple-site risk indicators. See also Young at ¶ [0031]: The metrics could include query rate, subject visit to entry cycle time, query response cycle time, screen failure rate, early termination rate, adverse event (AE) rate, severe adverse event (SAE) rate, protocol deviation rate, and/or visit schedule deviation rate, as well as other metrics which may be appreciated by a person of ordinary skill in the art.)

It would have been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined / modified the teachings of the Karl / Nixon / Young method / system / non-transitory computer-readable medium for centralized operations management for a plurality of sites with the aforementioned teachings of: wherein the operational metrics include performance indicators based on operations, events, and/or tasks, and in further view of Young, whereby the systems of Young may be used to normalize data site metrics and combine a plurality of the metrics to determine a site-level quality score. Normalization may be accomplished by applying metric risk profiles to the metrics.
These methods allow clinical trial administrators to review data from multiple clinical trials and clinical trial sites and determine at a glance whether a trial site may be risky or may be providing bad data. Such problem sites can then be addressed as quickly and as efficiently as possible. These methods reduce the cost of monitoring a clinical trial because they focus the monitor's attention on those sites that may not be performing as well as needed (see at least Young: ¶ [0102]). Further, the claimed invention is merely a combination of old elements in a similar field for centralized operations management for a plurality of sites and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Young, the results of the combination were predictable.

Regarding Dependent Claims 7 and 14, the Karl / Nixon / Young method / system for centralized operations management for a plurality of sites teaches the limitations of Claims 1, 3-4, 6, 8, 10-11 and 13 above, and Young further teaches the method / system for centralized operations management for a plurality of sites comprising:

- wherein the operational metrics of each of the plurality of sites (see at least Young: Figs. 2A-2D & ¶ [0015] & ¶ [0026-0028] & ¶ [0031]. Young notes that the data may be associated with a clinical trial for a drug or medical device, or may be any other type of data that can assess operation at a site, including but not limited to performance of sales associates at different locations or performance of retail locations, including franchises. As will be described in more detail below, risk assessment apparatus 10 may calculate one or more quality scores 80 and risk indicators 90 based on study data 60, received from the various sites, and historic or industry or other data 70 received from any source of data.
Quality scores 80 and risk indicators 90 may then be used to evaluate each site, groupings of sites, or a study as a whole. See also Young at ¶ [0002]: A number of similar sites geographically distributed that perform similar types of tasks. Examples of these systems are franchise systems, sales offices of a company, and clinical drug trials. It may be desirable to monitor data generated at these sites to ensure uniformity of operation and integrity of the data. Such quality monitoring may be performed on site or remotely. See also Young at ¶ [0015]: The systems and methods disclosed herein may be used in or with clinical drug or device trials, monitoring of sales operations and associates, monitoring of retail services and locations, and other data-intensive applications in which users may desire to assess quickly the quality of data coming from a variety of sources. For example, it may be appreciated that the present invention could be utilized in sales, retail, or franchise organizations, wherein the quality of data generated by remote offices or individuals in compliance or conjunction with a centralized office or rules could be monitored or assessed. See also Young at ¶ [0028]: Multiple-site analysis block 202 may take site-metric risk indicators from site-level analysis block 201 and may determine multiple-site quality scores and multiple-site risk indicators. See also Young at ¶ [0031]: The metrics could include query rate, subject visit to entry cycle time, query response cycle time, screen failure rate, early termination rate, adverse event (AE) rate, severe adverse event (SAE) rate, protocol deviation rate, and/or visit schedule deviation rate, as well as other metrics which may be appreciated by a person of ordinary skill in the art.) is measured based on a worker at each of the plurality of sites, a team at each of the plurality of sites, and/or an area of each of the plurality of sites (see at least Young: ¶ [0061-0063] & Figs. 5-6. 
Young teaches that the operational metrics of each of the plurality of sites are measured based on a geographic location/geographic area of each of the plurality of sites, which is demonstrated via Figs. 5-6. See also Young at ¶ [0043]: “Determining site-metric risk indicators for individual sites, embodiments of the present invention can determine the risk indicator for groups of sites, such as, for example, a city, region, state, country, continent, or study as a whole, or other groupings of sites not related to geography.” See also Young at ¶ [0058]: “Higher-level” in this case refers to groupings or geographies that are more inclusive than site level, e.g., regional level, state level, country level, study level, etc. Groupings by geographic region are illustrative, but any relevant groupings having a common element may apply.)

It would have been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined / modified the teachings of the Karl / Nixon / Young method / system for centralized operations management for a plurality of sites with the aforementioned teachings of: wherein the operational metrics of each of the plurality of sites is measured based on a worker at each of the plurality of sites, a team at each of the plurality of sites, and/or an area of each of the plurality of sites, and in further view of Young, whereby the systems of Young may be used to normalize data site metrics and combine a plurality of the metrics to determine a site-level quality score. Normalization may be accomplished by applying metric risk profiles to the metrics. These methods allow clinical trial administrators to review data from multiple clinical trials and clinical trial sites and determine at a glance whether a trial site may be risky or may be providing bad data. Such problem sites can then be addressed as quickly and as efficiently as possible.
These methods reduce the cost of monitoring a clinical trial because they focus the monitor's attention on those sites that may not be performing as well as needed (see at least Young: ¶ [0102]). Further, the claimed invention is merely a combination of old elements in a similar field for centralized operations management for a plurality of sites and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Young, the results of the combination were predictable.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DERICK HOLZMACHER, whose telephone number is (571) 270-7853. The examiner can normally be reached Monday-Friday, 9:00 AM – 6:30 PM EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, Applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Brian Epstein, can be reached at 571-270-5389. The fax phone number for the organization where this application or proceeding is assigned is 571-270-8853.

Information regarding the status of an application may be obtained from Patent Center. Status information for published applications may be obtained from Patent Center. Status information for unpublished applications is available through Patent Center for authorized users only. Should you have questions about access to Patent Center, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

/DERICK J HOLZMACHER/
Patent Examiner, Art Unit 3625A

/BRIAN M EPSTEIN/
Supervisory Patent Examiner, Art Unit 3625

Prosecution Timeline

Nov 01, 2022
Application Filed
May 30, 2025
Non-Final Rejection — §101, §103
Aug 19, 2025
Response Filed
Dec 06, 2025
Final Rejection — §101, §103
Feb 17, 2026
Response after Non-Final Action
Mar 11, 2026
Request for Continued Examination
Mar 12, 2026
Response after Non-Final Action
Mar 17, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586015
RESOURCE-RELATED FORECASTING USING MACHINE LEARNING TECHNIQUES
2y 5m to grant • Granted Mar 24, 2026
Patent 12561708
SYSTEMS AND METHODS FOR PREDICTING CHURN IN A MULTI-TENANT SYSTEM
2y 5m to grant • Granted Feb 24, 2026
Patent 12499404
SYSTEM AND METHOD FOR QUALITY PLANNING DATA EVALUATION USING TARGET KPIS
2y 5m to grant • Granted Dec 16, 2025
Patent 12493838
Translation Decision Assistant
2y 5m to grant • Granted Dec 09, 2025
Patent 12450541
SYSTEMS AND METHODS FOR PROVIDING TIERED SUBSCRIPTION DATA STORAGE IN A MULTI-TENANT SYSTEM
2y 5m to grant • Granted Oct 21, 2025
Based on the 5 most recent grants by this examiner.


Prosecution Projections

3-4
Expected OA Rounds
44%
Grant Probability
73%
With Interview (+28.4%)
3y 3m
Median Time to Grant
High
PTA Risk
Based on 270 resolved cases by this examiner. Grant probability derived from career allow rate.
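The headline probabilities follow directly from the stated counts, assuming the interview lift is applied as a simple additive percentage-point adjustment (an inference from the display, not a documented formula):

```python
# Reproduce the dashboard's headline figures from its stated counts.
# The additive-lift formula is an assumption inferred from the display.
granted, resolved = 120, 270            # examiner's career grants / resolved cases
allow_rate = granted / resolved         # ≈ 0.444, shown as 44%
with_interview = allow_rate + 0.284     # +28.4 pp interview lift, shown as 73%
```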
