Prosecution Insights
Last updated: April 19, 2026
Application No. 17/942,651

HIERARCHICAL BUILDING PERFORMANCE DASHBOARD WITH KEY PERFORMANCE INDICATORS ALONGSIDE RELEVANT SERVICE CASES

Final Rejection (§101, §103)
Filed: Sep 12, 2022
Examiner: DIVELBISS, MATTHEW H
Art Unit: 3624
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Honeywell International Inc.
OA Round: 4 (Final)

Grant Probability: 23% (At Risk)
Predicted OA Rounds: 5-6
Predicted Time to Grant: 4y 1m
Grant Probability With Interview: 46%

Examiner Intelligence

Career Allow Rate: 23% (83 granted / 367 resolved; -29.4% vs TC avg)
Interview Lift: strong, +23.4% for resolved cases with an interview (46% with vs 23% without)
Typical Timeline: 4y 1m avg prosecution; 50 applications currently pending
Career History: 417 total applications across all art units

Statute-Specific Performance

§101: 37.0% (-3.0% vs TC avg)
§103: 43.5% (+3.5% vs TC avg)
§102: 10.2% (-29.8% vs TC avg)
§112: 6.9% (-33.1% vs TC avg)

Deltas are measured against a Tech Center average estimate. Based on career data from 367 resolved cases.
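The headline figures above are internally consistent; a quick sketch of the (assumed) arithmetic relating the displayed percentages to the raw counts shown, with variable names being illustrative rather than anything from the analytics tool:

```python
# Assumed formulas, not the vendor's: derive the headline examiner stats
# from the raw counts displayed above (83 granted out of 367 resolved).

def allow_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

career = allow_rate(83, 367)     # ~22.6%, displayed rounded as 23%
interview_lift = 46.0 - career   # ~+23.4%, matching the "Interview Lift" figure
implied_tc_avg = career + 29.4   # "-29.4% vs TC avg" implies a TC average near 52%
```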

Office Action

Statutes at issue: §101, §103
DETAILED ACTION

The following is a Final Office action. In response to Examiner’s communication of 5/1/25, Applicant, on 7/29/25, amended claims 1, 19, 20, 24, 27, and 30. Claims 1-30 are now pending and have been rejected as indicated below.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendments

Applicant’s amendments are acknowledged. The 35 USC § 101 rejection of claims 1-30 in regard to abstract ideas has been maintained in light of Applicant’s amendments and explanations. Revised 35 USC § 103 rejections of claims 1-30 are applied in light of Applicant’s amendments and explanations.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-30 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Here, under considerations of the broadest reasonable interpretation of the claimed invention, Examiner finds that the Applicant invented a method and system for determining impact of service cases on indicators for various levels of building components and visualizing the associated metrics analyzed. Examiner formulates an abstract idea analysis, following the framework described in the MPEP, as follows:

Step 1: The claims are directed to a statutory category, namely a "method" (claims 1-18 and 24-30) and "system" (claims 19-23). 
Step 2A - Prong 1: The claims are found to recite limitations that set forth the abstract idea(s), namely, regarding claim 1: wherein the building model models two or more hierarchical levels of the building system, wherein a particular hierarchical level includes a parent object associated with the building system, and the building model includes one or more child objects associated with the building system at a next lower hierarchical level and under the parent object; …wherein the operational data comprises at least one of one or more set points and one or more threshold points to indicate desired values for a plurality of parameters associated with the building system; … determining an Occupant Health Performance Indicator for each of the one or more child objects based at least in part on the operational data and the building model using one or more rules applied against the operational data; …identifying one or more service cases impacting the Occupant Health Performance Indicator for each of the one or more child objects using a rules engine and contextual information associated with the two or more building assets; … determining an Occupant Health Performance Indicator for the parent object based at least in part on the Occupant Health Performance Indicators for each of the one or more child objects, wherein the Occupant Health Performance Indicator for the parent object includes an aggregation of the Occupant Health Performance Indicators for each of the one or more child objects; … displaying the Occupant Health Performance Indicator for the parent object along with one or more first service cases related to at least one building asset associated with the parent object having the Occupant Health Performance Indicators below a threshold to assist in control of one or more operations of the at least one building asset associated with the parent object; executing identification of a root cause service case from one or more second service cases associated 
with the one or more child objects causing degradation in the Occupant Health Performance Indicator of the parent object wherein the root cause service case is associated with a fault that negatively impacts the Occupant Health Performance Indicators; … displaying the Occupant Health Performance Indicators for each of the one or more child objects along with the one or more second service cases related to the at least one building asset associated with each of the one or more child objects having the Occupant Health Performance Indicators below the threshold to assist in control of the one or more operations of the at least one building asset associated with a child object wherein at least one service case of the one or more second service cases relates to a root cause identified as a negative impact on the Occupant Health Performance Indicator.

Independent claims 19, 20, 24, 27, and 30 recite substantially similar claim language. Dependent claims 2-18, 21-23, 25-26, and 28-29 recite the same or similar abstract idea(s) as independent claims 1, 19, 20, 24, 27, and 30 with merely a further narrowing of the abstract idea(s) to particular data characterization and/or additional data analyses performed as part of the abstract idea. The limitations in claims 1-30 above fall well within the groupings of subject matter identified by the courts as being abstract concepts; specifically, the claims are found to correspond to the category of: a. 
"Certain methods of organizing human activity - fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations); managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions)" as the limitations identified above are directed to managing the selection, analysis, and user review of data relating to a building system as part of user management and review of the building performance and service cases and thus is a method of organizing human activity including at least commercial or business interactions or relations and/or a management of user personal behavior; and/or b. "Mental processes - concepts performed in the human mind (including an observation, evaluation, judgment, opinion)" as the limitations identified above include mere data observations, evaluations, judgments, and/or opinions, e.g. including user observation and evaluation of performance data and selecting, i.e. providing judgment or opinion, of the particular levels/granularity of data the user desires to review, which is capable of being performed mentally and/or using pen and paper.

Step 2A - Prong 2: Claims 1-30 are found to clearly be directed to the abstract idea identified above because the claims, as a whole, fail to integrate the claimed judicial exception into a practical application, specifically the claims recite the additional elements of: "a display displaying the Occupant Health Performance Indicator for the parent object; and the display displaying the Occupant Health Performance Indicators for each of the one or more child objects. 
" (claim 1, 18, 20, 23, 27, 30) "after the display displays the Occupant Health Performance Indicator for the parent object, receiving an indication from a user that additional information is desired, and in response, the display displaying the Occupant Health Performance Indicators for each of the one or more child objects" (claims 2, 25, 28), however the aforementioned elements directed to the receiving of user input/selection of data to view via a dashboard and displaying corresponding data via the dashboard merely amount to generic GUI elements of a general purpose computer used to "apply" the abstract idea (MPEP 2106.05(f)) and/or is merely an attempt at limiting the abstract idea of analysis and review/visualization of performance metrics to a particular field of use/technological environment of a GUI dashboard (MPEP 2106.05(h)) and therefore the GUI dashboard input and display of data fails to integrate the abstract idea into a practical application;

"wherein the cloud server uses machine learning to analyze at least some of the operational data to identify one or more anomaly and/or fault in the operation of the building system" (claim 4), however the aforementioned elements merely amount to generic components of a general purpose computer used to "apply" the abstract idea (MPEP 2106.05(f)) and thus fail to integrate the recited abstract idea into a practical application; furthermore, the high-level recitation of applying machine learning to a generic "building system" is at most an attempt to limit the abstract idea to a particular field of use (MPEP 2106.05(h)), e.g.: "For instance, a data gathering step that is limited to a particular data source (such as the Internet) or a particular type of data (such as power grid data or XML tags) could be considered to be both insignificant extra-solution activity and a field of use limitation. 
See, e.g., Ultramercial, 772 F.3d at 716, 112 USPQ2d at 1755 (limiting use of abstract idea to the Internet); Electric Power, 830 F.3d at 1354, 119 USPQ2d at 1742 (limiting application of abstract idea to power grid data); Intellectual Ventures I LLC v. Erie Indem. Co., 850 F.3d 1315, 1328-29, 121 USPQ2d 1928, 1939 (Fed. Cir. 2017) (limiting use of abstract idea to use with XML tags).") and/or merely insignificant extra-solution activity (MPEP 2106.05(g)) and thus further fails to integrate the abstract idea into a practical application;

"A method comprising: a cloud server storing a building model that models a building system" (claims 1, 18, 20, 23, 27, 30) and "wherein the cloud server uses at least some of the operational data to define the building model" (claim 5), however the receiving of data from these various sources is merely insignificant extra-solution activity, e.g. data gathering, and/or merely an attempt at limiting the abstract idea to a particular field of use and thus fails to integrate the recited abstract idea into a practical application (e.g. MPEP 2106.05(h): "Examiners should keep in mind that this consideration overlaps with other considerations, particularly insignificant extra-solution activity (see MPEP § 2106.05(g)). For instance, a data gathering step that is limited to a particular data source (such as the Internet) or a particular type of data (such as power grid data or XML tags) could be considered to be both insignificant extra-solution activity and a field of use limitation. See, e.g., Ultramercial, 772 F.3d at 716, 112 USPQ2d at 1755 (limiting use of abstract idea to the Internet); Electric Power, 830 F.3d at 1354, 119 USPQ2d at 1742 (limiting application of abstract idea to power grid data); Intellectual Ventures I LLC v. Erie Indem. Co., 850 F.3d 1315, 1328-29, 121 USPQ2d 1928, 1939 (Fed. Cir. 
2017) (limiting use of abstract idea to use with XML tags).");

Step 2B: Claims 1-30 do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements as described above with respect to Step 2A Prong 2 merely amount to a general purpose computer that attempts to apply the abstract idea in a technological environment (MPEP 2106.05(f)), including merely limiting the abstract idea to a particular field of use of analysis of a "building system" via a GUI "dashboard", as explained above, and/or performs insignificant extra-solution activity, e.g. data gathering or output (MPEP 2106.05(g)), as identified above, which is further found under Step 2B to be merely well-understood, routine, and conventional activity as evidenced by MPEP 2106.05(d)(II) (describing conventional activities that include transmitting and receiving data over a network, electronic recordkeeping, storing and retrieving information from memory, electronically scanning or extracting data from a physical document, and a web browser's back and forward button functionality). Therefore, similarly, the combination and arrangement of the above identified additional elements, when analyzed under Step 2B, also fails to necessitate a conclusion that the claims amount to significantly more than the abstract idea directed to determining impact of service cases for various levels of building components and visualizing the associated metrics according to user desired/selected levels of granularity.

Claims 1-30 are accordingly rejected under 35 USC § 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea(s)) without significantly more.

Note: The analysis above applies to all statutory categories of invention. 
As such, the presentment of any claim otherwise styled as a machine or manufacture, for example, would be subject to the same analysis. For further authority and guidance, see MPEP § 2106 (https://www.uspto.gov/patents/laws/examination-policy/subject-matter-eligibility).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-30 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication Number 2019/0302157 to Vitullo et al. (hereafter referred to as Vitullo) in view of U.S. Patent Application Publication Number 2018/0234328 to Hall et al. (hereafter referred to as Hall).

As per claim 1, Vitullo teaches: A method comprising: a cloud server storing a building model that models a building system (Paragraph Number [0242] teaches or as a cloud-based application (e.g., one of remote systems and applications 444); [0282]: as a remote (e.g., cloud-based) analytics system outside BMS 500.; Fig. 6; Paragraph Number [0172] teaches fault detection rules 620 can be defined by a user via a rules editor 624 or received from an external system or device via analytics web service 618.; Paragraph Number [0199] teaches analytics web service 618 receives fault detection rules 620 and reasons 622 from a web-based rules editor 624. 
For example, if rules editor 624 is a web-based application, analytics web service 618 can receive rules 620 and reasons 622 from rules editor 624. Paragraph Number [0173] teaches fault detection rules 620 can define a fault as a data value above or below a threshold value. As another example, fault detection rules 620 can define a fault as a data value outside a predetermined range of values. The threshold value and predetermined range of values can be based on the type of timeseries data (e.g., meter data, calculated data, etc.), the type of variable represented by the timeseries data (e.g., temperature, humidity, energy consumption, etc.), the system or device that measures or provides the timeseries data (e.g., a temperature sensor, a humidity sensor, a chiller, etc.), and/or other attributes of the timeseries data)). wherein the building model models two or more hierarchical levels of the building system (Describing hierarchical dashboard visualizations of metrics associated with the respective levels of granularity, e.g. Fig. 19: "1922", 26, 34, Paragraph Number [0263] teaches when meter tab 3402 is selected, a user can expand the hierarchy 3404 shown in navigation pane 1902 to show various energy meters 3406 and 3408 located within each of the buildings. For example, the Main Building 2602 is shown to include a floor 3410 (i.e., Floor 1) which includes a "Main Electric Meter" 3406 and a "Main Gas Meter" 3408. Selecting any of the meters 3406-3408 in meter tab 3402 may cause overview dashboard 1900 to display detailed meter data for the selected meter.; Paragraph Number [0254] teaches instead of breaking down the energy-related data by facility, the data may be broken down by individual buildings within the selected facility. For example, Ace Facility 1914 is shown to include a single building 2602 titled "Main Building." 
When building 2602 is selected, EUI chart 1918 and energy consumption tracker 1922 may display energy consumption data for the selected building 2602. If additional buildings were included in the selected facility 1914, energy-related data for such buildings may also be displayed when the facility 1914 is selected. Paragraph Number [0269] teaches spaces may include, for example, portfolios 3704, facilities 3706-3708, buildings 3710-3712, floors 3714-3716, zones, rooms, or other types of spaces at any level of granularity. Paragraph Number [0306] teaches baseline comparison module 5212 can compare timeseries for an entire facility, a particular building, space, room, zone, meter (both physical meters and virtual meters), or any other level at which timeseries data can be collected, stored, or aggregated). wherein a particular hierarchical level includes a parent object associated with the building system (Paragraph Number [0263] teaches when meter tab 3402 is selected, a user can expand the hierarchy 3404 shown in navigation pane 1902 to show various energy meters 3406 and 3408 located within each of the buildings. For example, the Main Building 2602 is shown to include a floor 3410 (i.e., Floor 1) which includes a "Main Electric Meter" 3406 and a "Main Gas Meter" 3408. Selecting any of the meters 3406-3408 in meter tab 3402 may cause overview dashboard 1900 to display detailed meter data for the selected meter). and the building model includes one or more child objects associated with the building system at a next lower hierarchical level and under the parent object (Paragraph Number [0263] teaches when meter tab 3402 is selected, a user can expand the hierarchy 3404 shown in navigation pane 1902 to show various energy meters 3406 and 3408 located within each of the buildings. For example, the Main Building 2602 is shown to include a floor 3410 (i.e., Floor 1) which includes a "Main Electric Meter" 3406 and a "Main Gas Meter" 3408. 
Selecting any of the meters 3406-3408 in meter tab 3402 may cause overview dashboard 1900 to display detailed meter data for the selected meter). the cloud server receiving operational data from two or more building assets of the building system over a network (describing fault detection, e.g. Fig. 52, 58; Paragraph Number [0364] teaches may generate a stuck point fault indication 8902 (shown in FIG. 89). Analytics service 524 may display the stuck point fault indication 8902 along with other fault indications in pending faults window 8900. Paragraph Number [0112] teaches FDD layer 416 can use some content of the data stores to identify faults at the equipment level (e.g., specific chiller, specific AHU, specific terminal unit, etc.) and other content to identify faults at component or subsystem levels. Paragraph Number [0317] teaches the fault indications can be stored as timeseries data in local storage 514 or hosted storage 516 or provided to applications 530, client devices 448, and/or remote systems and applications 444). wherein the operational data comprises at least one of one or more set points and one or more threshold points to indicate desired values for a plurality of parameters associated with the building system (Describing hierarchical dashboard visualizations of metrics associated with the respective levels of granularity, e.g. Fig. 19: "1922", 26, 34, Paragraph Number [0263] teaches when meter tab 3402 is selected, a user can expand the hierarchy 3404 shown in navigation pane 1902 to show various energy meters 3406 and 3408 located within each of the buildings. For example, the Main Building 2602 is shown to include a floor 3410 (i.e., Floor 1) which includes a "Main Electric Meter" 3406 and a "Main Gas Meter" 3408. 
Selecting any of the meters 3406-3408 in meter tab 3402 may cause overview dashboard 1900 to display detailed meter data for the selected meter.; Paragraph Number [0254] teaches instead of breaking down the energy-related data by facility, the data may be broken down by individual buildings within the selected facility. For example, Ace Facility 1914 is shown to include a single building 2602 titled "Main Building." When building 2602 is selected, EUI chart 1918 and energy consumption tracker 1922 may display energy consumption data for the selected building 2602. If additional buildings were included in the selected facility 1914, energy-related data for such buildings may also be displayed when the facility 1914 is selected. Paragraph Number [0269] teaches spaces may include, for example, portfolios 3704, facilities 3706-3708, buildings 3710-3712, floors 3714-3716, zones, rooms, or other types of spaces at any level of granularity. Paragraph Number [0306] teaches baseline comparison module 5212 can compare timeseries for an entire facility, a particular building, space, room, zone, meter (both physical meters and virtual meters), or any other level at which timeseries data can be collected, stored, or aggregated Paragraph Number [0173] teaches fault detection rules 620 can define a fault as a data value above or below a threshold value. As another example, fault detection rules 620 can define a fault as a data value outside a predetermined range of values. The threshold value and predetermined range of values can be based on the type of timeseries data (e.g., meter data, calculated data, etc.), the type of variable represented by the timeseries data (e.g., temperature, humidity, energy consumption, etc.), the system or device that measures or provides the timeseries data (e.g., a temperature sensor, a humidity sensor, a chiller, etc.), and/or other attributes of the timeseries data.)). 
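For readers tracking the technical mapping, the threshold-and-range rule structure that Vitullo's paragraph [0173] describes (a fault as a value above or below a threshold, or outside a predetermined range) can be sketched roughly as follows; class and field names are hypothetical illustrations, not Vitullo's:

```python
# Hypothetical sketch of Vitullo [0173]-style fault detection rules:
# a sample is a fault when it falls outside a predetermined range.

from dataclasses import dataclass

@dataclass
class FaultRule:
    low: float    # lower bound of the acceptable range
    high: float   # upper bound of the acceptable range

    def is_fault(self, value: float) -> bool:
        # Outside [low, high] qualifies as a fault.
        return value < self.low or value > self.high

# e.g. a zone-temperature rule with an acceptable range of 68-75 degrees F
rule = FaultRule(low=68.0, high=75.0)
faults = [s for s in (70.2, 77.5, 66.9) if rule.is_fault(s)]  # -> [77.5, 66.9]
```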
using one or more rules applied against the operational data (Paragraph Number [0172] teaches meter fault detector 614 and scalable rules engine 606 receive fault detection rules 620 and/or reasons 622 from analytics service 618. Fault detection rules 620 can be defined by a user via a rules editor 624 or received from an external system or device via analytics web service 618. In various embodiments, fault detection rules 620 and reasons 622 can be stored in rules database 632 and reasons database 634 within local storage 514 and/or rules database 640 and reasons database 642 within hosted storage 516. Meter fault detector 614 and scalable rules engine 606 can retrieve fault detection rules 620 from local storage 514 or hosted storage and use fault detection rules 620 to analyze the timeseries data. Paragraph Number [0194] teaches scalable rules engine 606 can retrieve any timeseries from timeseries storage 515 and generate fault detection timeseries using the retrieved data timeseries. Scalable rules engine 606 can apply fault detection rules to the timeseries data to determine whether each sample of the timeseries data qualifies as a fault). using a rules engine and contextual information associated with the two or more building assets (Paragraph Number [0111] teaches FDD layer 416 can be configured to output a specific identification of the faulty component or cause of the fault (e.g., loose damper linkage) using detailed subsystem inputs available at building subsystem integration layer 420. In other exemplary embodiments, FDD layer 416 is configured to provide “fault” events to integrated control layer 418 which executes control strategies and policies in response to the received fault events. 
According to an exemplary embodiment, FDD layer 416 (or a policy executed by an integrated control engine or business rules engine) can shut-down systems or direct control activities around faulty devices or systems to reduce energy waste, extend equipment life, or assure proper control response). a display displaying the Occupant Health Performance Indicator for the parent object (Describing hierarchical dashboard visualizations of metrics associated with the respective levels of granularity, e.g. Fig. 19: "1922", 26, 34, Paragraph Number [0263] teaches when meter tab 3402 is selected, a user can expand the hierarchy 3404 shown in navigation pane 1902 to show various energy meters 3406 and 3408 located within each of the buildings. For example, the Main Building 2602 is shown to include a floor 3410 (i.e., Floor 1) which includes a "Main Electric Meter" 3406 and a "Main Gas Meter" 3408. Selecting any of the meters 3406-3408 in meter tab 3402 may cause overview dashboard 1900 to display detailed meter data for the selected meter.; Paragraph Number [0254] teaches instead of breaking down the energy-related data by facility, the data may be broken down by individual buildings within the selected facility. For example, Ace Facility 1914 is shown to include a single building 2602 titled "Main Building." When building 2602 is selected, EUI chart 1918 and energy consumption tracker 1922 may display energy consumption data for the selected building 2602. If additional buildings were included in the selected facility 1914, energy-related data for such buildings may also be displayed when the facility 1914 is selected. Paragraph Number [0269] teaches spaces may include, for example, portfolios 3704, facilities 3706-3708, buildings 3710-3712, floors 3714-3716, zones, rooms, or other types of spaces at any level of granularity. 
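Setting the legal analysis aside for a moment, the parent/child roll-up recited in claim 1 and mapped above can be pictured concretely. A minimal sketch, where all names and the choice of mean as the aggregation function are hypothetical illustrations rather than anything disclosed in the application or the references:

```python
# Hypothetical sketch of the claimed roll-up: a parent object's indicator
# aggregated from its children's, with below-threshold children surfaced
# alongside their service cases.

from dataclasses import dataclass, field

@dataclass
class BuildingObject:
    name: str
    indicator: float = 0.0                          # per-object KPI (0-100)
    service_cases: list = field(default_factory=list)
    children: list = field(default_factory=list)

def rollup(obj: BuildingObject) -> float:
    """Parent indicator = aggregation (here, the mean) of child indicators."""
    if obj.children:
        obj.indicator = sum(rollup(c) for c in obj.children) / len(obj.children)
    return obj.indicator

def degraded(obj: BuildingObject, threshold: float):
    """Children below the threshold, paired with their service cases."""
    return [(c.name, c.service_cases) for c in obj.children
            if c.indicator < threshold]

floor1 = BuildingObject("Floor 1", 90.0)
floor2 = BuildingObject("Floor 2", 40.0, service_cases=["AHU-2 filter fault"])
building = BuildingObject("Main Building", children=[floor1, floor2])

rollup(building)          # building.indicator becomes 65.0
degraded(building, 70.0)  # -> [("Floor 2", ["AHU-2 filter fault"])]
```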
Paragraph Number [0306] teaches baseline comparison module 5212 can compare timeseries for an entire facility, a particular building, space, room, zone, meter (both physical meters and virtual meters), or any other level at which timeseries data can be collected, stored, or aggregated). to assist in control of one or more operations of the at least one building asset associated with the parent object (Paragraph Number [0072] teaches airside system 130 can deliver the airflow supplied by AHU 106 (i.e., the supply airflow) to building 10 via air supply ducts 112 and can provide return air from building 10 to AHU 106 via air return ducts 114. In some embodiments, airside system 130 includes multiple variable air volume (VAV) units 116. For example, airside system 130 is shown to include a separate VAV unit 116 on each floor or zone of building 10. VAV units 116 can include dampers or other flow control elements that can be operated to control an amount of the supply airflow provided to individual zones of building 10. In other embodiments, airside system 130 delivers the supply airflow into one or more zones of building 10 (e.g., via supply ducts 112) without using intermediate VAV units 116 or other flow control elements. AHU 106 can include various sensors (e.g., temperature sensors, pressure sensors, etc.) configured to measure attributes of the supply airflow. AHU 106 can receive input from sensors located within AHU 106 and/or within the building zone and can adjust the flow rate, temperature, or other attributes of the supply airflow through AHU 106 to achieve setpoint conditions for the building zone. (See also Paragraph Number [0083])). the display displaying the Occupant Health Performance Indicators for each of the one or more child objects (Describing hierarchical dashboard visualizations of metrics associated with the respective levels of granularity, e.g. Fig. 
19: "1922", 26, 34, Paragraph Number [0263] teaches when meter tab 3402 is selected, a user can expand the hierarchy 3404 shown in navigation pane 1902 to show various energy meters 3406 and 3408 located within each of the buildings. For example, the Main Building 2602 is shown to include a floor 3410 (i.e., Floor 1) which includes a "Main Electric Meter" 3406 and a "Main Gas Meter" 3408. Selecting any of the meters 3406-3408 in meter tab 3402 may cause overview dashboard 1900 to display detailed meter data for the selected meter.; Paragraph Number [0254] teaches instead of breaking down the energy-related data by facility, the data may be broken down by individual buildings within the selected facility. For example, Ace Facility 1914 is shown to include a single building 2602 titled "Main Building." When building 2602 is selected, EUI chart 1918 and energy consumption tracker 1922 may display energy consumption data for the selected building 2602. If additional buildings were included in the selected facility 1914, energy-related data for such buildings may also be displayed when the facility 1914 is selected. Paragraph Number [0269] teaches spaces may include, for example, portfolios 3704, facilities 3706-3708, buildings 3710-3712, floors 3714-3716, zones, rooms, or other types of spaces at any level of granularity. Paragraph Number [0306] teaches baseline comparison module 5212 can compare timeseries for an entire facility, a particular building, space, room, zone, meter (both physical meters and virtual meters), or any other level at which timeseries data can be collected, stored, or aggregated). to assist in control of the one or more operations of the at least one building asset associated with a child object (Paragraph Number [0072] teaches airside system 130 can deliver the airflow supplied by AHU 106 (i.e., the supply airflow) to building 10 via air supply ducts 112 and can provide return air from building 10 to AHU 106 via air return ducts 114. 
In some embodiments, airside system 130 includes multiple variable air volume (VAV) units 116. For example, airside system 130 is shown to include a separate VAV unit 116 on each floor or zone of building 10. VAV units 116 can include dampers or other flow control elements that can be operated to control an amount of the supply airflow provided to individual zones of building 10. In other embodiments, airside system 130 delivers the supply airflow into one or more zones of building 10 (e.g., via supply ducts 112) without using intermediate VAV units 116 or other flow control elements. AHU 106 can include various sensors (e.g., temperature sensors, pressure sensors, etc.) configured to measure attributes of the supply airflow. AHU 106 can receive input from sensors located within AHU 106 and/or within the building zone and can adjust the flow rate, temperature, or other attributes of the supply airflow through AHU 106 to achieve setpoint conditions for the building zone. (See also Paragraph Number [0083])). Vitullo fails to clearly articulate impact of the service cases at a particular level comprises an aggregation of service case(s) impacting a lower level and displaying the service cases/faults at the respective levels of granularity, however, this feature is taught by the following citations from Hall: the cloud server determining an Occupant Health Performance Indicator for each of the one or more child objects based at least in part on the operational data and the building model (Fig. 
1B-3; Abstract: and comparing the performance data for the machine to one or more predefined performance thresholds for the machine to determine a health status for the machine; Paragraph Number [0041] teaches the listing of recent events 210 may include performance information 204 relating to events for the listed machines that can be indicative of a machine's performance (e.g., transitions of metric scores across a threshold value, from one state to another).; Paragraph Number [0058] teaches if a metric Memory Available metric for the EXSRV.115 server transitions from acceptable range into a critical range and the corresponding composite machine score changes to a critical range, the display of the system-level dashboard 200 may be updated dynamically such that the plot 218, the composite machine score 216 and the change value 220 for of entry 212 for the EXSRV.115 server are updated to reflect the new composite machine score; Paragraph Number [0083] teaches determining machine status (block 708) can include determining a machine status (e.g., "OK", "moderate", or "critical") based on the composite machine score and the threshold for the composite machine score defined by the machine level definitions.; Paragraph Number [0047] teaches they may be calculated or selected individually by a user, for example, based on recommended thresholds. For example, the Memory Available may have a first (critical) threshold at 30 MB (24%) (e.g., a metric score of 24) and a second (moderate) threshold at 100 MB (80%) (e.g., a metric score of 90). A metric score above 90 may indicate that the Memory Available is in an "acceptable" or "OK" state, a metric score from 24-89 may indicate that the Memory Available is in an "moderate" state, and metric score below 24 may indicate that the Memory Available is in a "critical" state.... The traffic light 238 for a threshold event may correspond to the status of the corresponding machine metric as a result of the event.
For example, the traffic light 238 for the EXSRV.123/ActiveSyncService event 230 may be red in color because the event included a transition to a "critical" status (e.g., memory below 20%)). the cloud server identifying one or more service cases impacting the Occupant Health Performance Indicator for each of the one or more child objects (Fig. 1B-3; Abstract: and comparing the performance data for the machine to one or more predefined performance thresholds for the machine to determine a health status for the machine; Paragraph Number [0041] teaches the listing of recent events 210 may include performance information 204 relating to events for the listed machines that can be indicative of a machine's performance (e.g., transitions of metric scores across a threshold value, from one state to another).; Paragraph Number [0058] teaches if a metric Memory Available metric for the EXSRV.115 server transitions from acceptable range into a critical range and the corresponding composite machine score changes to a critical range, the display of the system-level dashboard 200 may be updated dynamically such that the plot 218, the composite machine score 216 and the change value 220 for of entry 212 for the EXSRV.115 server are updated to reflect the new composite machine score; Paragraph Number [0083] teaches determining machine status (block 708) can include determining a machine status (e.g., "OK", "moderate", or "critical") based on the composite machine score and the threshold for the composite machine score defined by the machine level definitions.; Paragraph Number [0047] teaches they may be calculated or selected individually by a user, for example, based on recommended thresholds. For example, the Memory Available may have a first (critical) threshold at 30 MB (24%) (e.g., a metric score of 24) and a second (moderate) threshold at 100 MB (80%) (e.g., a metric score of 90).
A metric score above 90 may indicate that the Memory Available is in an "acceptable" or "OK" state, a metric score from 24-89 may indicate that the Memory Available is in an "moderate" state, and metric score below 24 may indicate that the Memory Available is in a "critical" state.... The traffic light 238 for a threshold event may correspond to the status of the corresponding machine metric as a result of the event. For example, the traffic light 238 for the EXSRV.123/ActiveSyncService event 230 may be red in color because the event included a transition to a "critical" status (e.g., memory below 20%)). the cloud server determining an Occupant Health Performance Indicator for the parent object based at least in part on the Occupant Health Performance Indicators for each of the one or more child objects (Fig. 1B-3; Abstract: and comparing the performance data for the machine to one or more predefined performance thresholds for the machine to determine a health status for the machine; Paragraph Number [0041] teaches the listing of recent events 210 may include performance information 204 relating to events for the listed machines that can be indicative of a machine's performance (e.g., transitions of metric scores across a threshold value, from one state to another).; Paragraph Number [0058] teaches if a metric Memory Available metric for the EXSRV.115 server transitions from acceptable range into a critical range and the corresponding composite machine score changes to a critical range, the display of the system-level dashboard 200 may be updated dynamically such that the plot 218, the composite machine score 216 and the change value 220 for of entry 212 for the EXSRV.115 server are updated to reflect the new composite machine score; Paragraph Number [0083] teaches determining machine status (block 708) can include determining a machine status (e.g., "OK", "moderate", or "critical") based on the composite machine score and the threshold for the composite machine 
score defined by the machine level definitions.; Paragraph Number [0047] teaches they may be calculated or selected individually by a user, for example, based on recommended thresholds. For example, the Memory Available may have a first (critical) threshold at 30 MB (24%) (e.g., a metric score of 24) and a second (moderate) threshold at 100 MB (80%) (e.g., a metric score of 90). A metric score above 90 may indicate that the Memory Available is in an "acceptable" or "OK" state, a metric score from 24-89 may indicate that the Memory Available is in an "moderate" state, and metric score below 24 may indicate that the Memory Available is in a "critical" state.... The traffic light 238 for a threshold event may correspond to the status of the corresponding machine metric as a result of the event. For example, the traffic light 238 for the EXSRV.123/ActiveSyncService event 230 may be red in color because the event included a transition to a "critical" status (e.g., memory below 20%)). wherein the Occupant Health Performance Indicator for the parent object includes an aggregation of the Occupant Health Performance Indicators for each of the one or more child objects (Fig.
1B-3; Abstract: and comparing the performance data for the machine to one or more predefined performance thresholds for the machine to determine a health status for the machine; Paragraph Number [0041] teaches the listing of recent events 210 may include performance information 204 relating to events for the listed machines that can be indicative of a machine's performance (e.g., transitions of metric scores across a threshold value, from one state to another).; Paragraph Number [0058] teaches if a metric Memory Available metric for the EXSRV.115 server transitions from acceptable range into a critical range and the corresponding composite machine score changes to a critical range, the display of the system-level dashboard 200 may be updated dynamically such that the plot 218, the composite machine score 216 and the change value 220 for of entry 212 for the EXSRV.115 server are updated to reflect the new composite machine score; Paragraph Number [0083] teaches determining machine status (block 708) can include determining a machine status (e.g., "OK", "moderate", or "critical") based on the composite machine score and the threshold for the composite machine score defined by the machine level definitions.; Paragraph Number [0047] teaches they may be calculated or selected individually by a user, for example, based on recommended thresholds. For example, the Memory Available may have a first (critical) threshold at 30 MB (24%) (e.g., a metric score of 24) and a second (moderate) threshold at 100 MB (80%) (e.g., a metric score of 90). A metric score above 90 may indicate that the Memory Available is in an "acceptable" or "OK" state, a metric score from 24-89 may indicate that the Memory Available is in an "moderate" state, and metric score below 24 may indicate that the Memory Available is in a "critical" state.... The traffic light 238 for a threshold event may correspond to the status of the corresponding machine metric as a result of the event.
For example, the traffic light 238 for the EXSRV.123/ActiveSyncService event 230 may be red in color because the event included a transition to a "critical" status (e.g., memory below 20%)). along with one or more first service cases related to at least one building asset associated with the parent object having the Occupant Health Performance Indicators below a threshold (Paragraph Number [0041] teaches the listing of recent events 210 may include performance information 204 relating to events for the listed machines that can be indicative of a machine's performance (e.g., transitions of metric scores across a threshold value, from one state to another).; Paragraph Number [0058] teaches if a metric Memory Available metric for the EXSRV.115 server transitions from acceptable range into a critical range and the corresponding composite machine score changes to a critical range, the display of the system-level dashboard 200 may be updated dynamically such that the plot 218, the composite machine score 216 and the change value 220 for of entry 212 for the EXSRV.115 server are updated to reflect the new composite machine score; Paragraph Number [0083] teaches determining machine status (block 708) can include determining a machine status (e.g., "OK", "moderate", or "critical") based on the composite machine score and the threshold for the composite machine score defined by the machine level definitions. (See Vitullo Paragraph Number [0263] in regard to hierarchy and relationships between parent and child components specifically where it teaches when meter tab 3402 is selected, a user can expand the hierarchy 3404 shown in navigation pane 1902)). 
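The score-banding scheme the rejection quotes repeatedly from Hall's paragraph [0047] can be sketched briefly. This is an illustrative reading only, not Hall's implementation; the handling of the exact boundary values 24 and 90 is an assumption, since the quoted passage states only the three ranges (above 90 acceptable, 24-89 moderate, below 24 critical).

```python
def metric_status(score: float) -> str:
    """Band a metric score per the thresholds quoted from Hall [0047].
    Placing the boundary scores 24 and 90 as shown is an assumption."""
    if score > 90:
        return "OK"        # "acceptable" range: scores above 90
    if score >= 24:
        return "moderate"  # scores from 24-89
    return "critical"      # scores below 24

# Memory Available example from the quoted passage: the moderate
# threshold sits at 100 MB (score 90), the critical one at 30 MB (score 24).
print(metric_status(95), metric_status(90), metric_status(10))
# prints: OK moderate critical
```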
the cloud server executing identification of a root cause service case from one or more second service cases associated with the one or more child objects causing degradation in the Occupant Health Performance Indicator of the parent object (Paragraph Number [0146] teaches the SPLUNK® ENTERPRISE platform provides various schemas, dashboards and visualizations that make it easy for developers to create applications to provide additional capabilities. One such application is the SPLUNK® APP FOR ENTERPRISE SECURITY, which performs monitoring and alerting operations and includes analytics to facilitate identifying both known and unknown security threats based on large volumes of data stored by the SPLUNK® ENTERPRISE system. This differs significantly from conventional Security Information and Event Management (SIEM) systems that lack the infrastructure to effectively store and analyze large volumes of security-related event data. Traditional SIEM systems typically use fixed schemas to extract data from pre-defined security-related fields at data ingestion time, wherein the extracted data is typically stored in a relational database. This data extraction process (and associated reduction in data size) that occurs at data ingestion time inevitably hampers future incident investigations, when all of the original data may be needed to determine the root cause of a security issue, or to detect the tiny fingerprints of an impending security threat. Paragraph Number [0152] teaches the SPLUNK® ENTERPRISE platform provides various features that make it easy for developers to create various applications. One such application is the SPLUNK® APP FOR VMWARE®, which performs monitoring operations and includes analytics to facilitate diagnosing the root cause of performance problems in a data center based on large volumes of data stored by the SPLUNK® ENTERPRISE system. 
Paragraph Number [0083] teaches determining machine status (block 708) can include determining a machine status (e.g., "OK", "moderate", or "critical") based on the composite machine score and the threshold for the composite machine score defined by the machine level definitions (See also Paragraph Number [0047])). wherein the root cause service case is associated with a fault that negatively impacts the Occupant Health Performance Indicators (Paragraph Number [0083] teaches determining machine status (block 708) can include determining a machine status (e.g., "OK", "moderate", or "critical") based on the composite machine score and the threshold for the composite machine score defined by the machine level definitions (See also Paragraph Number [0047]). Paragraph Number [0306] teaches baseline comparison module 5212 can compare timeseries for an entire facility, a particular building, space, room, zone, meter (both physical meters and virtual meters), or any other level at which timeseries data can be collected, stored, or aggregated. Paragraph Number [0056] teaches a first-upper dashed line corresponding to a moderate threshold value e.g., “80”), and a second-lower dashed line corresponding to a critical threshold value (e.g., “40”). The threshold markers (or transition markers) 404 can include dots (or other graphical symbols) that signify a location on the plot line 402 when the value was equal to or crossed a threshold value (e.g., where the value transitions from one status range to another status range). For example, in the illustrated embodiment, a first threshold marker 404A may be located at a point on the plot line 402 where the plotted value crosses the moderate threshold value (e.g., “80”), thereby transitioning from a first value range (e.g., an acceptable status or score range) into a second value range (e.g., a moderate status or score range). 
The second threshold marker 404B may be located at a point on the plot line 402 where the plotted value crosses the critical threshold value (e.g., “40”), thereby transition). along with the one or more second service cases related to the at least one building asset associated with each of the one or more child objects having the Occupant Health Performance Indicators below the threshold (Paragraph Number [0041] teaches the listing of recent events 210 may include performance information 204 relating to events for the listed machines that can be indicative of a machine's performance (e.g., transitions of metric scores across a threshold value, from one state to another).; Paragraph Number [0058] teaches if a metric Memory Available metric for the EXSRV.115 server transitions from acceptable range into a critical range and the corresponding composite machine score changes to a critical range, the display of the system-level dashboard 200 may be updated dynamically such that the plot 218, the composite machine score 216 and the change value 220 for of entry 212 for the EXSRV.115 server are updated to reflect the new composite machine score; Paragraph Number [0083] teaches determining machine status (block 708) can include determining a machine status (e.g., "OK", "moderate", or "critical") based on the composite machine score and the threshold for the composite machine score defined by the machine level definitions. (See Vitullo Paragraph Number [0263] in regard to hierarchy and relationships between parent and child components specifically where it teaches when meter tab 3402 is selected, a user can expand the hierarchy 3404 shown in navigation pane 1902)). 
wherein at least one service case of the one or more second service cases relates to a root cause identified as a negative impact on the Occupant Health Performance Indicator (Paragraph Number [0146] teaches the SPLUNK® ENTERPRISE platform provides various schemas, dashboards and visualizations that make it easy for developers to create applications to provide additional capabilities. One such application is the SPLUNK® APP FOR ENTERPRISE SECURITY, which performs monitoring and alerting operations and includes analytics to facilitate identifying both known and unknown security threats based on large volumes of data stored by the SPLUNK® ENTERPRISE system. This differs significantly from conventional Security Information and Event Management (SIEM) systems that lack the infrastructure to effectively store and analyze large volumes of security-related event data. Traditional SIEM systems typically use fixed schemas to extract data from pre-defined security-related fields at data ingestion time, wherein the extracted data is typically stored in a relational database. This data extraction process (and associated reduction in data size) that occurs at data ingestion time inevitably hampers future incident investigations, when all of the original data may be needed to determine the root cause of a security issue, or to detect the tiny fingerprints of an impending security threat. Paragraph Number [0152] teaches the SPLUNK® ENTERPRISE platform provides various features that make it easy for developers to create various applications. One such application is the SPLUNK® APP FOR VMWARE®, which performs monitoring operations and includes analytics to facilitate diagnosing the root cause of performance problems in a data center based on large volumes of data stored by the SPLUNK® ENTERPRISE system. 
Paragraph Number [0083] teaches determining machine status (block 708) can include determining a machine status (e.g., "OK", "moderate", or "critical") based on the composite machine score and the threshold for the composite machine score defined by the machine level definitions (See also Paragraph Number [0047])). Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to have modified Vitullo's system and method for building performance analysis and hierarchical visualization dashboard that includes fault detection and display, as described above, to further clearly include displaying service cases/faults with respect to the hierarchical level(s) they are determined to have an impact on in view of Hall in order to provide users with the ability to easily and quickly view and identify components affected by a problem and determine the root cause for effective d

Prosecution Timeline

Sep 12, 2022
Application Filed
Sep 17, 2024
Non-Final Rejection — §101, §103
Dec 16, 2024
Response Filed
Jan 15, 2025
Final Rejection — §101, §103
Mar 20, 2025
Response after Non-Final Action
Apr 18, 2025
Request for Continued Examination
Apr 21, 2025
Response after Non-Final Action
Apr 25, 2025
Non-Final Rejection — §101, §103
Jul 29, 2025
Response Filed
Aug 29, 2025
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12572889
Optimization of Large-scale Industrial Value Chains
2y 5m to grant Granted Mar 10, 2026
Patent 12503000
OPTIMIZATION PROCEDURE FOR THE ENERGY MANAGEMENT OF A SOLAR ENERGY INSTALLATION WITH STORAGE MEANS IN COMBINATION WITH THE CHARGING OF AN ELECTRIC VEHICLE AND SYSTEM
2y 5m to grant Granted Dec 23, 2025
Patent 12493860
WASTE MANAGEMENT SYSTEM AND METHOD
2y 5m to grant Granted Dec 09, 2025
Patent 12482011
FAMILIARITY DEGREE ESTIMATION APPARATUS, FAMILIARITY DEGREE ESTIMATION METHOD, AND RECORDING MEDIUM
2y 5m to grant Granted Nov 25, 2025
Patent 12450574
METHOD FOR WASTE MANAGEMENT UTILIZING ARTIFICAL NEURAL NETWORK SYSTEM
2y 5m to grant Granted Oct 21, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
23%
Grant Probability
46%
With Interview (+23.4%)
4y 1m
Median Time to Grant
High
PTA Risk
Based on 367 resolved cases by this examiner. Grant probability derived from career allow rate.
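The headline percentages are consistent with the stated inputs. As a quick check (modeling the 46% figure as the career allow rate plus the interview lift in percentage points is an assumption about how the dashboard combines them):

```python
granted, resolved = 83, 367            # career counts shown above
allow_rate = granted / resolved        # ~0.226, displayed as 23%
interview_lift = 0.234                 # +23.4 points with an interview
with_interview = allow_rate + interview_lift

print(round(allow_rate * 100))         # 23
print(round(with_interview * 100))     # 46
```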
