DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/9/2025 has been entered.
Claims 1-2, 4, 6, 10-11, 13-14, 17 and 20 are presently amended.
Claim 16 is cancelled.
Claims 1-15 and 17-20 are pending.
Response to Amendment
Applicant’s amendments are acknowledged.
Response to Arguments
Applicant's arguments filed 12/9/2025 have been fully considered in view of statutory law, Office policy, precedential case law, and the cited prior art as necessitated by the amendments to the claims, and are persuasive in part for the reasons set forth below.
Claim Objections
First, Applicant argues that “Claims 1-10 stand objected to as allegedly indefinite for use of "first machine learning model" and "the machine learning model" in claim 1. Claim 1 has been amended for clarity, as have numerous dependent claims” [Arguments, page 9].
In response, Applicant’s arguments have been considered and are persuasive. Examiner observes that the presently amended claims overcome the previously noted claim objections.
35 USC § 101 Rejections
First, Applicant argues that “The instant claims include patentable subject matter under Diamond v. Diehr and the associated USPTO guidance. In Diehr, the Supreme Court found claims directed to automated control of a rubber press machine patent eligible…
The instant claims are comparable to Diehr in that allegedly excluded subject matter (a mathematical equation in Diehr; collection and analysis of data from outside of a physical building in the instant claims) is applied to improve the timing and efficiency of a machine (a rubber press in Diehr; an HVAC system in the instant claims).
Applicant notes that the USPTO guidance on Diamond v. Diehr includes a hypothetical example claim 2 that is patent eligible, according to the USPTO, even though the claim does not require the rubber press machine itself… Based on this example, it is clear that claiming the controlled machine itself is not necessary for subject matter eligibility. Instead, as the USPTO's subject matter eligibility guidance explains:
The totality of the steps governed by the claimed instructions provides software that improves another technical field, specifically the field of precision rubber molding, through controlling the operation of the mold by initiating a signal to control the press to open when the comparison indicates equivalence and the molded product is cured. This software enhances the ability of a specific rubber molding device to open the press at the optimal time for curing the rubber therein. This process does not merely link the Arrhenius equation to a technical field, but adds meaningful limitations on the use of the mathematical relationship by specifying the types of variables used (temperature and time), how they are selected (their relationship to the reaction time), how the process uses the variables in rubber molding, and how the result is employed to improve the operation of the press…
Similar logic applies to the instant claims. The totality of the operations in the independent claims provides software that improves another technical field, specifically the field of heating and cooling a building, through controlling the operation of the HVAC system for that building.
Accordingly, Applicant respectfully submits that the claims include patentable subject matter and meet the requirements of 35 U.S.C. § 101.” [Arguments, pages 9-11].
In response, Applicant’s arguments are considered but are not persuasive. Examiner respectfully disagrees and maintains that the present claims recite a judicial exception without significantly more. First, with respect to Diamond v. Diehr, Examiner observes that the Court found that the claimed process improves upon conventional molding processes by constantly measuring the actual temperature inside the mold using a thermocouple, and automatically feeding these temperature measurements into a standard digital computer that repeatedly recalculates the cure time by use of the Arrhenius equation.
Similarly, with regard to the hypothetical example claim 2 of Diamond v. Diehr, the totality of the steps governed by the claimed instructions provides software that improves another technical field, specifically the field of precision rubber molding, through controlling the operation of the mold by initiating a signal to control the press to open when the comparison indicates equivalence and the molded product is cured.
In contrast to Diamond v. Diehr, wherein the above-cited claims demonstrate meaningful limitations on the use of the mathematical relationship by specifying particular steps and variables in the improved process, Examiner respectfully maintains that the present invention recites generic elements (e.g., ‘a physical resource’), undefined datasets (e.g., ‘various sets of feature values’) and nondescript machine learning models to operate zones of an HVAC system. Thus, Examiner respectfully maintains that the claims do not recite additional elements, either individually or in an ordered combination, at the level of specificity required for the claims to be considered more than a drafting effort designed to monopolize the judicial exception. Accordingly, the present claims recite a judicial exception without significantly more. As such, Examiner remains unpersuaded.
35 USC § 103 Rejections
First, Applicant argues that “Applicant traverses the rejections because the cited references do not teach or suggest each and every element of the claims… Applicant does not find in the cited portions of Konrad any teaching of configuring a first HVAC zone to operate and a second HVAC zone to not operate. To the contrary, the cited aspects of Konrad explicitly teach a "modular architecture enabling the addition of new units and seamlessly fusing their occupancy estimates with existing ones, thereby expanding coverage," i.e., that additional units in a building are added into a single overall coverage plan instead of separate zones, as claimed.
Elias and Albonesi do not cure the deficiencies of Konrad.
Accordingly, Applicant respectfully submits that all claims are patentable over the cited references” [Arguments, pages 12-13].
In response, Applicant’s arguments have been considered but are not persuasive. Examiner respectfully disagrees and maintains that the art of record renders the above-argued claim limitation obvious. In particular, Examiner directs the Applicant to (Konrad, ¶ 80, the data determined by OSSY can be used to adjust and/or turn off lighting in different spaces where OSSY sensing systems are installed. For example, in a large exhibit or meeting rooms, areas unoccupied can have the lights turned off or down. Similarly, in areas occupied, the lighting levels can be increased. In such applications, OSSY may send not only occupant counts, but information as to location of occupants to a lighting control system that would use the spatial information to adjust lighting levels. Different lighting strategies, e.g., levels/intensities, can be pre-programmed), (Id., ¶ 19, The remaining description elaborates primarily certain structural and functional details of components involved in occupancy estimation, i.e., the sensors 16 and local-area controller 14. In typical applications today, systems are limited to a binary occupied/unoccupied decision and operation. While such operation is an improvement over older systems by reducing idle ventilation, the system described herein can extend energy savings by delivering a more fine-grained air volume control over a range of room sizes, achieving greater efficiency without sacrificing ventilation quality), and to (Id., ¶ 50, Fusion systems 1, 2 and 3 make use of parametric or non-parametric, linear or non-linear systems. Fusion systems 1, 2 and 3 take into account the rate at which the number of occupants in zone is changing, specifically whether it is changing rapidly (transient state) or sporadically (quasi steady state), and accordingly diminishing the influence of the boundary count or interior count, respectively, towards the estimation of the total number of occupants. [0051] 5. 
A system to continuously maximize energy savings for zone based on zone type or on current, recent, or historical estimates of number of occupants in zone while simultaneously not exceeding a maximum failure rate which can be specified. This is accomplished by scaling the estimate of the number of occupants in zone at each time instant by an overestimation factor greater than or equal to one based on zone type or current, recent, or historical estimates of number of occupants in zone).
Here, Konrad discloses turning lighting zones on and off based in part on occupancy values. Konrad further discloses that contemporary systems at the time the invention was filed used binary decisions for HVAC operation, similar to the above-argued limitation of the presently amended claims. Thus, Examiner respectfully maintains that the art of record renders the presently amended claims obvious. As such, Examiner remains unpersuaded.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-15 and 17-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Step 1: Claims 1-15 and 17-20 are directed to statutory categories, namely a process (claims 1-10), a machine (claims 11-15) and an article of manufacture (claims 17-20).
Step 2A, Prong 1: Claims 1, 11 and 17 in part, recite the following abstract idea:
…A … method, comprising: obtaining, …, a first dataset indicative of a movement of objects associated with a first time period; extracting, …, a first set of feature values based on the first dataset; determining, with …, a density of the moving objects based on the first set of feature values; determining, with … a physical resource allocation associated with …based on the first set of feature values and a reference dataset; and dynamically configuring, …usage of one or more physical resources associated with …based on the physical resource allocation; wherein the moving objects of the first dataset are external to the physical building space; wherein … and wherein dynamically configuring the usage of the one or more physical resources comprises dynamically configuring a first zone of… to operate and a second zone of… to not operate [Claim 1],
…obtain a first dataset indicative of a movement of objects associated with a first time period from…; extract, …, a first set of feature values based on the first dataset; determine, with… a density of the moving objects based on the first set of feature values; determine a physical resource allocation based on the first set of feature values and a reference dataset; determine, …, a dynamic configuration of one or more physical resources associated with …based on the physical resource allocation; and dynamically configure usage of the one or more physical resources associated with …based on the physical resource allocation; wherein the moving objects of the first dataset are external to the physical building space; wherein… and wherein dynamically configuring the usage of the one or more physical resources comprises dynamically configuring a first zone of… to operate and a second zone of… to not operate [Claim 11],
…obtain a first dataset indicative of a movement of objects associated with a first time period from…; extract, …, a first set of feature values based on the first dataset; determine a density of the moving objects based on the first set of feature values; determine a physical resource allocation based on the first set of feature values and a reference dataset; determine, …, a dynamic configuration of one or more physical resources associated with …based on the physical resource allocation; and dynamically configuring usage of the one or more physical resources associated with …based on the physical resource allocation; wherein the moving objects of the first dataset are external to the physical building space; wherein… and wherein dynamically configuring the usage of the one or more physical resources comprises dynamically configuring a first zone of… to operate and a second zone of… to not operate [Claim 17].
These concepts are not meaningfully different than the following concepts identified by the MPEP:
Concepts relating to certain methods of organizing human activity. The aforementioned limitations describe fundamental economic principles or practices. Specifically, configuring the usage of resources based on resource allocations is considered to describe a fundamental economic practice. As such, claims 1, 11 and 17 recite concepts identified as abstract ideas.
Dependent claims 2-10, 12-15 and 18-20 recite limitations relative to the independent claims, including, for example:
…wherein… the method further comprising: determining… the dynamic configuration of one or more physical resources … based on the physical resource allocation [Claim 2],
…wherein the dynamic configuration of the one or more physical resources further comprises: allocating a plurality of workstations … based on a ranking of each workstation of the plurality of workstations [Claim 3],
…obtaining… data indicative of the movement of objects associated with a second time period; extracting… a second set of feature values based on the data indicative of the movement of objects associated with the second time period; obtaining, … data representative of the physical resource allocation based on the second time period; determining, …, a reference dataset based on the second set of feature values and the physical resource allocation data; and … based on the reference dataset [Claim 4],
…wherein the first dataset comprises one or more images captured… [Claim 5],
…wherein extracting the first set of feature values from the one or more images further comprises: applying, …, one or more computer vision techniques to the one or more images, identifying, …, one or more pixel groups in the one or more images, determining, …, a first set of characteristics based on the one or more images and associating the first set of characteristics to the one or more pixel groups, and deriving, …, a traffic density based on the first set of characteristics and the first set of feature values [Claim 6].
The limitations of these dependent claims are merely narrowing the abstract idea identified in the independent claims, and thus, the dependent claims also recite abstract ideas.
Step 2A, Prong 2: This judicial exception is not integrated into a practical application. In particular, claims 1, 11 and 17 only recite the following additional elements –
… computer-implemented… by a computer system from a first computing device…; … by the computer system and via execution of a machine learning model…; …the machine learning model of the computer system…; … with the machine learning model of the computer system… a physical building space; … by the computer system… the physical building space…; …the physical building space; wherein the physical building space comprises a HVAC system… …the HVAC system… the HVAC system… [Claim 1],
…A system comprising: one or more processors; and a non-transitory computer readable medium having stored thereon instructions that are executable by the one or more processors to cause the system to perform operations comprising… a first computing device; … via execution of a first machine learning model…; …the first machine learning model...; …via execution of a second machine learning model… ; … the physical building space…; …the physical building space; wherein the physical building space comprises a HVAC system… …the HVAC system… the HVAC system… [Claim 11],
…A computer program product embodied on one or more non-transitory computer readable media having stored thereon instructions that are executable by one or more processors to cause the computer program product to perform operations comprising… a first computing device; … via execution of a first machine learning model…; …via execution of a second machine learning model… a physical building space…; the physical building space; …the physical building space; wherein the physical building space comprises a HVAC system… …the HVAC system… the HVAC system… [Claim 17].
The dependent claims recite the following new additional elements –
…the machine learning model is a first machine learning model… by the computer system and via execution of a second machine learning model… [Claim 2],
…one or more recording devices... [Claim 5].
The HVAC system, machine learning models and executable instructions are recited at a high level of generality (see MPEP § 2106.05(a)), like the following MPEP example:
iii. Gathering and analyzing information using conventional techniques and displaying the result, TLI Communications, 823 F.3d at 612-13, 118 USPQ2d at 1747-48;
Furthermore, the computer-implemented element is considered to amount to no more than mere instructions to apply the exception using a generic computer component (see MPEP § 2106.05(f)), like the following MPEP example:
i. A commonplace business method or mathematical algorithm being applied on a general purpose computer, Alice Corp. Pty. Ltd. v. CLS Bank Int’l, 573 U.S. 208, 223, 110 USPQ2d 1976, 1983 (2014); Gottschalk v. Benson, 409 U.S. 63, 64, 175 USPQ 673, 674 (1972); Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015);
Accordingly, these additional elements do not integrate the abstract idea into a practical application.
The remaining dependent claims do not recite any new additional elements, and thus do not integrate the abstract idea into a practical application.
Step 2B: Claims 1, 11 and 17 and their underlying limitations, steps, features and terms, considered both individually and as a whole, do not include additional elements that are sufficient to amount to significantly more than the judicial exception for the following reasons:
Independent claims 1, 11 and 17 only recite the following additional elements –
… computer-implemented… by a computer system from a first computing device…; … by the computer system and via execution of a machine learning model…; …the machine learning model of the computer system…; … with the machine learning model of the computer system… a physical building space; … by the computer system… the physical building space…; …the physical building space; wherein the physical building space comprises a HVAC system… …the HVAC system… the HVAC system… [Claim 1],
…A system comprising: one or more processors; and a non-transitory computer readable medium having stored thereon instructions that are executable by the one or more processors to cause the system to perform operations comprising… a first computing device; … via execution of a first machine learning model…; …the first machine learning model...; …via execution of a second machine learning model… ; … the physical building space…; …the physical building space; wherein the physical building space comprises a HVAC system… …the HVAC system… the HVAC system… [Claim 11],
…A computer program product embodied on one or more non-transitory computer readable media having stored thereon instructions that are executable by one or more processors to cause the computer program product to perform operations comprising… a first computing device; … via execution of a first machine learning model…; …via execution of a second machine learning model… a physical building space…; the physical building space; …the physical building space; wherein the physical building space comprises a HVAC system… …the HVAC system… the HVAC system… [Claim 17].
These elements do not amount to significantly more than the abstract idea for the reasons discussed in Step 2A, Prong 2 with regard to MPEP § 2106.05(a) and MPEP § 2106.05(f). Because the additional elements fail to integrate the abstract idea into a practical application under Step 2A, Prong 2, they likewise fail to amount to an inventive concept that is significantly more than the abstract idea here under Step 2B.
As such, whether considered individually or in combination, these limitations do not add significantly more to the judicial exception.
The remaining dependent claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the dependent claims do not recite any new additional elements other than those mentioned in the independent claims, which amount to no more than mere instructions to apply the exception using a generic computer component (see MPEP § 2106.05(f)). As such, these claims are not patent eligible.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Claims 1-8, 11-14 and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Konrad et al., U.S. Publication No. 2019/0360714 [hereinafter Konrad] in view of Elias et al., WO 2021/007293 [hereinafter Elias].
Regarding Claim 1, Konrad discloses … A computer-implemented method, comprising: obtaining, by a computer system from a first computing device, a first dataset indicative of a movement of objects associated with a first time period (Konrad, ¶ 30, Assuming that the system operates based on a given rate of occupancy estimation, such as once per minute for example, the SCNs 42 aggregate data and respond within such a time period. The sensing units preferably acquire data at a rate compatible with occupancy variations (cameras 32) or body speed (door sensors 34) (discloses obtaining movement data associated with a time period) to minimize the potential for aliasing. Since cameras 32 are responsible for steady-state occupancy data, a frame rate of about 1 Hz should be adequate. A 3.0 MPixel panoramic camera typically produces a bit rate of about 10 Mb/s for high-quality 30 Hz video using H.264/AVC compression, but this rate would drop to about 330 Kb/s at 1 Hz. Multiple cameras can be easily supported by WiFi or wired Ethernet (CATS in legacy and CAT6 in new buildings). Use of PoE, providing DC power, can additionally reduce installation costs, and is supported by CATS wiring. To assure accurate ingress/egress detection, door sensors 34 preferably sample at 10-20 Hz, but at 16×4 resolution this would result in no more than 40 Kb/s of uncompressed data rate. This rate is compatible with lower-rate communications connections such as ZigBee, although it may be preferred to use WiFi or wired Ethernet for commonality with the cameras 32), (Id., ¶ 34, To estimate the (quasi) steady-state occupancy, in one example panoramic, overhead, high-resolution, low-cost CMOS cameras are used, which provide a wide field of view with minimal occlusions, while also being widely available and relatively inexpensive. OSSY preferably employs accurate, real-time (at HVAC time scale) algorithms for occupant counting using panoramic video frames. 
A fundamental block in many occupancy sensing algorithms is change detection (also referred to as background subtraction), which identifies areas of a video frame that have changed in relation to some background model, e.g., view of an empty room), (Id., ¶ 22, FIG. 4 shows a local area 10 with a more structural focus, including a variable air volume (VAV) box 40 as an example of local-area equipment 12 (FIG. 1), and a shared computing node (SCN) 42 as an example of a local-area controller 14. Also shown is a separate building automation system (BAS) 44 as an example of a central controller 26 (also FIG. 1), and communications connections 46 between the SCN 42 and the sensors 32, 34 as well as the BAS 44. The connections 46 may be realized in various ways including as wireless connections (e.g., WiFi) and/or wired connections such as a Ethernet, either powered (PoE) or unpowered. The system may be realized as a standalone system (i.e., not connected to an external network or “cloud”), with one or more SCNs 42 providing data processing and fusion for multiple venues in the same control zone);
[media_image1.png: greyscale figure reproduced from Konrad]
extracting, by the computer system and via execution of a machine learning model, a first set of feature values based on the first dataset (Id., ¶ 4, sensing and control apparatus are disclosed for use in an HVAC system of a building. The apparatus includes a plurality of sensors including interior sensors and boundary sensors, the sensors generating respective sensor signals conveying occupancy-related features (discloses extracting features based on the movement dataset) for an area of the building. In one example the sensors include cameras in interior areas and low-resolution thermal sensors at ingress/egress points. The occupancy-related features may be specific aspects of camera images, or signal levels from the thermal sensors, that can be processed to arrive at an estimate of occupancy. The apparatus further includes a controller configured and operative in response to the sensor signals to produce an occupancy estimate for the area and to generate equipment-control signals to cause the HVAC system to supply conditioned air to the area based on the occupancy estimate. The controller generally includes one or more fusion systems collectively generating the occupancy estimate by corresponding fusion calculations, the fusion systems including a first fusion system producing a boundary occupancy-count change based on sensor signals from the boundary sensors, a second fusion system producing an interior occupancy count based on sensor signals from the interior sensors, and a third fusion system producing the occupancy estimate based on one or both of the boundary occupancy-count change and the interior occupancy count. Fusion may be of one or multiple types including cross-modality fusion across different sensor types, within-modality fusion across different instances of same-type sensors, and cross-algorithm fusion using different algorithms to generates respective estimates for the same sensor(s). 
Use of the occupancy sensing system can help to deliver desired occupancy-sensitive performance of the HVAC system, specifically the attainment of a desired energy savings without an undue incidence of undesirable under-ventilation), (Id., ¶ 32, at the time of commissioning a system the camera/sensor installation height needs to be provided. Alternatively, a precise calibration pattern can be placed directly under a camera/sensor and a self-calibration operation is performed. With the installation height known, a corresponding pixel-to-density map can be used algorithmically to provide accurate occupancy estimates. The system may employ data-driven or “machine-learning” methods which can provide robustness against real-world variability without a physical model, but it is preferred that such methods be kept simple and not require re-training in new environments. In some cases machine learning is used only for offline training of counting and fusion algorithms, and only a system is fine-tuned in real time through self-calibration to a new environment by setting certain global parameters, e.g., room height, spacing between units, etc), (Id., ¶ 37, Two known approaches to estimating occupancy level are (1) detecting and then counting human bodies, and (2) estimating number based on detected changes in a camera field of view (FOV). Recent occupancy sensing methods via human-body counting include: full-body detection using Haar features and ADABOOST, head counting using Harr or HOG (Histogram of Gradients) features and SVM classification (discloses extracting features of the movement dataset using machine learning), and head counting using Convolutional Neural Networks (CNNs). These methods show great robustness to variations in body size and orientation. Shallow CNNs may suffice (for body/non-body binary output) and could run on a low-power mobile platform. 
As for crowd-density estimation, algorithms are known that are based on image gradient changes followed by SVM, full-image CNNs, and a wealth of approaches at pixel, texture or object level);
determining, with the machine learning model of the computer system, a density of the moving objects based on the first set of feature values (Id., ¶ 31, In the case of crowd density estimation from a panoramic camera, every pixel contributes in some proportion to a body count but this proportion is dependent on pixel location on the sensor (e.g., a pixel in the middle of a sensor, parallel to room's floor, will occupy a smaller fraction of human head, than a pixel at sensor's periphery, due to lens properties). However, the knowledge of intrinsic camera parameters, such as sensor size and resolution, focal length, lens diameter and barrel distortion, can be used to establish a relationship between pixel location and its contribution to crowd density (pixel-to-density mapping), very much like in methods to de-warp a fisheye image for visualization. Alternatively, a pixel-to-density mapping can be obtained experimentally in a room of maximum permissible size for various installation heights and camera models, and stored in a look-up table to use during deployment, thus making a crowd density estimation algorithm agnostic to camera installation height and room size. A similar mapping can be obtained for LR thermal sensors (both “tripwire” and room-view). Additionally, some thermal sensors such as Melexis sensors are available with different lenses (40°, 60°, 120° FOVs) allowing to match them to different combinations of room height and door width), (Id., ¶ 32, at the time of commissioning a system the camera/sensor installation height needs to be provided. Alternatively, a precise calibration pattern can be placed directly under a camera/sensor and a self-calibration operation is performed. With the installation height known, a corresponding pixel-to-density map can be used algorithmically to provide accurate occupancy estimates. 
The system may employ data-driven or “machine-learning” methods which can provide robustness against real-world variability without a physical model, but it is preferred that such methods be kept simple and not require re-training in new environments. In some cases machine learning is used only for offline training of counting and fusion algorithms, and only a system is fine-tuned in real time through self-calibration to a new environment by setting certain global parameters, e.g., room height, spacing between units, etc), (Id., ¶ 37, Two known approaches to estimating occupancy level are (1) detecting and then counting human bodies, and (2) estimating number based on detected changes in a camera field of view (FOV). Recent occupancy sensing methods via human-body counting include: full-body detection using Haar features and ADABOOST, head counting using Harr or HOG (Histogram of Gradients) features and SVM classification, and head counting using Convolutional Neural Networks (CNNs). These methods show great robustness to variations in body size and orientation. Shallow CNNs may suffice (for body/non-body binary output) and could run on a low-power mobile platform. As for crowd-density estimation, algorithms are known that are based on image gradient changes followed by SVM, full-image CNNs, and a wealth of approaches at pixel, texture or object level);
determining, with the machine learning model of the computer system, a physical resource allocation associated with a physical building space based on the first set of feature values and a reference dataset (Id., ¶ 13, An Occupancy Sensing SYstem (OSSY) generates an estimate of the number of occupants in an area of a building, and uses the estimate for system purposes such as adjusting a rate of ventilation air flow to be tailored for the estimated occupancy. In some applications the building may be a commercial venue and include for example offices, conference rooms, large classrooms or conference rooms, and very large colloquium rooms. The system may be used with a variety of other building times. The system is inherently scalable to support a wide range of room sizes, from small offices to large meeting halls. This is a byproduct of a modular architecture enabling the addition of new units and seamlessly fusing their occupancy estimates with existing ones, thereby expanding coverage. The system can deliver robust performance by fusing information from multiple sensor modalities (e.g., wide-area, overhead sensing using panoramic cameras and local, entryway sensing using low-resolution thermal sensors) and from different algorithms (e.g., body counting versus crowd-density estimation). The system can be privacy-adaptive, using entryway sensors that collect only low-resolution, thermal data, facilitating deployment in bathrooms, changing rooms, etc. It may also be cost-effective by minimizing the number of sensors needed and, therefore, the cost of installation), (Id., ¶ 32, at the time of commissioning a system the camera/sensor installation height needs to be provided. Alternatively, a precise calibration pattern can be placed directly under a camera/sensor and a self-calibration operation is performed. With the installation height known, a corresponding pixel-to-density map can be used algorithmically to provide accurate occupancy estimates. 
The system may employ data-driven or “machine-learning” methods (discloses machine learning) which can provide robustness against real-world variability without a physical model, but it is preferred that such methods be kept simple and not require re-training in new environments. In some cases machine learning is used only for offline training of counting and fusion algorithms, and only a system is fine-tuned in real time through self-calibration to a new environment by setting certain global parameters, e.g., room height, spacing between units, etc), (Id., ¶ 20, FIG. 2 illustrates an aspect of the disclosed approach that can facilitate system scalability while supporting multiple occupancy-sensing modalities, for an area shown as a “unit volume” 30 such as a room. Two distinct types of sensor nodes may deployed in various combinations: interior sensors such as high-resolution (HR) panoramic overhead cameras 32 for wide-area monitoring, and boundary sensors such as low-resolution (LR) thermal sensors 34 located at doorways for ingress/egress detection. The use of panoramic cameras 32 can help minimize the number of sensors needed, thus reducing installation costs while still supporting scalability to large-size venues. The door sensors 34 may serve several roles. First, they provide transient phase data for fusion (discloses reference dataset) with steady-state occupancy data from the overhead cameras 32 or other interior sensors when used. For this purpose, in some cases a door sensor 34 may be as simple as a “tripwire”, shown as a “T Door Sensor 36”, that only detects ingress/egress. Such a tripwire sensor 36 may employ low-resolution (LR) thermal sensing for example. 
Secondly, in small-venue scenarios where panoramic cameras 32 are not used, the door sensors 34 may be realized as TRV door sensors 38 equipped with both an LR thermal “tripwire” (pointing down at the door opening) and an LR “room view” thermal array pointed into the room, for determining both transient and steady-state phase of occupancy. Additionally, if the door sensors 34 collect only LR thermal data, they are generally suitable for privacy-sensitive areas such as restrooms etc.), (Id., ¶ 4, sensing and control apparatus are disclosed for use in an HVAC system of a building. The apparatus includes a plurality of sensors including interior sensors and boundary sensors, the sensors generating respective sensor signals conveying occupancy-related features (discloses extracting features based on the movement dataset) for an area of the building. In one example the sensors include cameras in interior areas and low-resolution thermal sensors at ingress/egress points. The occupancy-related features may be specific aspects of camera images, or signal levels from the thermal sensors, that can be processed to arrive at an estimate of occupancy), (Id., ¶ 24, the local equipment controller 62 may convert the occupancy estimate 66 into a corresponding fraction of maximum occupancy, and control airflow accordingly. Thus if the occupancy is at 50% of maximum, for example, the local-area airflow is adjusted to 50% of maximum airflow. As previously indicated, the local equipment controller 62 may also communicate with the central controller 26 in support of broader system-level control);
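For illustration only, the proportional airflow control described in Konrad ¶ 24 (occupancy at 50% of maximum yields 50% of maximum airflow) reduces to a simple clamped linear scaling; the function name and example figures are assumptions, not taken from the reference:

```python
def airflow_setpoint(occupancy_estimate, max_occupancy, max_airflow_cfm):
    """Convert an occupancy estimate into a local-area airflow command:
    occupancy at 50% of maximum yields 50% of maximum airflow."""
    # Clamp the occupied fraction to [0, 1] before scaling.
    fraction = min(max(occupancy_estimate / max_occupancy, 0.0), 1.0)
    return fraction * max_airflow_cfm

# e.g., 25 of 50 occupants with a hypothetical 2000 CFM maximum
setpoint = airflow_setpoint(25, 50, 2000)
```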
and dynamically configuring, by the computer system, usage of one or more physical resources associated with the physical building space based on the physical resource allocation (Id., ¶ 34, To estimate the (quasi) steady-state occupancy, in one example panoramic, overhead, high-resolution, low-cost CMOS cameras are used, which provide a wide field of view with minimal occlusions, while also being widely available and relatively inexpensive. OSSY preferably employs accurate, real-time (at HVAC time scale) algorithms for occupant counting using panoramic video frames. A fundamental block in many occupancy sensing algorithms is change detection (also referred to as background subtraction), which identifies areas of a video frame that have changed in relation to some background model, e.g., view of an empty room), (Id., ¶ 13, An Occupancy Sensing SYstem (OSSY) generates an estimate of the number of occupants in an area of a building, and uses the estimate for system purposes such as adjusting a rate of ventilation air flow to be tailored for the estimated occupancy. In some applications the building may be a commercial venue and include for example offices, conference rooms, large classrooms or conference rooms, and very large colloquium rooms. The system may be used with a variety of other building times. The system is inherently scalable to support a wide range of room sizes, from small offices to large meeting halls. This is a byproduct of a modular architecture enabling the addition of new units and seamlessly fusing their occupancy estimates with existing ones, thereby expanding coverage. The system can deliver robust performance by fusing information from multiple sensor modalities (e.g., wide-area, overhead sensing using panoramic cameras and local, entryway sensing using low-resolution thermal sensors) and from different algorithms (e.g., body counting versus crowd-density estimation). 
The system can be privacy-adaptive, using entryway sensors that collect only low-resolution, thermal data, facilitating deployment in bathrooms, changing rooms, etc. It may also be cost-effective by minimizing the number of sensors needed and, therefore, the cost of installation), (Id., ¶ 16, FIG. 1 is a general block diagram of an HVAC system employing occupancy sensing, i.e., explicitly estimating the number of people in a local area 10 and adjusting the operation of the HVAC system accordingly, to provide heating or cooling both sufficiently (i.e., meeting standards of temperature regulation and adequate ventilation, based on occupancy) and efficiently (i.e., using only an appropriate proportion of maximum ventilation capacity and avoiding wasteful over-ventilation). In practice, a local area 10 may be a single room or a collection of rooms, and the room(s) may be small or large. In some cases a local area 10 may correspond to part or all of a “zone” as that term is conventionally understood in HVAC systems, while in other cases it may span multiple zones in whole or in part);
wherein the physical building space comprises a HVAC system, and wherein dynamically configuring the usage of the one or more physical resources comprises dynamically configuring a first zone of the HVAC system to operate and a second zone of the HVAC system to not operate (Id., ¶ 4, sensing and control apparatus are disclosed for use in an HVAC system of a building. The apparatus includes a plurality of sensors including interior sensors and boundary sensors, the sensors generating respective sensor signals conveying occupancy-related features for an area of the building. In one example the sensors include cameras in interior areas and low-resolution thermal sensors at ingress/egress points. The occupancy-related features may be specific aspects of camera images, or signal levels from the thermal sensors, that can be processed to arrive at an estimate of occupancy. The apparatus further includes a controller configured and operative in response to the sensor signals to produce an occupancy estimate for the area and to generate equipment-control signals to cause the HVAC system to supply conditioned air to the area based on the occupancy estimate. The controller generally includes one or more fusion systems collectively generating the occupancy estimate by corresponding fusion calculations, the fusion systems including a first fusion system producing a boundary occupancy-count change based on sensor signals from the boundary sensors, a second fusion system producing an interior occupancy count based on sensor signals from the interior sensors, and a third fusion system producing the occupancy estimate based on one or both of the boundary occupancy-count change and the interior occupancy count. Fusion may be of one or multiple types including cross-modality fusion across different sensor types, within-modality fusion across different instances of same-type sensors, and cross-algorithm fusion using different algorithms to generates respective estimates for the same sensor(s). 
Use of the occupancy sensing system can help to deliver desired occupancy-sensitive performance of the HVAC system, specifically the attainment of a desired energy savings without an undue incidence of undesirable under-ventilation), (Id., ¶ 76, To determine the HVAC energy savings that can be achieved with the occupancy sensing system, a data-driven energy savings model based on building HVAC equipment specifications, current air supply levels, and actual building-use data obtained in the validation study. Table 3 below shows example airflow estimates that might be obtained using a Ventilation Airflow Model (VAM). While this model is representative of education and research environments in particular, many aspects of commercial office buildings are also represented in this example including offices, conference rooms, and large meeting spaces. This model includes air required as a function of both area (resulting in fixed airflow) and variable occupancy (as per ASHRAE 62.1-2013), so that the average yearly occupancy does not directly determine HVAC energy and cost reduction. This analysis indicates that airflow and HVAC energy use can be reduced by 39% if accurate occupancy data were available. In some cases depending on the exact nature and use of the building, there may be potential for even greater reduction), (Id., ¶ 80, the data determined by OSSY can be used to adjust and/or turn off lighting in different spaces where OSSY sensing systems are installed. For example, in a large exhibit or meeting rooms, areas unoccupied can have the lights turned off or down. Similarly, in areas occupied, the lighting levels can be increased. In such applications, OSSY may send not only occupant counts, but information as to location of occupants to a lighting control system that would use the spatial information to adjust lighting levels. 
Different lighting strategies, e.g., levels/intensities, can be pre-programmed), (Id., ¶ 19, The remaining description elaborates primarily certain structural and functional details of components involved in occupancy estimation, i.e., the sensors 16 and local-area controller 14. In typical applications today, systems are limited to a binary occupied/unoccupied decision and operation. While such operation is an improvement over older systems by reducing idle ventilation, the system described herein can extend energy savings by delivering a more fine-grained air volume control over a range of room sizes, achieving greater efficiency without sacrificing ventilation quality), (Id., ¶ 50, Fusion systems 1, 2 and 3 make use of parametric or non-parametric, linear or non-linear systems. Fusion systems 1, 2 and 3 take into account the rate at which the number of occupants in zone is changing, specifically whether it is changing rapidly (transient state) or sporadically (quasi steady state), and accordingly diminishing the influence of the boundary count or interior count, respectively, towards the estimation of the total number of occupants. [0051] 5. A system to continuously maximize energy savings for zone based on zone type or on current, recent, or historical estimates of number of occupants in zone while simultaneously not exceeding a maximum failure rate which can be specified. This is accomplished by scaling the estimate of the number of occupants in zone at each time instant by an overestimation factor greater than or equal to one based on zone type or current, recent, or historical estimates of number of occupants in zone).
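For illustration only, the per-zone operate/off control and the overestimation-factor scaling described in Konrad ¶¶ 50-51 can be sketched as follows; the factor value and zone names are assumptions, not taken from the reference:

```python
def zone_commands(zone_occupancy, overestimation_factor=1.1):
    """Command occupied zones to operate and unoccupied zones to shut off.

    The overestimation factor (>= 1; the 1.1 value here is assumed) scales
    each raw estimate upward so that energy savings do not come at the
    cost of under-ventilating a zone that is actually occupied.
    """
    return {
        zone: "operate" if count * overestimation_factor >= 1.0 else "off"
        for zone, count in zone_occupancy.items()
    }

# A first zone with occupants operates; an empty second zone does not.
commands = zone_commands({"zone_1": 12, "zone_2": 0})
```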
While suggested in at least Fig. 2 and related texts, Konrad does not explicitly disclose …wherein the moving objects of the first dataset are external to the physical building space;
However, Elias discloses …wherein the moving objects of the first dataset are external to the physical building space (Elias, ¶ 2, This disclosure relates to sensing and monitoring, and more specifically to sensing and monitoring certain spaces and areas for human occupancy. This disclosure is also related to sensing and monitoring movement or changes in an environment, including movement by animals and objects. This disclosure is also related to using sensor and/or monitor information to control certain systems within commercial and residential facilities, including, but not limited to; heating, cooling, ventilation, security, lighting, power, and entertainment systems and the like. This disclosure also may be used to determine human occupancy in outdoor spaces and to control certain outdoor systems including, but not limited to; heating, cooling, ventilation, security, lighting, power, and entertainment systems and the like), (Id., ¶ 115, While this disclosure has described a sensor system 100 inside or outside a building, the sensor system 100 can be applied to other types of scenarios and other space(s) 105. For example only, the disclosed sensor systems 100 could be used to detect the presence of humans in disaster scenarios such as collapsed buildings, caves, mines and the like. In such scenarios, the sensor systems 100 could be used to determine if and how many humans are breathing and at what rate their hearts are beating. Likewise, the sensor systems 100 could be used to determine if and how many humans might be hidden in an enclosure during a hostage or kidnapping situation and may determine if and how many humans are enclosed in a container such as a shipping crate, a trucking crate, below deck on a boat, and the like. In addition to human presence, the sensor systems 100 could be used to monitor the health of humans and/or animals in an area. 
For example only, this disclosure could generate an output signal that is related to the breathing rate and or heartrate of any living beings within a space 105. Such sensor systems 100 could be used to monitor the breathing of babies and protect against sudden infant death syndrome. Such systems could also monitor the sleeping of people with sleep apnea and sound an alarm or adjust a bed or environmental setting if a person’s breathing becomes too erratic or stops).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the resource allocation and machine learning elements of Konrad to include the external occupancy sensing elements of Elias in the analogous art of occupancy sensing for building system control.
The motivation for doing so would have been to improve an ability to “adjust the HVAC system to the appropriate level for the unoccupied or under-occupied conditions” (Elias, ¶ 3), wherein such improvements would benefit Konrad’s method which enables “adjusting the operation of the HVAC system accordingly, to provide heating or cooling both sufficiently (i.e., meeting standards of temperature regulation and adequate ventilation, based on occupancy) and efficiently (i.e., using only an appropriate proportion of maximum ventilation capacity and avoiding wasteful over-ventilation)” [Elias, ¶ 3; Konrad, ¶ 16].
Regarding Claim 2, the combination of Konrad and Elias discloses …The computer-implemented method according to claim 1…
While suggested in at least Fig. 2 and related texts, Konrad does not explicitly disclose …wherein the machine learning model is a first machine learning model, the method further comprising: determining, by the computer system and via execution of a second machine learning model, the dynamic configuration of one or more physical resources associated with the physical building space based on the physical resource allocation.
However, through KSR Rationale D (See MPEP 2141(III)(D)), Konrad discloses …wherein the machine learning model is a first machine learning model, the method further comprising: determining, by the computer system and via execution of a second machine learning model, the dynamic configuration of one or more physical resources associated with the physical building space based on the physical resource allocation.
First, Konrad discloses machine learning modeling techniques for determining occupancy of a building (Konrad, ¶ 4, sensing and control apparatus are disclosed for use in an HVAC system of a building. The apparatus includes a plurality of sensors including interior sensors and boundary sensors, the sensors generating respective sensor signals conveying occupancy-related features (discloses extracting features based on the movement dataset) for an area of the building. In one example the sensors include cameras in interior areas and low-resolution thermal sensors at ingress/egress points. The occupancy-related features may be specific aspects of camera images, or signal levels from the thermal sensors, that can be processed to arrive at an estimate of occupancy. The apparatus further includes a controller configured and operative in response to the sensor signals to produce an occupancy estimate for the area and to generate equipment-control signals to cause the HVAC system to supply conditioned air to the area based on the occupancy estimate. The controller generally includes one or more fusion systems collectively generating the occupancy estimate by corresponding fusion calculations, the fusion systems including a first fusion system producing a boundary occupancy-count change based on sensor signals from the boundary sensors, a second fusion system producing an interior occupancy count based on sensor signals from the interior sensors, and a third fusion system producing the occupancy estimate based on one or both of the boundary occupancy-count change and the interior occupancy count. Fusion may be of one or multiple types including cross-modality fusion across different sensor types, within-modality fusion across different instances of same-type sensors, and cross-algorithm fusion using different algorithms to generates respective estimates for the same sensor(s). 
Use of the occupancy sensing system can help to deliver desired occupancy-sensitive performance of the HVAC system, specifically the attainment of a desired energy savings without an undue incidence of undesirable under-ventilation), (Id., ¶ 32, at the time of commissioning a system the camera/sensor installation height needs to be provided. Alternatively, a precise calibration pattern can be placed directly under a camera/sensor and a self-calibration operation is performed. With the installation height known, a corresponding pixel-to-density map can be used algorithmically to provide accurate occupancy estimates. The system may employ data-driven or “machine-learning” methods which can provide robustness against real-world variability without a physical model, but it is preferred that such methods be kept simple and not require re-training in new environments. In some cases machine learning is used only for offline training of counting and fusion algorithms, and only a system is fine-tuned in real time through self-calibration to a new environment by setting certain global parameters, e.g., room height, spacing between units, etc), (Id., ¶ 37, Two known approaches to estimating occupancy level are (1) detecting and then counting human bodies, and (2) estimating number based on detected changes in a camera field of view (FOV). Recent occupancy sensing methods via human-body counting include: full-body detection using Haar features and ADABOOST, head counting using Harr or HOG (Histogram of Gradients) features and SVM classification (discloses extracting features of the movement dataset using machine learning), and head counting using Convolutional Neural Networks (CNNs). These methods show great robustness to variations in body size and orientation. Shallow CNNs may suffice (for body/non-body binary output) and could run on a low-power mobile platform. 
As for crowd-density estimation, algorithms are known that are based on image gradient changes followed by SVM, full-image CNNs, and a wealth of approaches at pixel, texture or object level).
Further, Konrad discloses dynamic configuration of resources in a building based on a resource allocation (Konrad, ¶ 34, To estimate the (quasi) steady-state occupancy, in one example panoramic, overhead, high-resolution, low-cost CMOS cameras are used, which provide a wide field of view with minimal occlusions, while also being widely available and relatively inexpensive. OSSY preferably employs accurate, real-time (at HVAC time scale) algorithms for occupant counting using panoramic video frames. A fundamental block in many occupancy sensing algorithms is change detection (also referred to as background subtraction), which identifies areas of a video frame that have changed in relation to some background model, e.g., view of an empty room), (Id., ¶ 13, An Occupancy Sensing SYstem (OSSY) generates an estimate of the number of occupants in an area of a building, and uses the estimate for system purposes such as adjusting a rate of ventilation air flow to be tailored for the estimated occupancy. In some applications the building may be a commercial venue and include for example offices, conference rooms, large classrooms or conference rooms, and very large colloquium rooms. The system may be used with a variety of other building times. The system is inherently scalable to support a wide range of room sizes, from small offices to large meeting halls. This is a byproduct of a modular architecture enabling the addition of new units and seamlessly fusing their occupancy estimates with existing ones, thereby expanding coverage. The system can deliver robust performance by fusing information from multiple sensor modalities (e.g., wide-area, overhead sensing using panoramic cameras and local, entryway sensing using low-resolution thermal sensors) and from different algorithms (e.g., body counting versus crowd-density estimation). 
The system can be privacy-adaptive, using entryway sensors that collect only low-resolution, thermal data, facilitating deployment in bathrooms, changing rooms, etc. It may also be cost-effective by minimizing the number of sensors needed and, therefore, the cost of installation), (Id., ¶ 16, FIG. 1 is a general block diagram of an HVAC system employing occupancy sensing, i.e., explicitly estimating the number of people in a local area 10 and adjusting the operation of the HVAC system accordingly, to provide heating or cooling both sufficiently (i.e., meeting standards of temperature regulation and adequate ventilation, based on occupancy) and efficiently (i.e., using only an appropriate proportion of maximum ventilation capacity and avoiding wasteful over-ventilation). In practice, a local area 10 may be a single room or a collection of rooms, and the room(s) may be small or large. In some cases a local area 10 may correspond to part or all of a “zone” as that term is conventionally understood in HVAC systems, while in other cases it may span multiple zones in whole or in part).
One of ordinary skill in the art would have recognized that applying the known machine learning technique of Konrad would have yielded predictable results and resulted in an improved system. It would have been recognized that applying the known machine learning technique of Konrad to the dynamic resource configuration step would have yielded predictable results because the level of ordinary skill in the art demonstrated by the reference applied shows the ability to incorporate such resource optimization features into similar energy efficiency systems. Further, applying the machine learning technique to the dynamic resource configuration step would have been recognized by those of ordinary skill in the art as resulting in an improved system that would allow for faster and more optimal adjustments to resource allocations based on observed object movements within a building.
Thus, through KSR Rationale D (See MPEP 2141(III)(D)), Konrad discloses …wherein the machine learning model is a first machine learning model, the method further comprising: determining, by the computer system and via execution of a second machine learning model, the dynamic configuration of one or more physical resources associated with the physical building space based on the physical resource allocation.
Regarding Claim 3, the combination of Konrad and Elias discloses …The computer-implemented method according to claim 2…
Konrad further discloses …wherein the dynamic configuration of the one or more physical resources further comprises: allocating a plurality of workstations associated with the physical building space based on a ranking of each workstation of the plurality of workstations (Id., ¶ 81, OSSY occupancy data can be used to determine what areas of the building are occupied, as well as occupant density and numbers, and this information can be provided to internal building systems and emergency responders. For example, this data can be used to enable/prevent access to different parts of the building through closing different door systems electromechanically. The data can also be used to trigger different lighting systems or audio systems to advise building occupants as to the appropriate action in different spaces of the buildings, (discloses allocating workspaces within a building) or to indicate to safety/security staff the locations of occupants in the building. For these applications, aggregated OSSY data can be sent to a security computer system and that data either presented to security staff in aggregated manner, and/or have actions taken in terms of access control, lighting signals, or information systems being implemented in an automatic fashion), (Id., ¶ 82, In terms of space utilization applications, OSSY may send data for different spaces to a central computer or web based system that aggregates the data and provides different utilization metrics such as capacity utilization that could be temporally disaggregated. In one application, this can provide information to a real-time scheduling system that can indicate to building managers and occupants which spaces are currently being utilized, or are expected to be utilized in a specified time range. In another application, this system can indicate to building managers how well their space is currently configured and used in terms of capacity utilization and temporally. 
Furthermore, information as to the building cost (rental, operational, etc.) for different areas can be combined to provide a more explicit evaluation of cost performance), (Id., Table 2, table indicates occupancy rankings for workstations within a building).
[Image: media_image2.png, 178×399, greyscale — reproduction of Konrad, Table 2]
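For illustration only, allocating workstations based on a ranking (as in the utilization metrics Konrad describes and in Table 2) can be sketched as assigning occupants to workstations in rank order; all identifiers and the ranking values below are hypothetical:

```python
def allocate_workstations(occupants, ranked_workstations):
    """Assign each occupant, in order, to the best-ranked free workstation.

    `ranked_workstations` is a list of dicts with hypothetical "id" and
    "rank" keys (lower rank = preferred, e.g., by utilization metrics).
    """
    ordered = sorted(ranked_workstations, key=lambda w: w["rank"])
    return {person: w["id"] for person, w in zip(occupants, ordered)}

assignment = allocate_workstations(
    ["occupant_a", "occupant_b"],
    [{"id": "WS-1", "rank": 2}, {"id": "WS-2", "rank": 1}],
)
```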
Regarding Claim 4, the combination of Konrad and Elias discloses …The computer-implemented method according to claim 1…
Konrad further discloses … obtaining, by the computer system from the first computing device, data indicative of the movement of objects associated with a second time period; (Konrad, ¶ 30, Assuming that the system operates based on a given rate of occupancy estimation, such as once per minute for example, the SCNs 42 aggregate data and respond within such a time period. The sensing units preferably acquire data at a rate compatible with occupancy variations (cameras 32) or body speed (door sensors 34) (discloses obtaining movement data associated with a time period) to minimize the potential for aliasing. Since cameras 32 are responsible for steady-state occupancy data, a frame rate of about 1 Hz should be adequate. A 3.0 MPixel panoramic camera typically produces a bit rate of about 10 Mb/s for high-quality 30 Hz video using H.264/AVC compression, but this rate would drop to about 330 Kb/s at 1 Hz. Multiple cameras can be easily supported by WiFi or wired Ethernet (CATS in legacy and CAT6 in new buildings). Use of PoE, providing DC power, can additionally reduce installation costs, and is supported by CATS wiring. To assure accurate ingress/egress detection, door sensors 34 preferably sample at 10-20 Hz, but at 16×4 resolution this would result in no more than 40 Kb/s of uncompressed data rate. This rate is compatible with lower-rate communications connections such as ZigBee, although it may be preferred to use WiFi or wired Ethernet for commonality with the cameras 32), (Id., ¶ 34, To estimate the (quasi) steady-state occupancy, in one example panoramic, overhead, high-resolution, low-cost CMOS cameras are used, which provide a wide field of view with minimal occlusions, while also being widely available and relatively inexpensive. OSSY preferably employs accurate, real-time (at HVAC time scale) algorithms for occupant counting using panoramic video frames. 
A fundamental block in many occupancy sensing algorithms is change detection (also referred to as background subtraction), which identifies areas of a video frame that have changed in relation to some background model, e.g., view of an empty room), (Id., ¶ 22, FIG. 4 shows a local area 10 with a more structural focus, including a variable air volume (VAV) box 40 as an example of local-area equipment 12 (FIG. 1), and a shared computing node (SCN) 42 as an example of a local-area controller 14. Also shown is a separate building automation system (BAS) 44 as an example of a central controller 26 (also FIG. 1), and communications connections 46 between the SCN 42 and the sensors 32, 34 as well as the BAS 44. The connections 46 may be realized in various ways including as wireless connections (e.g., WiFi) and/or wired connections such as a Ethernet, either powered (PoE) or unpowered. The system may be realized as a standalone system (i.e., not connected to an external network or “cloud”), with one or more SCNs 42 providing data processing and fusion for multiple venues in the same control zone);
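For illustration only, the frame-rate/bit-rate relationship in Konrad ¶ 30 (about 10 Mb/s at 30 Hz dropping to about 330 Kb/s at 1 Hz) follows from assuming the compressed rate scales roughly linearly with frames per second, a simplification sketched below:

```python
def scaled_bit_rate_kbps(base_rate_mbps, base_fps, target_fps):
    """Approximate compressed bit rate at a reduced frame rate, assuming
    rate scales linearly with frames per second (a simplification)."""
    return base_rate_mbps * 1000.0 * target_fps / base_fps

# 10 Mb/s at 30 Hz scaled down to 1 Hz lands near Konrad's ~330 Kb/s figure.
rate_at_1hz = scaled_bit_rate_kbps(10, 30, 1)
```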
extracting, by the computer system and via execution of the machine learning model, a second set of feature values based on the data indicative of the movement of objects associated with the second time period (Id., ¶ 4, sensing and control apparatus are disclosed for use in an HVAC system of a building. The apparatus includes a plurality of sensors including interior sensors and boundary sensors, the sensors generating respective sensor signals conveying occupancy-related features (discloses extracting features based on the movement dataset) for an area of the building. In one example the sensors include cameras in interior areas and low-resolution thermal sensors at ingress/egress points. The occupancy-related features may be specific aspects of camera images, or signal levels from the thermal sensors, that can be processed to arrive at an estimate of occupancy. The apparatus further includes a controller configured and operative in response to the sensor signals to produce an occupancy estimate for the area and to generate equipment-control signals to cause the HVAC system to supply conditioned air to the area based on the occupancy estimate. The controller generally includes one or more fusion systems collectively generating the occupancy estimate by corresponding fusion calculations, the fusion systems including a first fusion system producing a boundary occupancy-count change based on sensor signals from the boundary sensors, a second fusion system producing an interior occupancy count based on sensor signals from the interior sensors, and a third fusion system producing the occupancy estimate based on one or both of the boundary occupancy-count change and the interior occupancy count. 
Fusion may be of one or multiple types including cross-modality fusion across different sensor types, within-modality fusion across different instances of same-type sensors, and cross-algorithm fusion using different algorithms to generate respective estimates for the same sensor(s). Use of the occupancy sensing system can help to deliver desired occupancy-sensitive performance of the HVAC system, specifically the attainment of a desired energy savings without an undue incidence of undesirable under-ventilation), (Id., ¶ 32, at the time of commissioning a system the camera/sensor installation height needs to be provided. Alternatively, a precise calibration pattern can be placed directly under a camera/sensor and a self-calibration operation is performed. With the installation height known, a corresponding pixel-to-density map can be used algorithmically to provide accurate occupancy estimates. The system may employ data-driven or “machine-learning” methods which can provide robustness against real-world variability without a physical model, but it is preferred that such methods be kept simple and not require re-training in new environments. In some cases machine learning is used only for offline training of counting and fusion algorithms, and only a system is fine-tuned in real time through self-calibration to a new environment by setting certain global parameters, e.g., room height, spacing between units, etc), (Id., ¶ 37, Two known approaches to estimating occupancy level are (1) detecting and then counting human bodies, and (2) estimating number based on detected changes in a camera field of view (FOV). Recent occupancy sensing methods via human-body counting include: full-body detection using Haar features and ADABOOST, head counting using Haar or HOG (Histogram of Gradients) features and SVM classification (discloses extracting features of the movement dataset using machine learning), and head counting using Convolutional Neural Networks (CNNs). 
These methods show great robustness to variations in body size and orientation. Shallow CNNs may suffice (for body/non-body binary output) and could run on a low-power mobile platform. As for crowd-density estimation, algorithms are known that are based on image gradient changes followed by SVM, full-image CNNs, and a wealth of approaches at pixel, texture or object level);
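For illustration only, the multi-stage fusion architecture that Konrad's ¶ 4 describes (a boundary fusion system producing an occupancy-count change, an interior fusion system producing a camera-based count, and a third system combining both) can be sketched as follows. All function and variable names, and the simple weighted-average combination rule, are hypothetical assumptions for illustration and are not Konrad's actual algorithm.

```python
# Illustrative sketch of Konrad's three-fusion-system structure (¶ 4).
# The averaging rules below are assumptions, not the disclosed method.

def boundary_count_change(door_events):
    """Net occupancy change from door-sensor events (+1 ingress, -1 egress)."""
    return sum(door_events)

def interior_count(camera_counts):
    """Steady-state occupancy estimate: average of per-camera counts."""
    return sum(camera_counts) / len(camera_counts)

def fused_estimate(prev_estimate, door_events, camera_counts, weight=0.5):
    """Third fusion system: combine boundary-updated count with interior count."""
    boundary_estimate = prev_estimate + boundary_count_change(door_events)
    camera_estimate = interior_count(camera_counts)
    return weight * boundary_estimate + (1 - weight) * camera_estimate

# e.g., previous estimate of 10, two ingresses and one egress, two cameras
est = fused_estimate(10, [+1, +1, -1], [11, 11])
```

This mirrors only the data flow of the claimed mapping (boundary change plus interior count feeding a combined estimate); the disclosed system may use nonlinear or Bayesian fusion instead.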
obtaining, by the computer system, data representative of the physical resource allocation based on the second time period dataset (Id., ¶ 13, An Occupancy Sensing SYstem (OSSY) generates an estimate of the number of occupants in an area of a building, and uses the estimate for system purposes such as adjusting a rate of ventilation air flow to be tailored for the estimated occupancy. In some applications the building may be a commercial venue and include for example offices, conference rooms, large classrooms or conference rooms, and very large colloquium rooms. The system may be used with a variety of other building types. The system is inherently scalable to support a wide range of room sizes, from small offices to large meeting halls. This is a byproduct of a modular architecture enabling the addition of new units and seamlessly fusing their occupancy estimates with existing ones, thereby expanding coverage. The system can deliver robust performance by fusing information from multiple sensor modalities (e.g., wide-area, overhead sensing using panoramic cameras and local, entryway sensing using low-resolution thermal sensors) and from different algorithms (e.g., body counting versus crowd-density estimation). The system can be privacy-adaptive, using entryway sensors that collect only low-resolution, thermal data, facilitating deployment in bathrooms, changing rooms, etc. It may also be cost-effective by minimizing the number of sensors needed and, therefore, the cost of installation), (Id., ¶ 20, FIG. 2 illustrates an aspect of the disclosed approach that can facilitate system scalability while supporting multiple occupancy-sensing modalities, for an area shown as a “unit volume” 30 such as a room. 
Two distinct types of sensor nodes may be deployed in various combinations: interior sensors such as high-resolution (HR) panoramic overhead cameras 32 for wide-area monitoring, and boundary sensors such as low-resolution (LR) thermal sensors 34 located at doorways for ingress/egress detection. The use of panoramic cameras 32 can help minimize the number of sensors needed, thus reducing installation costs while still supporting scalability to large-size venues. The door sensors 34 may serve several roles. First, they provide transient phase data for fusion (discloses reference dataset) with steady-state occupancy data from the overhead cameras 32 or other interior sensors when used. For this purpose, in some cases a door sensor 34 may be as simple as a “tripwire”, shown as a “T Door Sensor 36”, that only detects ingress/egress. Such a tripwire sensor 36 may employ low-resolution (LR) thermal sensing for example. Secondly, in small-venue scenarios where panoramic cameras 32 are not used, the door sensors 34 may be realized as TRV door sensors 38 equipped with both an LR thermal “tripwire” (pointing down at the door opening) and an LR “room view” thermal array pointed into the room, for determining both transient and steady-state phase of occupancy. Additionally, if the door sensors 34 collect only LR thermal data, they are generally suitable for privacy-sensitive areas such as restrooms etc.), (Id., ¶ 4, sensing and control apparatus are disclosed for use in an HVAC system of a building. The apparatus includes a plurality of sensors including interior sensors and boundary sensors, the sensors generating respective sensor signals conveying occupancy-related features (discloses extracting features based on the movement dataset) for an area of the building. In one example the sensors include cameras in interior areas and low-resolution thermal sensors at ingress/egress points. 
The occupancy-related features may be specific aspects of camera images, or signal levels from the thermal sensors, that can be processed to arrive at an estimate of occupancy), (Id., ¶ 24, the local equipment controller 62 may convert the occupancy estimate 66 into a corresponding fraction of maximum occupancy, and control airflow accordingly. Thus if the occupancy is at 50% of maximum, for example, the local-area airflow is adjusted to 50% of maximum airflow. As previously indicated, the local equipment controller 62 may also communicate with the central controller 26 in support of broader system-level control);
determining, by the computer system, a reference dataset based on the second set of feature values and the physical resource allocation data; and training, by the computer system, the machine learning model based on the reference dataset (Id., ¶ 45, a fusion algorithm can combine both raw data and decisions generated by different sensors through a complex, generally nonlinear relationship, e.g., kernel support vector regression and neural networks which can be trained using machine learning techniques. However, such an algorithm may be difficult to train (too many parameters relative to training data size) or may not generalize well to new deployment conditions without significant labor-intensive re-training that would impede self-commissioning and drive up cost. An alternative approach is to employ a recursive Bayesian filtering method like Kalman filtering (linear, extended, or unscented transform) or particle filtering with the system dynamics learned offline from training data. (discloses training with reference training dataset) However, this can be computationally intensive for video data due to its high dimensionality. Hence while in general such options are not excluded, the present description assumes use of relatively simple-to-train adaptive algorithms that fuse occupancy estimates rather than raw data), (Id., ¶ 64, A lookup table may be designed offline using ground-truth training data and regression techniques. The end result will be a coarsely-quantized map (table lookup) from environmental conditions to values for τ[t], λ[t]. The lookup table encodes changes to τ[t], λ[t] relative to environmental conditions. If the rate of occupancy change is high (fast-moving crowds), then a transient phase is in operation and λ[t] should be decreased to give more weight to the door sensor estimates and τ[t] should be decreased to deemphasize older measurements. 
If the rate of occupancy change is low, then a quasi-steady state is in effect and τ[t] can be increased. Further, if the illumination is good, then λ[t] should be increased to give more weight to the overhead camera estimates. Similarly, if the ambient lighting changes rapidly (e.g., for a slide show) then the camera estimates should be de-weighted by decreasing λ[t]).
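The lookup-table weight adjustment Konrad describes (¶ 64 and above) can be illustrated with a minimal sketch. The table entries, names, and the two-condition key below are hypothetical assumptions; Konrad specifies only the direction of the adjustments (fast occupancy change lowers τ[t] and λ[t] to favor door sensors and recent data; good, stable lighting raises λ[t] to favor the camera estimates; rapidly changing lighting lowers λ[t]).

```python
# Illustrative sketch of a coarsely quantized map from environmental
# conditions to the fusion parameters tau[t] (memory) and lambda[t]
# (camera weight), per Konrad ¶ 64. All numeric values are assumptions
# chosen only to respect the disclosed direction of each adjustment.

WEIGHT_TABLE = {
    # (occupancy_change, lighting): (tau, lam)
    ("fast", "stable"):   (2.0, 0.3),   # transient phase: favor door sensors
    ("slow", "stable"):   (10.0, 0.8),  # quasi-steady state: favor cameras
    ("slow", "changing"): (10.0, 0.4),  # de-weight cameras under flicker
    ("fast", "changing"): (2.0, 0.2),
}

def fusion_weights(occupancy_change, lighting):
    """Table lookup from coarse environmental conditions to (tau, lam)."""
    return WEIGHT_TABLE[(occupancy_change, lighting)]
```

A ground-truth-trained regression would populate such a table offline, as the reference suggests; the lookup itself stays cheap at run time.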
Regarding Claim 5, the combination of Konrad and Elias discloses …The computer-implemented method according to claim 1…
Konrad further discloses …wherein the first dataset comprises one or more images captured by one or more recording devices (Konrad, ¶ 4, sensing and control apparatus are disclosed for use in an HVAC system of a building. The apparatus includes a plurality of sensors including interior sensors and boundary sensors, the sensors generating respective sensor signals conveying occupancy-related features for an area of the building. In one example the sensors include cameras in interior areas and low-resolution thermal sensors at ingress/egress points. The occupancy-related features may be specific aspects of camera images, or signal levels from the thermal sensors, that can be processed to arrive at an estimate of occupancy. The apparatus further includes a controller configured and operative in response to the sensor signals to produce an occupancy estimate for the area and to generate equipment-control signals to cause the HVAC system to supply conditioned air to the area based on the occupancy estimate).
Regarding Claim 6, the combination of Konrad and Elias discloses …The computer-implemented method according to claim 5…
Konrad further discloses … wherein extracting the first set of feature values from the one or more images further comprises: applying, via the machine learning model, one or more computer vision techniques to the one or more images, identifying, by the computer system, one or more pixel groups in the one or more images, determining, by the computer system, a first set of characteristics based on the one or more images and associating the first set of characteristics to the one or more pixel groups, and deriving, by the computer system, a traffic density based on the first set of characteristics and the first set of feature values (Konrad, ¶ 35, a system can employ multiple algorithms for people counting using data captured by a panoramic camera (640×480, 0.3 MPixels) mounted overhead in a room such as a computer lab. Variants of crowd density estimation are used which learn a mapping between the fraction of a video frame that has changed and the number of people in that frame—the more changes, the more people. Example algorithms include regression, Support Vector Machine (SVM) and k-Nearest Neighbor (kNN) search. Table 1 below shows a Correct Classification Rate (CCR) that can be obtained for people counting using these algorithms on data captured across several days with a specified number (e.g., 0 to 10) of occupants. Results from such a simple test case show that even simple change detection may provide 96% of accuracy with 0.01 mean absolute error (MAE) per occupant (kNN for k=5). Such an MAE value may be within maximum permissible limits needed to achieve desired performance targets), (Id., ¶ 37, Two known approaches to estimating occupancy level are (1) detecting and then counting human bodies, and (2) estimating number based on detected changes in a camera field of view (FOV). 
Recent occupancy sensing methods via human-body counting include: full-body detection using Haar features and ADABOOST, head counting using Haar or HOG (Histogram of Gradients) features and SVM classification, and head counting using Convolutional Neural Networks (CNNs). These methods show great robustness to variations in body size and orientation. Shallow CNNs may suffice (for body/non-body binary output) and could run on a low-power mobile platform. As for crowd-density estimation, algorithms are known that are based on image gradient changes followed by SVM, full-image CNNs, and a wealth of approaches at pixel, texture or object level), (Id., ¶ 31, Another factor is the configuration or “commissioning” of a system into operation. To support a variety of venue configurations, it is preferable that algorithms be agnostic to configuration variations, e.g., camera/sensor installation height, room size and shape. In the case of human-body counting, the camera installation height and room size affect a projected body size and, therefore, call for a scale-invariant human-body detector, which is a problem considered to have been solved. In the case of crowd density estimation from a panoramic camera, every pixel contributes in some proportion to a body count but this proportion is dependent on pixel location on the sensor (e.g., a pixel in the middle of a sensor, parallel to room's floor, will occupy a smaller fraction of human head, than a pixel at sensor's periphery, due to lens properties). However, the knowledge of intrinsic camera parameters, such as sensor size and resolution, focal length, lens diameter and barrel distortion, can be used to establish a relationship between pixel location and its contribution to crowd density (pixel-to-density mapping), very much like in methods to de-warp a fisheye image for visualization. 
Alternatively, a pixel-to-density mapping can be obtained experimentally in a room of maximum permissible size for various installation heights and camera models, and stored in a look-up table to use during deployment, thus making a crowd density estimation algorithm agnostic to camera installation height and room size. A similar mapping can be obtained for LR thermal sensors (both “tripwire” and room-view). Additionally, some thermal sensors such as Melexis sensors are available with different lenses (40°, 60°, 120° FOVs) allowing them to be matched to different combinations of room height and door width).
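The crowd-density counting Konrad evaluates (¶ 35) pairs change detection with a k-nearest-neighbor lookup: the fraction of a frame that differs from an empty-room background is mapped to a people count. The sketch below illustrates that pipeline on flat pixel lists; the threshold, k, and the (fraction, count) training pairs are illustrative assumptions, not Konrad's data.

```python
# Illustrative sketch of change-detection plus kNN people counting
# (Konrad ¶ 35). Frames are flat lists of grayscale pixel values; the
# threshold and training pairs below are hypothetical.

def changed_fraction(frame, background, threshold=30):
    """Fraction of pixels differing from the background by more than threshold."""
    changed = sum(1 for p, b in zip(frame, background) if abs(p - b) > threshold)
    return changed / len(frame)

def knn_count(fraction, training, k=3):
    """Average the counts of the k training samples nearest in changed fraction."""
    nearest = sorted(training, key=lambda pair: abs(pair[0] - fraction))[:k]
    return round(sum(count for _, count in nearest) / k)

# hypothetical (changed-fraction, people-count) training pairs
training = [(0.0, 0), (0.05, 1), (0.10, 2), (0.20, 4), (0.30, 6)]
```

In practice the reference reports that even such simple change-detection variants (regression, SVM, kNN) reached roughly 96% correct classification in its test case, which is the "more changes, more people" mapping this sketch encodes.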
Regarding Claim 7, the combination of Konrad and Elias discloses …The computer-implemented method according to claim 5…
While suggested in at least Fig. 2 and related texts, Konrad does not explicitly disclose …wherein the one or more images comprise scenes of a vehicle pathway.
However, Elias discloses …wherein the one or more images comprise scenes of a vehicle pathway (Elias, ¶ 151, The sensor system 100 may include a physical process observation system such as for tracking physical activities of workers that may be used for determining value chain recommendations. Physical activities of workers (e.g., shippers, delivery workers, packers, pickers, assembly personnel, customers, merchants, vendors, distributors and others), physical interactions of workers with other workers, interactions of workers with physical entities like machines and equipment, and interactions of physical entities with other physical entities, including, without limitation, by use of video and still image cameras, motion sensing systems (such as including optical sensors, LIDAR, IR and other sensor sets), robotic motion tracking systems (such as tracking movements of systems attached to a human or a physical entity) and many others. Machine state monitoring systems may include onboard monitors and external monitors of conditions, states, operating parameters, or other measures of the condition of any value chain entity, such as a machine or component thereof, such as a machine, such as a client, a server, a cloud resource, a control system, a display screen, a sensor, a camera, a vehicle, a robot, or other machine. 
Sensors and cameras and other IoT data collection systems (including onboard sensors, sensors or other data collectors (including click tracking sensors) in or about a value chain environment (such as, without limitation, a point of origin, a loading or unloading dock, (discloses vehicle pathway images) a vehicle or floating asset used to convey goods, a container, a port, a distribution center, a storage facility, a warehouse, a delivery vehicle, and a point of destination), cameras for monitoring an entire environment, dedicated cameras for a particular machine, process, worker, or the like, wearable cameras, portable cameras, cameras disposed on mobile robots, cameras of portable devices like smart phones and tablets, and many others), (Id., ¶ 152, The sensor system 100 may interact with value chain network entities based on worker data such as locations of workers (including routes taken through a location, where workers of a given type are located during a given set of events, processes or the like, how workers manipulate pieces of equipment, cargo, containers, packages, products or other items using various tools, equipment, and physical interfaces, the timing of worker responses with respect to various events such as responses to alerts and warnings), procedures by which workers undertake scheduled deliveries, movements, maintenance, updates, repairs and service processes; procedures by which workers tune or adjust items involved in workflows, and many others. The sensor system may include a physical process observation that may include tracking positions, angles, forces, velocities, acceleration, pressures, torque, and the like of a worker as the worker operates on hardware, such as on a container or package, or on a piece of equipment involved in handling products, with a tool. 
Such observations may be obtained by any combination of video data, data detected within a machine (such as of positions of elements of the machine detected and reported by position detectors), data collected by a wearable device (such as an exoskeleton that contains position detectors, force detectors, torque detectors and the like that is configured to detect the physical characteristics of interactions of a human worker with a hardware item for purposes of developing a training data set). The sensor system 100 may use this physical activities data and worker data (e.g., physical process interaction observations) for determining value chain recommendations (e.g., training suggested where needed) in order to improve value chain workflows), (Id., ¶ 19, In embodiments, the value chain recommendation is based on logistics factors that include one or more of: a type of product corresponding to the proposed logistics solution, one or more features of the type of product, a location of a manufacturing site, a location of a distribution facility, a location of a warehouse, a location of a customer base, proposed expansion areas of the organization, and supply chain features. 
In embodiments, the value chain recommendation is based on logistics value chain network entities that are selected from the group consisting of products, suppliers, producers, manufacturers, retailers, businesses, owners, operators, operating facilities, customers, consumers, workers, mobile devices, wearable devices, distributors, resellers, supply chain infrastructure facilities, supply chain processes, logistics processes, reverse logistics processes, demand prediction processes, demand management processes, demand aggregation processes, machines, ships, barges, warehouses, maritime ports, airports, airways, waterways, roadways, railways, bridges, tunnels, online retailers, e-commerce sites, demand factors, supply factors, delivery systems, floating assets, points of origin, points of destination, points of storage, points of use, networks, information technology systems, software platforms, distribution centers, fulfillment centers, containers, container handling facilities, customs, export control, border control, drones, robots, autonomous vehicles, hauling facilities, drones/robots/AVs, waterways, and port infrastructure facilities).
It would have been obvious to a person of ordinary skill in the art before the effective filing date to have modified the resource allocation and machine learning elements of Konrad to include the vehicle pathway and public transportation elements of Elias in the analogous art of detecting occupancy using radio signals for the same reasons as stated for claim 1.
Regarding Claim 8, the combination of Konrad and Elias discloses …The computer-implemented method according to claim 5…
While suggested in at least Fig. 2 and related texts, Konrad does not explicitly disclose …wherein the one or more images comprise scenes of one or more public transportation stations.
However, Elias discloses …wherein the one or more images comprise scenes of one or more public transportation stations (Elias, ¶ 151, The sensor system 100 may include a physical process observation system such as for tracking physical activities of workers that may be used for determining value chain recommendations. Physical activities of workers (e.g., shippers, delivery workers, packers, pickers, assembly personnel, customers, merchants, vendors, distributors and others), physical interactions of workers with other workers, interactions of workers with physical entities like machines and equipment, and interactions of physical entities with other physical entities, including, without limitation, by use of video and still image cameras, motion sensing systems (such as including optical sensors, LIDAR, IR and other sensor sets), robotic motion tracking systems (such as tracking movements of systems attached to a human or a physical entity) and many others. Machine state monitoring systems may include onboard monitors and external monitors of conditions, states, operating parameters, or other measures of the condition of any value chain entity, such as a machine or component thereof, such as a machine, such as a client, a server, a cloud resource, a control system, a display screen, a sensor, a camera, a vehicle, a robot, or other machine. 
Sensors and cameras and other IoT data collection systems (including onboard sensors, sensors or other data collectors (including click tracking sensors) in or about a value chain environment (such as, without limitation, a point of origin, a loading or unloading dock, (discloses public transportation station images) a vehicle or floating asset used to convey goods, a container, a port, a distribution center, a storage facility, a warehouse, a delivery vehicle, and a point of destination), cameras for monitoring an entire environment, dedicated cameras for a particular machine, process, worker, or the like, wearable cameras, portable cameras, cameras disposed on mobile robots, cameras of portable devices like smart phones and tablets, and many others), (Id., ¶ 19, In embodiments, the value chain recommendation is based on logistics factors that include one or more of: a type of product corresponding to the proposed logistics solution, one or more features of the type of product, a location of a manufacturing site, a location of a distribution facility, a location of a warehouse, a location of a customer base, proposed expansion areas of the organization, and supply chain features. 
In embodiments, the value chain recommendation is based on logistics value chain network entities that are selected from the group consisting of products, suppliers, producers, manufacturers, retailers, businesses, owners, operators, operating facilities, customers, consumers, workers, mobile devices, wearable devices, distributors, resellers, supply chain infrastructure facilities, supply chain processes, logistics processes, reverse logistics processes, demand prediction processes, demand management processes, demand aggregation processes, machines, ships, barges, warehouses, maritime ports, airports, airways, waterways, roadways, railways, bridges, tunnels, online retailers, e-commerce sites, demand factors, supply factors, delivery systems, floating assets, points of origin, points of destination, points of storage, points of use, networks, information technology systems, software platforms, distribution centers, fulfillment centers, containers, container handling facilities, customs, export control, border control, drones, robots, autonomous vehicles, hauling facilities, drones/robots/AVs, waterways, and port infrastructure facilities).
It would have been obvious to a person of ordinary skill in the art before the effective filing date to have modified the resource allocation and machine learning elements of Konrad to include the vehicle pathway and public transportation elements of Elias in the analogous art of detecting occupancy using radio signals for the same reasons as stated for claim 1.
Regarding Claim 11, Konrad discloses … A system comprising: one or more processors; and a non-transitory computer readable medium having stored thereon instructions that are executable by the one or more processors to cause the system to perform operations comprising: obtain a first dataset indicative of a movement of objects associated with a first time period (Konrad, ¶ 23, Application software may be stored on a non-transitory computer-readable medium such as an optical or magnetic disk, Flash memory or other non-volatile semiconductor memory, etc., from which it is retrieved for execution by the processing circuitry, as generally known in the art), (Id., ¶ 30, Assuming that the system operates based on a given rate of occupancy estimation, such as once per minute for example, the SCNs 42 aggregate data and respond within such a time period. The sensing units preferably acquire data at a rate compatible with occupancy variations (cameras 32) or body speed (door sensors 34) (discloses obtaining movement data associated with a time period) to minimize the potential for aliasing. Since cameras 32 are responsible for steady-state occupancy data, a frame rate of about 1 Hz should be adequate. A 3.0 MPixel panoramic camera typically produces a bit rate of about 10 Mb/s for high-quality 30 Hz video using H.264/AVC compression, but this rate would drop to about 330 Kb/s at 1 Hz. Multiple cameras can be easily supported by WiFi or wired Ethernet (CAT5 in legacy and CAT6 in new buildings). Use of PoE, providing DC power, can additionally reduce installation costs, and is supported by CAT5 wiring. To assure accurate ingress/egress detection, door sensors 34 preferably sample at 10-20 Hz, but at 16×4 resolution this would result in no more than 40 Kb/s of uncompressed data rate. 
This rate is compatible with lower-rate communications connections such as ZigBee, although it may be preferred to use WiFi or wired Ethernet for commonality with the cameras 32), (Id., ¶ 34, To estimate the (quasi) steady-state occupancy, in one example panoramic, overhead, high-resolution, low-cost CMOS cameras are used, which provide a wide field of view with minimal occlusions, while also being widely available and relatively inexpensive. OSSY preferably employs accurate, real-time (at HVAC time scale) algorithms for occupant counting using panoramic video frames. A fundamental block in many occupancy sensing algorithms is change detection (also referred to as background subtraction), which identifies areas of a video frame that have changed in relation to some background model, e.g., view of an empty room), (Id., ¶ 22, FIG. 4 shows a local area 10 with a more structural focus, including a variable air volume (VAV) box 40 as an example of local-area equipment 12 (FIG. 1), and a shared computing node (SCN) 42 as an example of a local-area controller 14. Also shown is a separate building automation system (BAS) 44 as an example of a central controller 26 (also FIG. 1), and communications connections 46 between the SCN 42 and the sensors 32, 34 as well as the BAS 44. The connections 46 may be realized in various ways including as wireless connections (e.g., WiFi) and/or wired connections such as Ethernet, either powered (PoE) or unpowered. The system may be realized as a standalone system (i.e., not connected to an external network or “cloud”), with one or more SCNs 42 providing data processing and fusion for multiple venues in the same control zone);
extract, via execution of a first machine learning model, a first set of feature values based on the first dataset (Id., ¶ 4, sensing and control apparatus are disclosed for use in an HVAC system of a building. The apparatus includes a plurality of sensors including interior sensors and boundary sensors, the sensors generating respective sensor signals conveying occupancy-related features (discloses extracting features based on the movement dataset) for an area of the building. In one example the sensors include cameras in interior areas and low-resolution thermal sensors at ingress/egress points. The occupancy-related features may be specific aspects of camera images, or signal levels from the thermal sensors, that can be processed to arrive at an estimate of occupancy. The apparatus further includes a controller configured and operative in response to the sensor signals to produce an occupancy estimate for the area and to generate equipment-control signals to cause the HVAC system to supply conditioned air to the area based on the occupancy estimate. The controller generally includes one or more fusion systems collectively generating the occupancy estimate by corresponding fusion calculations, the fusion systems including a first fusion system producing a boundary occupancy-count change based on sensor signals from the boundary sensors, a second fusion system producing an interior occupancy count based on sensor signals from the interior sensors, and a third fusion system producing the occupancy estimate based on one or both of the boundary occupancy-count change and the interior occupancy count. Fusion may be of one or multiple types including cross-modality fusion across different sensor types, within-modality fusion across different instances of same-type sensors, and cross-algorithm fusion using different algorithms to generate respective estimates for the same sensor(s). 
Use of the occupancy sensing system can help to deliver desired occupancy-sensitive performance of the HVAC system, specifically the attainment of a desired energy savings without an undue incidence of undesirable under-ventilation), (Id., ¶ 32, at the time of commissioning a system the camera/sensor installation height needs to be provided. Alternatively, a precise calibration pattern can be placed directly under a camera/sensor and a self-calibration operation is performed. With the installation height known, a corresponding pixel-to-density map can be used algorithmically to provide accurate occupancy estimates. The system may employ data-driven or “machine-learning” methods which can provide robustness against real-world variability without a physical model, but it is preferred that such methods be kept simple and not require re-training in new environments. In some cases machine learning is used only for offline training of counting and fusion algorithms, and only a system is fine-tuned in real time through self-calibration to a new environment by setting certain global parameters, e.g., room height, spacing between units, etc), (Id., ¶ 37, Two known approaches to estimating occupancy level are (1) detecting and then counting human bodies, and (2) estimating number based on detected changes in a camera field of view (FOV). Recent occupancy sensing methods via human-body counting include: full-body detection using Haar features and ADABOOST, head counting using Harr or HOG (Histogram of Gradients) features and SVM classification (discloses extracting features of the movement dataset using machine learning), and head counting using Convolutional Neural Networks (CNNs). These methods show great robustness to variations in body size and orientation. Shallow CNNs may suffice (for body/non-body binary output) and could run on a low-power mobile platform. 
As for crowd-density estimation, algorithms are known that are based on image gradient changes followed by SVM, full-image CNNs, and a wealth of approaches at pixel, texture or object level);
determine, with the first machine learning model, a density of the moving objects based on the first set of feature values (Id., ¶ 31, Another factor is the configuration or “commissioning” of a system into operation. To support a variety of venue configurations, it is preferable that algorithms be agnostic to configuration variations, e.g., camera/sensor installation height, room size and shape. In the case of human-body counting, the camera installation height and room size affect a projected body size and, therefore, call for a scale-invariant human-body detector, which is a problem considered to have been solved. In the case of crowd density estimation from a panoramic camera, every pixel contributes in some proportion to a body count but this proportion is dependent on pixel location on the sensor (e.g., a pixel in the middle of a sensor, parallel to room's floor, will occupy a smaller fraction of human head, than a pixel at sensor's periphery, due to lens properties). However, the knowledge of intrinsic camera parameters, such as sensor size and resolution, focal length, lens diameter and barrel distortion, can be used to establish a relationship between pixel location and its contribution to crowd density (pixel-to-density mapping), very much like in methods to de-warp a fisheye image for visualization. Alternatively, a pixel-to-density mapping can be obtained experimentally in a room of maximum permissible size for various installation heights and camera models, and stored in a look-up table to use during deployment, thus making a crowd density estimation algorithm agnostic to camera installation height and room size. A similar mapping can be obtained for LR thermal sensors (both “tripwire” and room-view). 
Additionally, some thermal sensors such as Melexis sensors are available with different lenses (40°, 60°, 120° FOVs) allowing to match them to different combinations of room height and door width), (Id., ¶ 32, at the time of commissioning a system the camera/sensor installation height needs to be provided. Alternatively, a precise calibration pattern can be placed directly under a camera/sensor and a self-calibration operation is performed. With the installation height known, a corresponding pixel-to-density map can be used algorithmically to provide accurate occupancy estimates. The system may employ data-driven or “machine-learning” methods which can provide robustness against real-world variability without a physical model, but it is preferred that such methods be kept simple and not require re-training in new environments. In some cases machine learning is used only for offline training of counting and fusion algorithms, and only a system is fine-tuned in real time through self-calibration to a new environment by setting certain global parameters, e.g., room height, spacing between units, etc), (Id., ¶ 37, Two known approaches to estimating occupancy level are (1) detecting and then counting human bodies, and (2) estimating number based on detected changes in a camera field of view (FOV). Recent occupancy sensing methods via human-body counting include: full-body detection using Haar features and ADABOOST, head counting using Harr or HOG (Histogram of Gradients) features and SVM classification, and head counting using Convolutional Neural Networks (CNNs). These methods show great robustness to variations in body size and orientation. Shallow CNNs may suffice (for body/non-body binary output) and could run on a low-power mobile platform. As for crowd-density estimation, algorithms are known that are based on image gradient changes followed by SVM, full-image CNNs, and a wealth of approaches at pixel, texture or object level);
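The pixel-to-density look-up approach quoted in ¶ 31 can be sketched as below. The table values and function names are hypothetical, invented only to illustrate how per-pixel contributions (dependent on pixel location and installation height) would be summed into a crowd-density estimate.

```python
# Sketch of a pixel-to-density mapping stored as a look-up table, as the
# quoted passage describes: each pixel location contributes a fraction
# toward the crowd-density estimate, with the fraction dependent on pixel
# position and installation height. Table values are illustrative only.

def density_estimate(changed_pixel_indices, pixel_to_density):
    """Sum per-pixel density contributions for the changed pixels."""
    return sum(pixel_to_density[i] for i in changed_pixel_indices)

# Hypothetical table for one camera model at one installation height:
# central pixels (parallel to the floor) contribute less per pixel than
# peripheral pixels, due to lens properties.
pixel_to_density = {0: 0.02, 1: 0.01, 2: 0.01, 3: 0.02}
estimate = density_estimate([1, 3], pixel_to_density)
```

As the reference notes, such tables could be precomputed for various installation heights and camera models so the estimation algorithm stays agnostic to room configuration.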
determine a physical resource allocation based on the first set of feature values and a reference dataset (Id., ¶ 13, An Occupancy Sensing SYstem (OSSY) generates an estimate of the number of occupants in an area of a building, and uses the estimate for system purposes such as adjusting a rate of ventilation air flow to be tailored for the estimated occupancy. In some applications the building may be a commercial venue and include for example offices, conference rooms, large classrooms or conference rooms, and very large colloquium rooms. The system may be used with a variety of other building times. The system is inherently scalable to support a wide range of room sizes, from small offices to large meeting halls. This is a byproduct of a modular architecture enabling the addition of new units and seamlessly fusing their occupancy estimates with existing ones, thereby expanding coverage. The system can deliver robust performance by fusing information from multiple sensor modalities (e.g., wide-area, overhead sensing using panoramic cameras and local, entryway sensing using low-resolution thermal sensors) and from different algorithms (e.g., body counting versus crowd-density estimation). The system can be privacy-adaptive, using entryway sensors that collect only low-resolution, thermal data, facilitating deployment in bathrooms, changing rooms, etc. It may also be cost-effective by minimizing the number of sensors needed and, therefore, the cost of installation), (Id., ¶ 20, FIG. 2 illustrates an aspect of the disclosed approach that can facilitate system scalability while supporting multiple occupancy-sensing modalities, for an area shown as a “unit volume” 30 such as a room. Two distinct types of sensor nodes may deployed in various combinations: interior sensors such as high-resolution (HR) panoramic overhead cameras 32 for wide-area monitoring, and boundary sensors such as low-resolution (LR) thermal sensors 34 located at doorways for ingress/egress detection. 
The use of panoramic cameras 32 can help minimize the number of sensors needed, thus reducing installation costs while still supporting scalability to large-size venues. The door sensors 34 may serve several roles. First, they provide transient phase data for fusion (discloses reference dataset) with steady-state occupancy data from the overhead cameras 32 or other interior sensors when used. For this purpose, in some cases a door sensor 34 may be as simple as a “tripwire”, shown as a “T Door Sensor 36”, that only detects ingress/egress. Such a tripwire sensor 36 may employ low-resolution (LR) thermal sensing for example. Secondly, in small-venue scenarios where panoramic cameras 32 are not used, the door sensors 34 may be realized as TRV door sensors 38 equipped with both an LR thermal “tripwire” (pointing down at the door opening) and an LR “room view” thermal array pointed into the room, for determining both transient and steady-state phase of occupancy. Additionally, if the door sensors 34 collect only LR thermal data, they are generally suitable for privacy-sensitive areas such as restrooms etc.), (Id., ¶ 4, sensing and control apparatus are disclosed for use in an HVAC system of a building. The apparatus includes a plurality of sensors including interior sensors and boundary sensors, the sensors generating respective sensor signals conveying occupancy-related features (discloses extracting features based on the movement dataset) for an area of the building. In one example the sensors include cameras in interior areas and low-resolution thermal sensors at ingress/egress points. The occupancy-related features may be specific aspects of camera images, or signal levels from the thermal sensors, that can be processed to arrive at an estimate of occupancy), (Id., ¶ 24, the local equipment controller 62 may convert the occupancy estimate 66 into a corresponding fraction of maximum occupancy, and control airflow accordingly. 
Thus if the occupancy is at 50% of maximum, for example, the local-area airflow is adjusted to 50% of maximum airflow. As previously indicated, the local equipment controller 62 may also communicate with the central controller 26 in support of broader system-level control);
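The occupancy-proportional airflow control in the quoted ¶ 24 (50% of maximum occupancy yields 50% of maximum airflow) can be sketched as a one-line scaling; the function name, units, and clamping behavior are assumptions for illustration.

```python
# Sketch of occupancy-proportional airflow control per the quoted ¶ 24:
# convert the occupancy estimate into a fraction of maximum occupancy and
# command the same fraction of maximum airflow. Clamping to [0, 1] is an
# assumption, not a stated detail of the reference.

def airflow_setpoint(occupancy_estimate, max_occupancy, max_airflow_cfm):
    """Scale airflow to the occupied fraction of the room."""
    fraction = min(max(occupancy_estimate / max_occupancy, 0.0), 1.0)
    return fraction * max_airflow_cfm

setpoint = airflow_setpoint(25, 50, 1000.0)  # 50% occupancy -> 50% airflow
```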
Through KSR Rationale D (see MPEP 2143(I)(D)), Konrad renders obvious … determine, via execution of a second machine learning model, a dynamic configuration of one or more physical resources associated with a physical building space based on the physical resource allocation.
First, Konrad discloses machine learning modeling techniques for determining occupancy of a building (Konrad, ¶ 4, sensing and control apparatus are disclosed for use in an HVAC system of a building. The apparatus includes a plurality of sensors including interior sensors and boundary sensors, the sensors generating respective sensor signals conveying occupancy-related features (discloses extracting features based on the movement dataset) for an area of the building. In one example the sensors include cameras in interior areas and low-resolution thermal sensors at ingress/egress points. The occupancy-related features may be specific aspects of camera images, or signal levels from the thermal sensors, that can be processed to arrive at an estimate of occupancy. The apparatus further includes a controller configured and operative in response to the sensor signals to produce an occupancy estimate for the area and to generate equipment-control signals to cause the HVAC system to supply conditioned air to the area based on the occupancy estimate. The controller generally includes one or more fusion systems collectively generating the occupancy estimate by corresponding fusion calculations, the fusion systems including a first fusion system producing a boundary occupancy-count change based on sensor signals from the boundary sensors, a second fusion system producing an interior occupancy count based on sensor signals from the interior sensors, and a third fusion system producing the occupancy estimate based on one or both of the boundary occupancy-count change and the interior occupancy count. Fusion may be of one or multiple types including cross-modality fusion across different sensor types, within-modality fusion across different instances of same-type sensors, and cross-algorithm fusion using different algorithms to generates respective estimates for the same sensor(s). 
Use of the occupancy sensing system can help to deliver desired occupancy-sensitive performance of the HVAC system, specifically the attainment of a desired energy savings without an undue incidence of undesirable under-ventilation), (Id., ¶ 32, at the time of commissioning a system the camera/sensor installation height needs to be provided. Alternatively, a precise calibration pattern can be placed directly under a camera/sensor and a self-calibration operation is performed. With the installation height known, a corresponding pixel-to-density map can be used algorithmically to provide accurate occupancy estimates. The system may employ data-driven or “machine-learning” methods which can provide robustness against real-world variability without a physical model, but it is preferred that such methods be kept simple and not require re-training in new environments. In some cases machine learning is used only for offline training of counting and fusion algorithms, and only a system is fine-tuned in real time through self-calibration to a new environment by setting certain global parameters, e.g., room height, spacing between units, etc), (Id., ¶ 37, Two known approaches to estimating occupancy level are (1) detecting and then counting human bodies, and (2) estimating number based on detected changes in a camera field of view (FOV). Recent occupancy sensing methods via human-body counting include: full-body detection using Haar features and ADABOOST, head counting using Harr or HOG (Histogram of Gradients) features and SVM classification (discloses extracting features of the movement dataset using machine learning), and head counting using Convolutional Neural Networks (CNNs). These methods show great robustness to variations in body size and orientation. Shallow CNNs may suffice (for body/non-body binary output) and could run on a low-power mobile platform. 
As for crowd-density estimation, algorithms are known that are based on image gradient changes followed by SVM, full-image CNNs, and a wealth of approaches at pixel, texture or object level).
Further, Konrad discloses dynamic configuration of resources in a building based on a resource allocation (Konrad, ¶ 34, To estimate the (quasi) steady-state occupancy, in one example panoramic, overhead, high-resolution, low-cost CMOS cameras are used, which provide a wide field of view with minimal occlusions, while also being widely available and relatively inexpensive. OSSY preferably employs accurate, real-time (at HVAC time scale) algorithms for occupant counting using panoramic video frames. A fundamental block in many occupancy sensing algorithms is change detection (also referred to as background subtraction), which identifies areas of a video frame that have changed in relation to some background model, e.g., view of an empty room), (Id., ¶ 13, An Occupancy Sensing SYstem (OSSY) generates an estimate of the number of occupants in an area of a building, and uses the estimate for system purposes such as adjusting a rate of ventilation air flow to be tailored for the estimated occupancy. In some applications the building may be a commercial venue and include for example offices, conference rooms, large classrooms or conference rooms, and very large colloquium rooms. The system may be used with a variety of other building times. The system is inherently scalable to support a wide range of room sizes, from small offices to large meeting halls. This is a byproduct of a modular architecture enabling the addition of new units and seamlessly fusing their occupancy estimates with existing ones, thereby expanding coverage. The system can deliver robust performance by fusing information from multiple sensor modalities (e.g., wide-area, overhead sensing using panoramic cameras and local, entryway sensing using low-resolution thermal sensors) and from different algorithms (e.g., body counting versus crowd-density estimation). 
The system can be privacy-adaptive, using entryway sensors that collect only low-resolution, thermal data, facilitating deployment in bathrooms, changing rooms, etc. It may also be cost-effective by minimizing the number of sensors needed and, therefore, the cost of installation), (Id., ¶ 16, FIG. 1 is a general block diagram of an HVAC system employing occupancy sensing, i.e., explicitly estimating the number of people in a local area 10 and adjusting the operation of the HVAC system accordingly, to provide heating or cooling both sufficiently (i.e., meeting standards of temperature regulation and adequate ventilation, based on occupancy) and efficiently (i.e., using only an appropriate proportion of maximum ventilation capacity and avoiding wasteful over-ventilation). In practice, a local area 10 may be a single room or a collection of rooms, and the room(s) may be small or large. In some cases a local area 10 may correspond to part or all of a “zone” as that term is conventionally understood in HVAC systems, while in other cases it may span multiple zones in whole or in part).
One of ordinary skill in the art would have recognized that applying the known machine learning technique of Konrad to the dynamic resource configuration step would have yielded predictable results, because the level of ordinary skill demonstrated by the applied reference shows the ability to incorporate such resource optimization features into similar energy efficiency systems. Further, those of ordinary skill in the art would have recognized that applying the machine learning technique to the dynamic resource configuration step results in an improved system, permitting faster and more optimal adjustments to resource allocations based on observed object movements within a building.
Thus, through KSR Rationale D (see MPEP 2143(I)(D)), Konrad renders obvious …determine, via execution of a second machine learning model, a dynamic configuration of one or more physical resources associated with a physical building space based on the physical resource allocation.
Konrad further discloses …and dynamically configure usage of one or more physical resources associated with a physical building space based on the physical resource allocation (Konrad, ¶ 34, To estimate the (quasi) steady-state occupancy, in one example panoramic, overhead, high-resolution, low-cost CMOS cameras are used, which provide a wide field of view with minimal occlusions, while also being widely available and relatively inexpensive. OSSY preferably employs accurate, real-time (at HVAC time scale) algorithms for occupant counting using panoramic video frames. A fundamental block in many occupancy sensing algorithms is change detection (also referred to as background subtraction), which identifies areas of a video frame that have changed in relation to some background model, e.g., view of an empty room), (Id., ¶ 13, An Occupancy Sensing SYstem (OSSY) generates an estimate of the number of occupants in an area of a building, and uses the estimate for system purposes such as adjusting a rate of ventilation air flow to be tailored for the estimated occupancy. In some applications the building may be a commercial venue and include for example offices, conference rooms, large classrooms or conference rooms, and very large colloquium rooms. The system may be used with a variety of other building times. The system is inherently scalable to support a wide range of room sizes, from small offices to large meeting halls. This is a byproduct of a modular architecture enabling the addition of new units and seamlessly fusing their occupancy estimates with existing ones, thereby expanding coverage. The system can deliver robust performance by fusing information from multiple sensor modalities (e.g., wide-area, overhead sensing using panoramic cameras and local, entryway sensing using low-resolution thermal sensors) and from different algorithms (e.g., body counting versus crowd-density estimation). 
The system can be privacy-adaptive, using entryway sensors that collect only low-resolution, thermal data, facilitating deployment in bathrooms, changing rooms, etc. It may also be cost-effective by minimizing the number of sensors needed and, therefore, the cost of installation), (Id., ¶ 16, FIG. 1 is a general block diagram of an HVAC system employing occupancy sensing, i.e., explicitly estimating the number of people in a local area 10 and adjusting the operation of the HVAC system accordingly, to provide heating or cooling both sufficiently (i.e., meeting standards of temperature regulation and adequate ventilation, based on occupancy) and efficiently (i.e., using only an appropriate proportion of maximum ventilation capacity and avoiding wasteful over-ventilation). In practice, a local area 10 may be a single room or a collection of rooms, and the room(s) may be small or large. In some cases a local area 10 may correspond to part or all of a “zone” as that term is conventionally understood in HVAC systems, while in other cases it may span multiple zones in whole or in part);
wherein the physical building space comprises a HVAC system, and wherein dynamically configuring the usage of the one or more physical resources comprises dynamically configuring a first zone of the HVAC system to operate and a second zone of the HVAC system to not operate (Id., ¶ 4, sensing and control apparatus are disclosed for use in an HVAC system of a building. The apparatus includes a plurality of sensors including interior sensors and boundary sensors, the sensors generating respective sensor signals conveying occupancy-related features for an area of the building. In one example the sensors include cameras in interior areas and low-resolution thermal sensors at ingress/egress points. The occupancy-related features may be specific aspects of camera images, or signal levels from the thermal sensors, that can be processed to arrive at an estimate of occupancy. The apparatus further includes a controller configured and operative in response to the sensor signals to produce an occupancy estimate for the area and to generate equipment-control signals to cause the HVAC system to supply conditioned air to the area based on the occupancy estimate. The controller generally includes one or more fusion systems collectively generating the occupancy estimate by corresponding fusion calculations, the fusion systems including a first fusion system producing a boundary occupancy-count change based on sensor signals from the boundary sensors, a second fusion system producing an interior occupancy count based on sensor signals from the interior sensors, and a third fusion system producing the occupancy estimate based on one or both of the boundary occupancy-count change and the interior occupancy count. 
Fusion may be of one or multiple types including cross-modality fusion across different sensor types, within-modality fusion across different instances of same-type sensors, and cross-algorithm fusion using different algorithms to generates respective estimates for the same sensor(s). Use of the occupancy sensing system can help to deliver desired occupancy-sensitive performance of the HVAC system, specifically the attainment of a desired energy savings without an undue incidence of undesirable under-ventilation), (Id., ¶ 76, To determine the HVAC energy savings that can be achieved with the occupancy sensing system, a data-driven energy savings model based on building HVAC equipment specifications, current air supply levels, and actual building-use data obtained in the validation study. Table 3 below shows example airflow estimates that might be obtained using a Ventilation Airflow Model (VAM). While this model is representative of education and research environments in particular, many aspects of commercial office buildings are also represented in this example including offices, conference rooms, and large meeting spaces. This model includes air required as a function of both area (resulting in fixed airflow) and variable occupancy (as per ASHRAE 62.1-2013), so that the average yearly occupancy does not directly determine HVAC energy and cost reduction. This analysis indicates that airflow and HVAC energy use can be reduced by 39% if accurate occupancy data were available. In some cases depending on the exact nature and use of the building, there may be potential for even greater reduction), (Id., ¶ 19, The remaining description elaborates primarily certain structural and functional details of components involved in occupancy estimation, i.e., the sensors 16 and local-area controller 14. In typical applications today, systems are limited to a binary occupied/unoccupied decision and operation. 
While such operation is an improvement over older systems by reducing idle ventilation, the system described herein can extend energy savings by delivering a more fine-grained air volume control over a range of room sizes, achieving greater efficiency without sacrificing ventilation quality), (Id., ¶ 50, Fusion systems 1, 2 and 3 make use of parametric or non-parametric, linear or non-linear systems. Fusion systems 1, 2 and 3 take into account the rate at which the number of occupants in zone is changing, specifically whether it is changing rapidly (transient state) or sporadically (quasi steady state), and accordingly diminishing the influence of the boundary count or interior count, respectively, towards the estimation of the total number of occupants. [0051] 5. A system to continuously maximize energy savings for zone based on zone type or on current, recent, or historical estimates of number of occupants in zone while simultaneously not exceeding a maximum failure rate which can be specified. This is accomplished by scaling the estimate of the number of occupants in zone at each time instant by an overestimation factor greater than or equal to one based on zone type or current, recent, or historical estimates of number of occupants in zone).
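The overestimation scaling described in the quoted ¶¶ 50-51 (scaling the occupancy estimate by a factor greater than or equal to one to trade energy savings against under-ventilation risk) can be sketched as below. Rounding up to whole occupants and the input validation are assumptions added for the sketch.

```python
# Sketch of the overestimation scaling in the quoted paragraphs: the
# occupancy estimate is multiplied by a factor >= 1 (chosen per zone type
# or from current/recent/historical estimates) so that energy savings are
# maximized without exceeding a specified failure rate. Rounding up to a
# whole occupant count is an assumption, not a detail from the reference.
import math

def scaled_occupancy(estimate, overestimation_factor):
    """Apply an overestimation factor >= 1 and round up to whole occupants."""
    if overestimation_factor < 1.0:
        raise ValueError("overestimation factor must be >= 1")
    return math.ceil(estimate * overestimation_factor)
```

A factor of exactly 1.0 leaves the estimate unchanged; larger factors bias the system toward over-ventilation, lowering the under-ventilation failure rate at some energy cost.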
While suggested in at least Fig. 2 and related texts, Konrad does not explicitly disclose …wherein the moving objects of the first dataset are external to the physical building space;
However, Elias discloses …wherein the moving objects of the first dataset are external to the physical building space (Elias, ¶ 2, This disclosure relates to sensing and monitoring, and more specifically to sensing and monitoring certain spaces and areas for human occupancy. This disclosure is also related to sensing and monitoring movement or changes in an environment, including movement by animals and objects. This disclosure is also related to using sensor and/or monitor information to control certain systems within commercial and residential facilities, including, but not limited to; heating, cooling, ventilation, security, lighting, power, and entertainment systems and the like. This disclosure also may be used to determine human occupancy in outdoor spaces and to control certain outdoor systems including, but not limited to; heating, cooling, ventilation, security, lighting, power, and entertainment systems and the like), (Id., ¶ 115, While this disclosure has described a sensor system 100 inside or outside a building, the sensor system 100 can be applied to other types of scenarios and other space(s) 105. For example only, the disclosed sensor systems 100 could be used to detect the presence of humans in disaster scenarios such as collapsed buildings, caves, mines and the like. In such scenarios, the sensor systems 100 could be used to determine if and how many humans are breathing and at what rate their hearts are beating. Likewise, the sensor systems 100 could be used to determine if and how many humans might be hidden in an enclosure during a hostage or kidnapping situation and may determine if and how many humans are enclosed in a container such as a shipping crate, a trucking crate, below deck on a boat, and the like. In addition to human presence, the sensor systems 100 could be used to monitor the health of humans and/or animals in an area. 
For example only, this disclosure could generate an output signal that is related to the breathing rate and or heartrate of any living beings within a space 105. Such sensor systems 100 could be used to monitor the breathing of babies and protect against sudden infant death syndrome. Such systems could also monitor the sleeping of people with sleep apnea and sound an alarm or adjust a bed or environmental setting if a person’s breathing becomes too erratic or stops).
It would have been obvious to a person of ordinary skill in the art before the effective filing date to have modified the resource allocation and machine learning elements of Konrad to include the exterior (outdoor) occupancy sensing elements of Elias in the analogous art of occupancy sensing and building-system control, for the same reasons as stated for claim 1.
Regarding Claims 12-13, these claims recite limitations substantially similar to those in claims 3-4, respectively, and are rejected for the same reasons as stated above.
Regarding Claim 14, the combination of Konrad and Elias discloses …The system according to claim 13…
Konrad further discloses …wherein the operations further comprise: obtain data representative of the configuration of the one or more physical resources associated with the physical building space based on the second period of time (Konrad, ¶ 34, To estimate the (quasi) steady-state occupancy, in one example panoramic, overhead, high-resolution, low-cost CMOS cameras are used, which provide a wide field of view with minimal occlusions, while also being widely available and relatively inexpensive. OSSY preferably employs accurate, real-time (at HVAC time scale) algorithms for occupant counting using panoramic video frames. A fundamental block in many occupancy sensing algorithms is change detection (also referred to as background subtraction), which identifies areas of a video frame that have changed in relation to some background model, e.g., view of an empty room), (Id., ¶ 13, An Occupancy Sensing SYstem (OSSY) generates an estimate of the number of occupants in an area of a building, and uses the estimate for system purposes such as adjusting a rate of ventilation air flow to be tailored for the estimated occupancy. In some applications the building may be a commercial venue and include for example offices, conference rooms, large classrooms or conference rooms, and very large colloquium rooms. The system may be used with a variety of other building times. The system is inherently scalable to support a wide range of room sizes, from small offices to large meeting halls. This is a byproduct of a modular architecture enabling the addition of new units and seamlessly fusing their occupancy estimates with existing ones, thereby expanding coverage. The system can deliver robust performance by fusing information from multiple sensor modalities (e.g., wide-area, overhead sensing using panoramic cameras and local, entryway sensing using low-resolution thermal sensors) and from different algorithms (e.g., body counting versus crowd-density estimation). 
The system can be privacy-adaptive, using entryway sensors that collect only low-resolution, thermal data, facilitating deployment in bathrooms, changing rooms, etc. It may also be cost-effective by minimizing the number of sensors needed and, therefore, the cost of installation), (Id., ¶ 16, FIG. 1 is a general block diagram of an HVAC system employing occupancy sensing, i.e., explicitly estimating the number of people in a local area 10 and adjusting the operation of the HVAC system accordingly, to provide heating or cooling both sufficiently (i.e., meeting standards of temperature regulation and adequate ventilation, based on occupancy) and efficiently (i.e., using only an appropriate proportion of maximum ventilation capacity and avoiding wasteful over-ventilation). In practice, a local area 10 may be a single room or a collection of rooms, and the room(s) may be small or large. In some cases a local area 10 may correspond to part or all of a “zone” as that term is conventionally understood in HVAC systems, while in other cases it may span multiple zones in whole or in part).
Through KSR Rationale D (See MPEP 2141(III)(D)), Konrad discloses … determine, via execution of a second machine learning model, a dynamic configuration of one or more physical resources associated with a physical building space based on the physical resource allocation.
First, Konrad discloses machine learning modeling techniques for determining occupancy of a building (Konrad, ¶ 4, sensing and control apparatus are disclosed for use in an HVAC system of a building. The apparatus includes a plurality of sensors including interior sensors and boundary sensors, the sensors generating respective sensor signals conveying occupancy-related features for an area of the building. In one example the sensors include cameras in interior areas and low-resolution thermal sensors at ingress/egress points. The occupancy-related features may be specific aspects of camera images, or signal levels from the thermal sensors, that can be processed to arrive at an estimate of occupancy. The apparatus further includes a controller configured and operative in response to the sensor signals to produce an occupancy estimate for the area and to generate equipment-control signals to cause the HVAC system to supply conditioned air to the area based on the occupancy estimate. The controller generally includes one or more fusion systems collectively generating the occupancy estimate by corresponding fusion calculations, the fusion systems including a first fusion system producing a boundary occupancy-count change based on sensor signals from the boundary sensors, a second fusion system producing an interior occupancy count based on sensor signals from the interior sensors, and a third fusion system producing the occupancy estimate based on one or both of the boundary occupancy-count change and the interior occupancy count. Fusion may be of one or multiple types including cross-modality fusion across different sensor types, within-modality fusion across different instances of same-type sensors, and cross-algorithm fusion using different algorithms to generates respective estimates for the same sensor(s). 
Use of the occupancy sensing system can help to deliver desired occupancy-sensitive performance of the HVAC system, specifically the attainment of a desired energy savings without an undue incidence of undesirable under-ventilation), (Id., ¶ 32, at the time of commissioning a system the camera/sensor installation height needs to be provided. Alternatively, a precise calibration pattern can be placed directly under a camera/sensor and a self-calibration operation is performed. With the installation height known, a corresponding pixel-to-density map can be used algorithmically to provide accurate occupancy estimates. The system may employ data-driven or “machine-learning” methods which can provide robustness against real-world variability without a physical model, but it is preferred that such methods be kept simple and not require re-training in new environments. In some cases machine learning is used only for offline training of counting and fusion algorithms, and only a system is fine-tuned in real time through self-calibration to a new environment by setting certain global parameters, e.g., room height, spacing between units, etc), (Id., ¶ 37, Two known approaches to estimating occupancy level are (1) detecting and then counting human bodies, and (2) estimating number based on detected changes in a camera field of view (FOV). Recent occupancy sensing methods via human-body counting include: full-body detection using Haar features and ADABOOST, head counting using Harr or HOG (Histogram of Gradients) features and SVM classification (discloses extracting features of the movement dataset using machine learning), and head counting using Convolutional Neural Networks (CNNs). These methods show great robustness to variations in body size and orientation. Shallow CNNs may suffice (for body/non-body binary output) and could run on a low-power mobile platform. 
As for crowd-density estimation, algorithms are known that are based on image gradient changes followed by SVM, full-image CNNs, and a wealth of approaches at pixel, texture or object level).
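The body/non-body binary classification stage referenced in ¶ 37 (HOG features followed by SVM classification, or a shallow CNN) can be illustrated with a minimal stand-in. The synthetic feature vectors and the perceptron learner below are hypothetical simplifications for this sketch; Konrad's pipeline uses real image features and SVM/CNN classifiers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for HOG-style feature vectors: "body" windows
# cluster around one mean, "non-body" windows around another.
X_body = rng.normal(loc=1.0, scale=0.3, size=(50, 8))
X_back = rng.normal(loc=-1.0, scale=0.3, size=(50, 8))
X = np.vstack([X_body, X_back])
y = np.array([1] * 50 + [-1] * 50)

# Perceptron training: a minimal linear classifier standing in for the
# SVM classification stage of the body-counting pipeline.
w, b = np.zeros(8), 0.0
for _ in range(20):
    for xi, yi in zip(X, y):
        if yi * (xi @ w + b) <= 0:
            w += yi * xi
            b += yi

pred = np.sign(X @ w + b)
print((pred == y).mean())  # well-separated clusters classify cleanly
```

Counting the windows classified as "body" would then yield the occupancy estimate for the frame.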
Further, Konrad discloses dynamic configuration of resources in a building based on a resource allocation (Konrad, ¶ 34, To estimate the (quasi) steady-state occupancy, in one example panoramic, overhead, high-resolution, low-cost CMOS cameras are used, which provide a wide field of view with minimal occlusions, while also being widely available and relatively inexpensive. OSSY preferably employs accurate, real-time (at HVAC time scale) algorithms for occupant counting using panoramic video frames. A fundamental block in many occupancy sensing algorithms is change detection (also referred to as background subtraction), which identifies areas of a video frame that have changed in relation to some background model, e.g., view of an empty room), (Id., ¶ 13, An Occupancy Sensing SYstem (OSSY) generates an estimate of the number of occupants in an area of a building, and uses the estimate for system purposes such as adjusting a rate of ventilation air flow to be tailored for the estimated occupancy. In some applications the building may be a commercial venue and include for example offices, conference rooms, large classrooms or conference rooms, and very large colloquium rooms. The system may be used with a variety of other building times. The system is inherently scalable to support a wide range of room sizes, from small offices to large meeting halls. This is a byproduct of a modular architecture enabling the addition of new units and seamlessly fusing their occupancy estimates with existing ones, thereby expanding coverage. The system can deliver robust performance by fusing information from multiple sensor modalities (e.g., wide-area, overhead sensing using panoramic cameras and local, entryway sensing using low-resolution thermal sensors) and from different algorithms (e.g., body counting versus crowd-density estimation). 
The system can be privacy-adaptive, using entryway sensors that collect only low-resolution, thermal data, facilitating deployment in bathrooms, changing rooms, etc. It may also be cost-effective by minimizing the number of sensors needed and, therefore, the cost of installation), (Id., ¶ 16, FIG. 1 is a general block diagram of an HVAC system employing occupancy sensing, i.e., explicitly estimating the number of people in a local area 10 and adjusting the operation of the HVAC system accordingly, to provide heating or cooling both sufficiently (i.e., meeting standards of temperature regulation and adequate ventilation, based on occupancy) and efficiently (i.e., using only an appropriate proportion of maximum ventilation capacity and avoiding wasteful over-ventilation). In practice, a local area 10 may be a single room or a collection of rooms, and the room(s) may be small or large. In some cases a local area 10 may correspond to part or all of a “zone” as that term is conventionally understood in HVAC systems, while in other cases it may span multiple zones in whole or in part).
One of ordinary skill in the art would have recognized that applying the known machine learning technique of Konrad to the dynamic resource configuration step would have yielded predictable results and resulted in an improved system, because the level of ordinary skill in the art demonstrated by the applied reference shows the ability to incorporate such resource optimization features into similar energy efficiency systems. Further, those of ordinary skill in the art would have recognized that applying the machine learning technique to the dynamic resource configuration step results in an improved system that allows faster and more optimal adjustments to resource allocations based on observed object movements within a building.
Thus, through KSR Rationale D (See MPEP 2141(III)(D)), Konrad discloses …extract a third set of feature values based on data representative of the configuration of the usage of the one or more physical resources; and training the second machine learning model based on the reference dataset; wherein the reference dataset further comprises the third set of feature values.
Regarding Claim 17, Konrad discloses … A computer program product embodied on one or more non-transitory computer readable media having stored thereon instructions that are executable by one or more processors to cause the computer program product to perform operations comprising: a first dataset indicative of a movement of objects associated with a first time period from a first computing device (Konrad, ¶ 23, Application software may be stored on a non-transitory computer-readable medium such as an optical or magnetic disk, Flash memory or other non-volatile semiconductor memory, etc., from which it is retrieved for execution by the processing circuitry, as generally known in the art), (Id., ¶ 30, Assuming that the system operates based on a given rate of occupancy estimation, such as once per minute for example, the SCNs 42 aggregate data and respond within such a time period. The sensing units preferably acquire data at a rate compatible with occupancy variations (cameras 32) or body speed (door sensors 34) (discloses obtaining movement data associated with a time period) to minimize the potential for aliasing. Since cameras 32 are responsible for steady-state occupancy data, a frame rate of about 1 Hz should be adequate. A 3.0 MPixel panoramic camera typically produces a bit rate of about 10 Mb/s for high-quality 30 Hz video using H.264/AVC compression, but this rate would drop to about 330 Kb/s at 1 Hz. Multiple cameras can be easily supported by WiFi or wired Ethernet (CAT5 in legacy and CAT6 in new buildings). Use of PoE, providing DC power, can additionally reduce installation costs, and is supported by CAT5 wiring. To assure accurate ingress/egress detection, door sensors 34 preferably sample at 10-20 Hz, but at 16×4 resolution this would result in no more than 40 Kb/s of uncompressed data rate. 
This rate is compatible with lower-rate communications connections such as ZigBee, although it may be preferred to use WiFi or wired Ethernet for commonality with the cameras 32), (Id., ¶ 34, To estimate the (quasi) steady-state occupancy, in one example panoramic, overhead, high-resolution, low-cost CMOS cameras are used, which provide a wide field of view with minimal occlusions, while also being widely available and relatively inexpensive. OSSY preferably employs accurate, real-time (at HVAC time scale) algorithms for occupant counting using panoramic video frames. A fundamental block in many occupancy sensing algorithms is change detection (also referred to as background subtraction), which identifies areas of a video frame that have changed in relation to some background model, e.g., view of an empty room), (Id., ¶ 22, FIG. 4 shows a local area 10 with a more structural focus, including a variable air volume (VAV) box 40 as an example of local-area equipment 12 (FIG. 1), and a shared computing node (SCN) 42 as an example of a local-area controller 14. Also shown is a separate building automation system (BAS) 44 as an example of a central controller 26 (also FIG. 1), and communications connections 46 between the SCN 42 and the sensors 32, 34 as well as the BAS 44. The connections 46 may be realized in various ways including as wireless connections (e.g., WiFi) and/or wired connections such as a Ethernet, either powered (PoE) or unpowered. The system may be realized as a standalone system (i.e., not connected to an external network or “cloud”), with one or more SCNs 42 providing data processing and fusion for multiple venues in the same control zone);
[Image: media_image1.png (greyscale)]
extract, via execution of a first machine learning model, a first set of feature values based on the first dataset (Id., ¶ 4, sensing and control apparatus are disclosed for use in an HVAC system of a building. The apparatus includes a plurality of sensors including interior sensors and boundary sensors, the sensors generating respective sensor signals conveying occupancy-related features (discloses extracting features based on the movement dataset) for an area of the building. In one example the sensors include cameras in interior areas and low-resolution thermal sensors at ingress/egress points. The occupancy-related features may be specific aspects of camera images, or signal levels from the thermal sensors, that can be processed to arrive at an estimate of occupancy. The apparatus further includes a controller configured and operative in response to the sensor signals to produce an occupancy estimate for the area and to generate equipment-control signals to cause the HVAC system to supply conditioned air to the area based on the occupancy estimate. The controller generally includes one or more fusion systems collectively generating the occupancy estimate by corresponding fusion calculations, the fusion systems including a first fusion system producing a boundary occupancy-count change based on sensor signals from the boundary sensors, a second fusion system producing an interior occupancy count based on sensor signals from the interior sensors, and a third fusion system producing the occupancy estimate based on one or both of the boundary occupancy-count change and the interior occupancy count. Fusion may be of one or multiple types including cross-modality fusion across different sensor types, within-modality fusion across different instances of same-type sensors, and cross-algorithm fusion using different algorithms to generates respective estimates for the same sensor(s). 
Use of the occupancy sensing system can help to deliver desired occupancy-sensitive performance of the HVAC system, specifically the attainment of a desired energy savings without an undue incidence of undesirable under-ventilation), (Id., ¶ 32, at the time of commissioning a system the camera/sensor installation height needs to be provided. Alternatively, a precise calibration pattern can be placed directly under a camera/sensor and a self-calibration operation is performed. With the installation height known, a corresponding pixel-to-density map can be used algorithmically to provide accurate occupancy estimates. The system may employ data-driven or “machine-learning” methods which can provide robustness against real-world variability without a physical model, but it is preferred that such methods be kept simple and not require re-training in new environments. In some cases machine learning is used only for offline training of counting and fusion algorithms, and only a system is fine-tuned in real time through self-calibration to a new environment by setting certain global parameters, e.g., room height, spacing between units, etc), (Id., ¶ 37, Two known approaches to estimating occupancy level are (1) detecting and then counting human bodies, and (2) estimating number based on detected changes in a camera field of view (FOV). Recent occupancy sensing methods via human-body counting include: full-body detection using Haar features and ADABOOST, head counting using Harr or HOG (Histogram of Gradients) features and SVM classification (discloses extracting features of the movement dataset using machine learning), and head counting using Convolutional Neural Networks (CNNs). These methods show great robustness to variations in body size and orientation. Shallow CNNs may suffice (for body/non-body binary output) and could run on a low-power mobile platform. 
As for crowd-density estimation, algorithms are known that are based on image gradient changes followed by SVM, full-image CNNs, and a wealth of approaches at pixel, texture or object level);
determine a density of the moving objects based on the first set of feature values (Id., ¶ 31, Another factor is the configuration or “commissioning” of a system into operation. To support a variety of venue configurations, it is preferable that algorithms be agnostic to configuration variations, e.g., camera/sensor installation height, room size and shape. In the case of human-body counting, the camera installation height and room size affect a projected body size and, therefore, call for a scale-invariant human-body detector, which is a problem considered to have been solved. In the case of crowd density estimation from a panoramic camera, every pixel contributes in some proportion to a body count but this proportion is dependent on pixel location on the sensor (e.g., a pixel in the middle of a sensor, parallel to room's floor, will occupy a smaller fraction of human head, than a pixel at sensor's periphery, due to lens properties). However, the knowledge of intrinsic camera parameters, such as sensor size and resolution, focal length, lens diameter and barrel distortion, can be used to establish a relationship between pixel location and its contribution to crowd density (pixel-to-density mapping), very much like in methods to de-warp a fisheye image for visualization. Alternatively, a pixel-to-density mapping can be obtained experimentally in a room of maximum permissible size for various installation heights and camera models, and stored in a look-up table to use during deployment, thus making a crowd density estimation algorithm agnostic to camera installation height and room size. A similar mapping can be obtained for LR thermal sensors (both “tripwire” and room-view). 
Additionally, some thermal sensors such as Melexis sensors are available with different lenses (40°, 60°, 120° FOVs) allowing to match them to different combinations of room height and door width), (Id., ¶ 32, at the time of commissioning a system the camera/sensor installation height needs to be provided. Alternatively, a precise calibration pattern can be placed directly under a camera/sensor and a self-calibration operation is performed. With the installation height known, a corresponding pixel-to-density map can be used algorithmically to provide accurate occupancy estimates. The system may employ data-driven or “machine-learning” methods which can provide robustness against real-world variability without a physical model, but it is preferred that such methods be kept simple and not require re-training in new environments. In some cases machine learning is used only for offline training of counting and fusion algorithms, and only a system is fine-tuned in real time through self-calibration to a new environment by setting certain global parameters, e.g., room height, spacing between units, etc), (Id., ¶ 37, Two known approaches to estimating occupancy level are (1) detecting and then counting human bodies, and (2) estimating number based on detected changes in a camera field of view (FOV). Recent occupancy sensing methods via human-body counting include: full-body detection using Haar features and ADABOOST, head counting using Harr or HOG (Histogram of Gradients) features and SVM classification, and head counting using Convolutional Neural Networks (CNNs). These methods show great robustness to variations in body size and orientation. Shallow CNNs may suffice (for body/non-body binary output) and could run on a low-power mobile platform. As for crowd-density estimation, algorithms are known that are based on image gradient changes followed by SVM, full-image CNNs, and a wealth of approaches at pixel, texture or object level);
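The pixel-to-density mapping described in ¶ 31, in which each changed pixel contributes to the crowd-density estimate in a proportion dependent on its sensor location, can be sketched as follows. The map values, array sizes, and function name are hypothetical for illustration; in practice the map would be derived from camera intrinsics or a stored look-up table as the reference describes:

```python
import numpy as np

# Hypothetical 4x4 pixel-to-density map: peripheral pixels contribute
# more per pixel than central ones, reflecting lens geometry (¶ 31).
pixel_to_density = np.full((4, 4), 0.02)
pixel_to_density[0, :] = pixel_to_density[-1, :] = 0.05
pixel_to_density[:, 0] = pixel_to_density[:, -1] = 0.05

def crowd_density(changed_mask, p2d=pixel_to_density):
    """Estimate occupancy as the sum of per-pixel density
    contributions over pixels flagged as changed."""
    return float((changed_mask * p2d).sum())

mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True            # 4 central pixels changed
print(crowd_density(mask))       # ≈ 0.08 (4 central pixels × 0.02)
```

Because the mapping depends only on installation height and camera model, storing it in a look-up table keeps the density algorithm agnostic to room size, as the reference notes.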
determine a physical resource allocation based on the first set of feature values and a reference dataset (Id., ¶ 13, An Occupancy Sensing SYstem (OSSY) generates an estimate of the number of occupants in an area of a building, and uses the estimate for system purposes such as adjusting a rate of ventilation air flow to be tailored for the estimated occupancy. In some applications the building may be a commercial venue and include for example offices, conference rooms, large classrooms or conference rooms, and very large colloquium rooms. The system may be used with a variety of other building times. The system is inherently scalable to support a wide range of room sizes, from small offices to large meeting halls. This is a byproduct of a modular architecture enabling the addition of new units and seamlessly fusing their occupancy estimates with existing ones, thereby expanding coverage. The system can deliver robust performance by fusing information from multiple sensor modalities (e.g., wide-area, overhead sensing using panoramic cameras and local, entryway sensing using low-resolution thermal sensors) and from different algorithms (e.g., body counting versus crowd-density estimation). The system can be privacy-adaptive, using entryway sensors that collect only low-resolution, thermal data, facilitating deployment in bathrooms, changing rooms, etc. It may also be cost-effective by minimizing the number of sensors needed and, therefore, the cost of installation), (Id., ¶ 20, FIG. 2 illustrates an aspect of the disclosed approach that can facilitate system scalability while supporting multiple occupancy-sensing modalities, for an area shown as a “unit volume” 30 such as a room. Two distinct types of sensor nodes may deployed in various combinations: interior sensors such as high-resolution (HR) panoramic overhead cameras 32 for wide-area monitoring, and boundary sensors such as low-resolution (LR) thermal sensors 34 located at doorways for ingress/egress detection. 
The use of panoramic cameras 32 can help minimize the number of sensors needed, thus reducing installation costs while still supporting scalability to large-size venues. The door sensors 34 may serve several roles. First, they provide transient phase data for fusion (discloses reference dataset) with steady-state occupancy data from the overhead cameras 32 or other interior sensors when used. For this purpose, in some cases a door sensor 34 may be as simple as a “tripwire”, shown as a “T Door Sensor 36”, that only detects ingress/egress. Such a tripwire sensor 36 may employ low-resolution (LR) thermal sensing for example. Secondly, in small-venue scenarios where panoramic cameras 32 are not used, the door sensors 34 may be realized as TRV door sensors 38 equipped with both an LR thermal “tripwire” (pointing down at the door opening) and an LR “room view” thermal array pointed into the room, for determining both transient and steady-state phase of occupancy. Additionally, if the door sensors 34 collect only LR thermal data, they are generally suitable for privacy-sensitive areas such as restrooms etc.), (Id., ¶ 4, sensing and control apparatus are disclosed for use in an HVAC system of a building. The apparatus includes a plurality of sensors including interior sensors and boundary sensors, the sensors generating respective sensor signals conveying occupancy-related features (discloses extracting features based on the movement dataset) for an area of the building. In one example the sensors include cameras in interior areas and low-resolution thermal sensors at ingress/egress points. The occupancy-related features may be specific aspects of camera images, or signal levels from the thermal sensors, that can be processed to arrive at an estimate of occupancy), (Id., ¶ 24, the local equipment controller 62 may convert the occupancy estimate 66 into a corresponding fraction of maximum occupancy, and control airflow accordingly. 
Thus if the occupancy is at 50% of maximum, for example, the local-area airflow is adjusted to 50% of maximum airflow. As previously indicated, the local equipment controller 62 may also communicate with the central controller 26 in support of broader system-level control);
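The proportional control described in ¶ 24, where a 50% occupancy estimate yields 50% of maximum airflow, may be sketched as below. The minimum-ventilation floor and the function name are assumptions added for illustration, not features taken from Konrad:

```python
def airflow_setpoint(occupancy_estimate, max_occupancy, max_airflow_cfm,
                     min_fraction=0.2):
    """Scale airflow to the occupancy fraction, clamped between a
    minimum ventilation floor (hypothetical) and full capacity."""
    fraction = min(max(occupancy_estimate / max_occupancy, min_fraction), 1.0)
    return fraction * max_airflow_cfm

print(airflow_setpoint(25, 50, 1000))  # 50% occupancy → 500.0 CFM
```

This mirrors the reference's example: at 50% of maximum occupancy, the local-area airflow is adjusted to 50% of maximum airflow.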
Through KSR Rationale D (See MPEP 2141(III)(D)), Konrad discloses … determine, via execution of a second machine learning model, a dynamic configuration of one or more physical resources associated with a physical building space based on the physical resource allocation.
First, Konrad discloses machine learning modeling techniques for determining occupancy of a building (Konrad, ¶ 4, sensing and control apparatus are disclosed for use in an HVAC system of a building. The apparatus includes a plurality of sensors including interior sensors and boundary sensors, the sensors generating respective sensor signals conveying occupancy-related features (discloses extracting features based on the movement dataset) for an area of the building. In one example the sensors include cameras in interior areas and low-resolution thermal sensors at ingress/egress points. The occupancy-related features may be specific aspects of camera images, or signal levels from the thermal sensors, that can be processed to arrive at an estimate of occupancy. The apparatus further includes a controller configured and operative in response to the sensor signals to produce an occupancy estimate for the area and to generate equipment-control signals to cause the HVAC system to supply conditioned air to the area based on the occupancy estimate. The controller generally includes one or more fusion systems collectively generating the occupancy estimate by corresponding fusion calculations, the fusion systems including a first fusion system producing a boundary occupancy-count change based on sensor signals from the boundary sensors, a second fusion system producing an interior occupancy count based on sensor signals from the interior sensors, and a third fusion system producing the occupancy estimate based on one or both of the boundary occupancy-count change and the interior occupancy count. Fusion may be of one or multiple types including cross-modality fusion across different sensor types, within-modality fusion across different instances of same-type sensors, and cross-algorithm fusion using different algorithms to generates respective estimates for the same sensor(s). 
Use of the occupancy sensing system can help to deliver desired occupancy-sensitive performance of the HVAC system, specifically the attainment of a desired energy savings without an undue incidence of undesirable under-ventilation), (Id., ¶ 32, at the time of commissioning a system the camera/sensor installation height needs to be provided. Alternatively, a precise calibration pattern can be placed directly under a camera/sensor and a self-calibration operation is performed. With the installation height known, a corresponding pixel-to-density map can be used algorithmically to provide accurate occupancy estimates. The system may employ data-driven or “machine-learning” methods which can provide robustness against real-world variability without a physical model, but it is preferred that such methods be kept simple and not require re-training in new environments. In some cases machine learning is used only for offline training of counting and fusion algorithms, and only a system is fine-tuned in real time through self-calibration to a new environment by setting certain global parameters, e.g., room height, spacing between units, etc), (Id., ¶ 37, Two known approaches to estimating occupancy level are (1) detecting and then counting human bodies, and (2) estimating number based on detected changes in a camera field of view (FOV). Recent occupancy sensing methods via human-body counting include: full-body detection using Haar features and ADABOOST, head counting using Harr or HOG (Histogram of Gradients) features and SVM classification (discloses extracting features of the movement dataset using machine learning), and head counting using Convolutional Neural Networks (CNNs). These methods show great robustness to variations in body size and orientation. Shallow CNNs may suffice (for body/non-body binary output) and could run on a low-power mobile platform. 
As for crowd-density estimation, algorithms are known that are based on image gradient changes followed by SVM, full-image CNNs, and a wealth of approaches at pixel, texture or object level).
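The fusion architecture of ¶ 4, in which a boundary occupancy-count change and an interior occupancy count are combined by a third fusion system, can be sketched as a simple blend. The weighting scheme and function name below are hypothetical illustrations; the reference's fusion calculations are not specified at this level of detail:

```python
def fuse_occupancy(prev_estimate, boundary_delta, interior_count,
                   interior_weight=0.7):
    """Third-fusion-system sketch: blend a door-sensor running count
    (previous estimate plus net ingress/egress) with the camera-based
    interior count. The weight is a hypothetical tuning parameter."""
    boundary_count = max(prev_estimate + boundary_delta, 0)
    fused = (interior_weight * interior_count
             + (1 - interior_weight) * boundary_count)
    return round(fused)

# 10 occupants previously, net +3 through the doors, cameras see 12:
# the fused estimate blends the two sources.
print(fuse_occupancy(10, 3, 12))
```

Cross-modality, within-modality, and cross-algorithm fusion as described in the reference would generalize this blend across sensor types and counting algorithms.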
Further, Konrad discloses dynamic configuration of resources in a building based on a resource allocation (Konrad, ¶ 34, To estimate the (quasi) steady-state occupancy, in one example panoramic, overhead, high-resolution, low-cost CMOS cameras are used, which provide a wide field of view with minimal occlusions, while also being widely available and relatively inexpensive. OSSY preferably employs accurate, real-time (at HVAC time scale) algorithms for occupant counting using panoramic video frames. A fundamental block in many occupancy sensing algorithms is change detection (also referred to as background subtraction), which identifies areas of a video frame that have changed in relation to some background model, e.g., view of an empty room), (Id., ¶ 13, An Occupancy Sensing SYstem (OSSY) generates an estimate of the number of occupants in an area of a building, and uses the estimate for system purposes such as adjusting a rate of ventilation air flow to be tailored for the estimated occupancy. In some applications the building may be a commercial venue and include for example offices, conference rooms, large classrooms or conference rooms, and very large colloquium rooms. The system may be used with a variety of other building times. The system is inherently scalable to support a wide range of room sizes, from small offices to large meeting halls. This is a byproduct of a modular architecture enabling the addition of new units and seamlessly fusing their occupancy estimates with existing ones, thereby expanding coverage. The system can deliver robust performance by fusing information from multiple sensor modalities (e.g., wide-area, overhead sensing using panoramic cameras and local, entryway sensing using low-resolution thermal sensors) and from different algorithms (e.g., body counting versus crowd-density estimation). 
The system can be privacy-adaptive, using entryway sensors that collect only low-resolution, thermal data, facilitating deployment in bathrooms, changing rooms, etc. It may also be cost-effective by minimizing the number of sensors needed and, therefore, the cost of installation), (Id., ¶ 16, FIG. 1 is a general block diagram of an HVAC system employing occupancy sensing, i.e., explicitly estimating the number of people in a local area 10 and adjusting the operation of the HVAC system accordingly, to provide heating or cooling both sufficiently (i.e., meeting standards of temperature regulation and adequate ventilation, based on occupancy) and efficiently (i.e., using only an appropriate proportion of maximum ventilation capacity and avoiding wasteful over-ventilation). In practice, a local area 10 may be a single room or a collection of rooms, and the room(s) may be small or large. In some cases a local area 10 may correspond to part or all of a “zone” as that term is conventionally understood in HVAC systems, while in other cases it may span multiple zones in whole or in part).
One of ordinary skill in the art would have recognized that applying the known machine learning technique of Konrad to the dynamic resource configuration step would have yielded predictable results and resulted in an improved system, because the level of ordinary skill in the art demonstrated by the applied reference shows the ability to incorporate such resource optimization features into similar energy efficiency systems. Further, those of ordinary skill in the art would have recognized that applying the machine learning technique to the dynamic resource configuration step results in an improved system that allows faster and more optimal adjustments to resource allocations based on observed object movements within a building.
Thus, through KSR Rationale D (See MPEP 2141(III)(D)), Konrad discloses …determine, via execution of a second machine learning model, a dynamic configuration of one or more physical resources associated with a physical building space based on the physical resource allocation.
Konrad further discloses …and dynamically configuring usage of one or more physical resources associated with a physical building space based on the physical resource allocation (Konrad, ¶ 34, To estimate the (quasi) steady-state occupancy, in one example panoramic, overhead, high-resolution, low-cost CMOS cameras are used, which provide a wide field of view with minimal occlusions, while also being widely available and relatively inexpensive. OSSY preferably employs accurate, real-time (at HVAC time scale) algorithms for occupant counting using panoramic video frames. A fundamental block in many occupancy sensing algorithms is change detection (also referred to as background subtraction), which identifies areas of a video frame that have changed in relation to some background model, e.g., view of an empty room), (Id., ¶ 13, An Occupancy Sensing SYstem (OSSY) generates an estimate of the number of occupants in an area of a building, and uses the estimate for system purposes such as adjusting a rate of ventilation air flow to be tailored for the estimated occupancy. In some applications the building may be a commercial venue and include for example offices, conference rooms, large classrooms or conference rooms, and very large colloquium rooms. The system may be used with a variety of other building types. The system is inherently scalable to support a wide range of room sizes, from small offices to large meeting halls. This is a byproduct of a modular architecture enabling the addition of new units and seamlessly fusing their occupancy estimates with existing ones, thereby expanding coverage. The system can deliver robust performance by fusing information from multiple sensor modalities (e.g., wide-area, overhead sensing using panoramic cameras and local, entryway sensing using low-resolution thermal sensors) and from different algorithms (e.g., body counting versus crowd-density estimation).
The system can be privacy-adaptive, using entryway sensors that collect only low-resolution, thermal data, facilitating deployment in bathrooms, changing rooms, etc. It may also be cost-effective by minimizing the number of sensors needed and, therefore, the cost of installation), (Id., ¶ 16, FIG. 1 is a general block diagram of an HVAC system employing occupancy sensing, i.e., explicitly estimating the number of people in a local area 10 and adjusting the operation of the HVAC system accordingly, to provide heating or cooling both sufficiently (i.e., meeting standards of temperature regulation and adequate ventilation, based on occupancy) and efficiently (i.e., using only an appropriate proportion of maximum ventilation capacity and avoiding wasteful over-ventilation). In practice, a local area 10 may be a single room or a collection of rooms, and the room(s) may be small or large. In some cases a local area 10 may correspond to part or all of a “zone” as that term is conventionally understood in HVAC systems, while in other cases it may span multiple zones in whole or in part);
wherein the physical building space comprises a HVAC system, and wherein dynamically configuring the usage of the one or more physical resources comprises dynamically configuring a first zone of the HVAC system to operate and a second zone of the HVAC system to not operate (Id., ¶ 4, sensing and control apparatus are disclosed for use in an HVAC system of a building. The apparatus includes a plurality of sensors including interior sensors and boundary sensors, the sensors generating respective sensor signals conveying occupancy-related features for an area of the building. In one example the sensors include cameras in interior areas and low-resolution thermal sensors at ingress/egress points. The occupancy-related features may be specific aspects of camera images, or signal levels from the thermal sensors, that can be processed to arrive at an estimate of occupancy. The apparatus further includes a controller configured and operative in response to the sensor signals to produce an occupancy estimate for the area and to generate equipment-control signals to cause the HVAC system to supply conditioned air to the area based on the occupancy estimate. The controller generally includes one or more fusion systems collectively generating the occupancy estimate by corresponding fusion calculations, the fusion systems including a first fusion system producing a boundary occupancy-count change based on sensor signals from the boundary sensors, a second fusion system producing an interior occupancy count based on sensor signals from the interior sensors, and a third fusion system producing the occupancy estimate based on one or both of the boundary occupancy-count change and the interior occupancy count. 
Fusion may be of one or multiple types including cross-modality fusion across different sensor types, within-modality fusion across different instances of same-type sensors, and cross-algorithm fusion using different algorithms to generate respective estimates for the same sensor(s). Use of the occupancy sensing system can help to deliver desired occupancy-sensitive performance of the HVAC system, specifically the attainment of a desired energy savings without an undue incidence of undesirable under-ventilation), (Id., ¶ 76, To determine the HVAC energy savings that can be achieved with the occupancy sensing system, a data-driven energy savings model was developed based on building HVAC equipment specifications, current air supply levels, and actual building-use data obtained in the validation study. Table 3 below shows example airflow estimates that might be obtained using a Ventilation Airflow Model (VAM). While this model is representative of education and research environments in particular, many aspects of commercial office buildings are also represented in this example including offices, conference rooms, and large meeting spaces. This model includes air required as a function of both area (resulting in fixed airflow) and variable occupancy (as per ASHRAE 62.1-2013), so that the average yearly occupancy does not directly determine HVAC energy and cost reduction. This analysis indicates that airflow and HVAC energy use can be reduced by 39% if accurate occupancy data were available. In some cases depending on the exact nature and use of the building, there may be potential for even greater reduction), (Id., ¶ 19, The remaining description elaborates primarily certain structural and functional details of components involved in occupancy estimation, i.e., the sensors 16 and local-area controller 14. In typical applications today, systems are limited to a binary occupied/unoccupied decision and operation.
While such operation is an improvement over older systems by reducing idle ventilation, the system described herein can extend energy savings by delivering a more fine-grained air volume control over a range of room sizes, achieving greater efficiency without sacrificing ventilation quality), (Id., ¶ 50, Fusion systems 1, 2 and 3 make use of parametric or non-parametric, linear or non-linear systems. Fusion systems 1, 2 and 3 take into account the rate at which the number of occupants in zone is changing, specifically whether it is changing rapidly (transient state) or sporadically (quasi steady state), and accordingly diminishing the influence of the boundary count or interior count, respectively, towards the estimation of the total number of occupants), (Id., ¶ 51, A system to continuously maximize energy savings for zone based on zone type or on current, recent, or historical estimates of number of occupants in zone while simultaneously not exceeding a maximum failure rate which can be specified. This is accomplished by scaling the estimate of the number of occupants in zone at each time instant by an overestimation factor greater than or equal to one based on zone type or current, recent, or historical estimates of number of occupants in zone).
While suggested in at least Fig. 2 and related texts, Konrad does not explicitly disclose …wherein the moving objects of the first dataset are external to the physical building space;
However, Elias discloses …wherein the moving objects of the first dataset are external to the physical building space (Elias, ¶ 2, This disclosure relates to sensing and monitoring, and more specifically to sensing and monitoring certain spaces and areas for human occupancy. This disclosure is also related to sensing and monitoring movement or changes in an environment, including movement by animals and objects. This disclosure is also related to using sensor and/or monitor information to control certain systems within commercial and residential facilities, including, but not limited to; heating, cooling, ventilation, security, lighting, power, and entertainment systems and the like. This disclosure also may be used to determine human occupancy in outdoor spaces and to control certain outdoor systems including, but not limited to; heating, cooling, ventilation, security, lighting, power, and entertainment systems and the like), (Id., ¶ 115, While this disclosure has described a sensor system 100 inside or outside a building, the sensor system 100 can be applied to other types of scenarios and other space(s) 105. For example only, the disclosed sensor systems 100 could be used to detect the presence of humans in disaster scenarios such as collapsed buildings, caves, mines and the like. In such scenarios, the sensor systems 100 could be used to determine if and how many humans are breathing and at what rate their hearts are beating. Likewise, the sensor systems 100 could be used to determine if and how many humans might be hidden in an enclosure during a hostage or kidnapping situation and may determine if and how many humans are enclosed in a container such as a shipping crate, a trucking crate, below deck on a boat, and the like. In addition to human presence, the sensor systems 100 could be used to monitor the health of humans and/or animals in an area. 
For example only, this disclosure could generate an output signal that is related to the breathing rate and/or heart rate of any living beings within a space 105. Such sensor systems 100 could be used to monitor the breathing of babies and protect against sudden infant death syndrome. Such systems could also monitor the sleeping of people with sleep apnea and sound an alarm or adjust a bed or environmental setting if a person’s breathing becomes too erratic or stops).
It would have been obvious to a person of ordinary skill in the art before the effective filing date to have modified the resource allocation and machine learning elements of Konrad to include the outdoor elements of Elias in the analogous art of detecting occupancy using radio signals for the same reasons as stated for claim 1.
Regarding Claim 18, this claim recites limitations substantially similar to those in claim 6, and is rejected for the same reasons as stated above.
Claims 9-10, 15 and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Konrad in view of Elias and in further view of Albonesi et al., U.S. Publication No. 2016/0051469 [hereinafter Albonesi].
Regarding Claim 9, the combination of Konrad and Elias discloses …The computer-implemented method according to claim 1…
While suggested in at least Fig. 2 and related texts of Konrad, the combination of Konrad and Elias does not explicitly disclose … wherein the first dataset comprises one or more images captured by one or more recording devices, wherein the computer-implemented method further comprises: obtaining, by the computer system from a second computing device, text data indicative of an increase or decrease to the physical resource allocation associated with the first time period; and extracting, by the computer system and via execution of the first machine learning model, a third set of feature values based on the text data; wherein determining the physical resource allocation is further based on the third set of feature values.
However, Albonesi discloses … wherein the first dataset comprises one or more images captured by one or more recording devices, wherein the computer-implemented method further comprises: obtaining, by the computer system from a second computing device, text data indicative of an increase or decrease to the physical resource allocation associated with the first time period (Albonesi, ¶ 108, Level 4 can include scheduling and behavioral nudges. Two examples include use of space heater to further drop the nighttime setback. In one example, this has been implemented on a user's house and realized about $100/month annual savings. This could be augmented by adding additional sensors, e.g., off the shelf, Nest Thermostats can be deployed throughout a house to sense multiple locations. Another exemplary component can include behavioral nudges where changes in planning/scheduling/behavior to save energy can be suggested. Examples:), (Id., ¶ 109, “If you set your thermostat back 1 degree, you will save $Z per day. Is this okay?”), (Id., ¶ 110, “Can we learn from your preferences? We will occasionally set back your thermostat slightly and observe whether you override this by turning it back up. We will also ask if you are comfortable via text message (discloses text data indicative of a decrease to the resource allocation) and use that to develop algorithms.”);
and extracting, by the computer system and via execution of the first machine learning model, a third set of feature values based on the text data; wherein determining the physical resource allocation is further based on the third set of feature values (Id., ¶ 54, The exemplary Resource Scheduling and Nudges layer at Level 4 mines the filtered sensor data from the Level 2 interface to improve energy efficiency. It discovers trends that are passed to Level 3 to improve its optimization. It learns preferences from building occupants and “nudges” them to take actions that are more energy-friendly. It achieves long term power savings through energy-aware assignment of schedulable building resources such as meeting rooms in an office building), (Id., ¶ 89, Learning: The information collected can be used as feedback to improve operations of the system over time, e.g., by building response surfaces with Radial Basis Functions, (discloses extracting feature values) which are a type of spline. The control systems can be operated for a long time, and all the data can be saved that is collected about the values of K.sub.c(P.sub.t, t, X.sub.t, w.sub.t) in (2) and of the future value function F.sub.t(X.sub.t). For example over the course of a year, K.sub.c(P.sub.t, t, X.sub.t, w.sub.t) can be solved over 8700 times, and a multivariate response surface can be built from that information, and thereby reduce the number of times a relatively expensive simulation model, e.g., like Energy Plus, needs to be run), (Id., ¶ 90, Comparison to other methods: In the last decade, an increasing number of papers have demonstrated receding horizon optimal control framework of building systems, commonly referred to as Model Predictive Control (MPC). 
However there exist significant challenges with respect to the scalability of such an approach in terms of computation and implementation especially for stochastic analysis, which is necessary to incorporate uncertainty of prices, occupancy, and weather, for example…), (Id., ¶ 94, While the Power Manager makes changes in power states based on a short time horizon, the Scheduler at the top of the BPMS identifies long term power savings through data analysis and interaction with the Power Manager and building occupants (FIG. 1). The Scheduler mines the filtered sensor data from the Level 2 interface to improve energy efficiency. It discovers trends that are passed to the Power Manager to improve its optimization. It learns preferences from building occupants and “nudges” them to take actions that are more energy-friendly, e.g., through accurate cost savings estimates of lowering room temperature, occupants may opt for lower thermostat settings. Identifying effective occupant “nudges” is part of the deployment plan, which is described in the next section of this patent document).
It would have been obvious to a person of ordinary skill in the art before the effective filing date to have modified the resource allocation and machine learning elements of Konrad and the outdoor elements of Elias to include the text data elements of Albonesi in the analogous art of building power management systems.
The motivation for doing so would have been to improve an ability “to reduce building energy consumption and dramatically improve building energy efficiencies” (Albonesi, ¶ 20), wherein such improvements would benefit Elias’ method which seeks to improve an ability to “adjust the HVAC system to the appropriate level for the unoccupied or under-occupied conditions” (Elias, ¶ 3), and wherein such improvements would further benefit Konrad’s method which enables “adjusting the operation of the HVAC system accordingly, to provide heating or cooling both sufficiently (i.e., meeting standards of temperature regulation and adequate ventilation, based on occupancy) and efficiently (i.e., using only an appropriate proportion of maximum ventilation capacity and avoiding wasteful over-ventilation)” [Albonesi, ¶ 20; Elias, ¶ 3; Konrad, ¶ 16].
Regarding Claim 10, the combination of Konrad, Elias and Albonesi discloses…The computer-implemented method according to claim 9…
While suggested in at least Fig. 2 and related texts, Konrad does not explicitly disclose … wherein extracting the third set of feature values from the text data comprises: identifying, by the computer system and the machine learning model, one or more key terms in the text data, determining, by the computer system, a second set of characteristics based on the key terms and the text data, and deriving, by the computer system, an indication corresponding to the increase or decrease to the physical resource allocation based on the third set of feature values and the second set of characteristics.
However, Albonesi discloses …wherein extracting the third set of feature values from the text data comprises: identifying, by the computer system and the machine learning model, one or more key terms in the text data, determining, by the computer system, a second set of characteristics based on the key terms and the text data, and deriving, by the computer system, an indication corresponding to the increase or decrease to the physical resource allocation based on the third set of feature values and the second set of characteristics (Albonesi, ¶ 108, Level 4 can include scheduling and behavioral nudges. Two examples include use of space heater to further drop the nighttime setback. In one example, this has been implemented on a user's house and realized about $100/month annual savings. This could be augmented by adding additional sensors, e.g., off the shelf, Nest Thermostats can be deployed throughout a house to sense multiple locations. Another exemplary component can include behavioral nudges where changes in planning/scheduling/behavior to save energy can be suggested. Examples:), (Id., ¶ 109, “If you set your thermostat back 1 degree, you will save $Z per day. Is this okay?”), (Id., ¶ 110, “Can we learn from your preferences? We will occasionally set back your thermostat slightly and observe whether you override this by turning it back up. We will also ask if you are comfortable via text message (discloses text data indicative of a decrease to the resource allocation) and use that to develop algorithms.”), (Id., ¶ 54, The exemplary Resource Scheduling and Nudges layer at Level 4 mines the filtered sensor data from the Level 2 interface to improve energy efficiency. It discovers trends that are passed to Level 3 to improve its optimization. It learns preferences from building occupants and “nudges” them to take actions that are more energy-friendly. 
It achieves long term power savings through energy-aware assignment of schedulable building resources such as meeting rooms in an office building), (Id., ¶ 89, Learning: The information collected can be used as feedback to improve operations of the system over time, e.g., by building response surfaces with Radial Basis Functions, (discloses extracting feature values) which are a type of spline. The control systems can be operated for a long time, and all the data can be saved that is collected about the values of K.sub.c(P.sub.t, t, X.sub.t, w.sub.t) in (2) and of the future value function F.sub.t(X.sub.t). For example over the course of a year, K.sub.c(P.sub.t, t, X.sub.t, w.sub.t) can be solved over 8700 times, and a multivariate response surface can be built from that information, and thereby reduce the number of times a relatively expensive simulation model, e.g., like Energy Plus, needs to be run), (Id., ¶ 90, Comparison to other methods: In the last decade, an increasing number of papers have demonstrated receding horizon optimal control framework of building systems, commonly referred to as Model Predictive Control (MPC). However there exist significant challenges with respect to the scalability of such an approach in terms of computation and implementation especially for stochastic analysis, which is necessary to incorporate uncertainty of prices, occupancy, and weather, for example…), (Id., ¶ 94, While the Power Manager makes changes in power states based on a short time horizon, the Scheduler at the top of the BPMS identifies long term power savings through data analysis and interaction with the Power Manager and building occupants (FIG. 1). The Scheduler mines the filtered sensor data from the Level 2 interface to improve energy efficiency. It discovers trends that are passed to the Power Manager to improve its optimization. 
It learns preferences from building occupants and “nudges” them to take actions that are more energy-friendly, e.g., through accurate cost savings estimates of lowering room temperature, occupants may opt for lower thermostat settings. Identifying effective occupant “nudges” is part of the deployment plan, which is described in the next section of this patent document).
It would have been obvious to a person of ordinary skill in the art before the effective filing date to have modified the resource allocation and machine learning elements of Konrad and the outdoor elements of Elias to include the text data elements of Albonesi in the analogous art of building power management systems for the same reasons as stated for claim 1.
Regarding Claim 15, the combination of Konrad and Elias discloses …The system according to claim 11…
While suggested in at least Fig. 2 and related texts of Konrad, the combination of Konrad and Elias does not explicitly disclose …further comprising a building management system, wherein the building management system comprises an electrical distribution system, and wherein dynamically configuring usage of the one or more physical resources associated with the physical building space within the second time period causes the system to further perform operations comprising: configure the electrical distribution system to reduce a power consumption based on the physical resource allocation.
However, Albonesi discloses …further comprising a building management system, wherein the building management system comprises an electrical distribution system, and wherein dynamically configuring usage of the one or more physical resources associated with the physical building space within the second time period causes the system to further perform operations comprising: configure the electrical distribution system to reduce a power consumption based on the physical resource allocation (Albonesi, ¶ 3, Heating, ventilation, and air conditioning (HVAC) technologies can be used to provide systems, devices, and methods for controlling conditions of buildings to meet certain comfort needs and other specific needs in using or managing HVAC controlled buildings. HVAC system design and engineering are generally based on the principles of various technical fields including, thermodynamics, fluid mechanics, heat transfer, electricity power management and others…), (Id., ¶ 5, The disclosed integrated building power management system includes a hierarchical computer control software architecture for managing building power using a cyber-physical system. In some implementations, the architecture of the integrated building power management system (discloses electrical distribution system) is arranged in a hierarchy of control levels of hardware and software systems, also referred to herein as ‘stacks’. For example, e.g., the system levels can be controlled using software hierarchical layers that receive information from, and perform various decision making processes for controlling electrical power distribution and consumption at, various locations and appliances in buildings based on a computer controlled network of sensors and power control devices in the buildings to enable dynamic power management based on real time power needs to provide energy efficient electrical power systems for buildings. 
(discloses dynamically configuring usage of resources)), (Id., ¶ 22, Building occupants demand comfort at their particular sections of a large HVAC controlled building without considering the overall HVAC operations of the building, including power efficiency, operating cost or systems limitations. In such systems, building operators tediously tune building operational parameters using simple control rules to meet the demands of building occupant demands or requirements. Power control levers, such as night setbacks to conserve power, (discloses reducing power consumption based on resource allocation) are typically statically set, coarse-grain, and conservative. At the sensor and actuator level, legacy building systems are incompatible with the temporal and spatial scales of real time building events, and are unable to scale between large and small building environments. Recent proposals for adapting building systems to weather and occupant behavior are hindered by these hardware limitations. The building and its HVAC system may have been built with little forethought of the constraints that would ultimately be placed on how the building could be operated).
It would have been obvious to a person of ordinary skill in the art before the effective filing date to have modified the resource allocation and machine learning elements of Konrad and the outdoor elements of Elias to include the power management elements of Albonesi in the analogous art of building power management systems for the same reasons as stated for claim 1.
Regarding Claim 19, the combination of Konrad and Elias discloses …The computer program product according to claim 18…
While suggested in at least Fig. 2 and related texts of Konrad, the combination of Konrad and Elias does not explicitly disclose … the computer program product further performs operations comprising: obtain text data indicative of an increase or a decrease to the physical resource allocation associated with the first time period from a second computing device; identify, by the first machine learning model, one or more key terms in the text data; determine a second set of characteristics based on the one or more key terms; extract, via execution of the first machine learning model, a third set of feature values based on the text data and the second set of characteristics; and derive an indication of the increase or the decrease to the physical resource allocation based on the third set of feature values and the second set of characteristics; wherein determining the physical resource allocation is further based on the third set of feature values.
However, Albonesi discloses …the computer program product further performs operations comprising: obtain text data indicative of an increase or a decrease to the physical resource allocation associated with the first time period from a second computing device; identify, by the first machine learning model, one or more key terms in the text data; determine a second set of characteristics based on the one or more key terms (Albonesi, ¶ 108, Level 4 can include scheduling and behavioral nudges. Two examples include use of space heater to further drop the nighttime setback. In one example, this has been implemented on a user's house and realized about $100/month annual savings. This could be augmented by adding additional sensors, e.g., off the shelf, Nest Thermostats can be deployed throughout a house to sense multiple locations. Another exemplary component can include behavioral nudges where changes in planning/scheduling/behavior to save energy can be suggested. Examples:), (Id., ¶ 109, “If you set your thermostat back 1 degree, you will save $Z per day. Is this okay?”), (Id., ¶ 110, “Can we learn from your preferences? We will occasionally set back your thermostat slightly and observe whether you override this by turning it back up. We will also ask if you are comfortable via text message (discloses text data indicative of a decrease to the resource allocation) and use that to develop algorithms.”), (Id., ¶ 94, While the Power Manager makes changes in power states based on a short time horizon, the Scheduler at the top of the BPMS identifies long term power savings through data analysis and interaction with the Power Manager and building occupants (FIG. 1). The Scheduler mines the filtered sensor data from the Level 2 interface to improve energy efficiency. It discovers trends that are passed to the Power Manager to improve its optimization. 
It learns preferences from building occupants and “nudges” them to take actions that are more energy-friendly, e.g., through accurate cost savings estimates of lowering room temperature, occupants may opt for lower thermostat settings. Identifying effective occupant “nudges” is part of the deployment plan, (discloses second set of characteristics) which is described in the next section of this patent document);
extract, via execution of the first machine learning model, a third set of feature values based on the text data and the second set of characteristics; and derive an indication of the increase or the decrease to the physical resource allocation based on the third set of feature values and the second set of characteristics; wherein determining the physical resource allocation is further based on the third set of feature values. (Id., ¶ 54, The exemplary Resource Scheduling and Nudges layer at Level 4 mines the filtered sensor data from the Level 2 interface to improve energy efficiency. It discovers trends that are passed to Level 3 to improve its optimization. It learns preferences from building occupants and “nudges” them to take actions that are more energy-friendly. It achieves long term power savings through energy-aware assignment of schedulable building resources such as meeting rooms in an office building), (Id., ¶ 89, Learning: The information collected can be used as feedback to improve operations of the system over time, e.g., by building response surfaces with Radial Basis Functions, (discloses extracting third feature values) which are a type of spline. The control systems can be operated for a long time, and all the data can be saved that is collected about the values of K.sub.c(P.sub.t, t, X.sub.t, w.sub.t) in (2) and of the future value function F.sub.t(X.sub.t). For example over the course of a year, K.sub.c(P.sub.t, t, X.sub.t, w.sub.t) can be solved over 8700 times, and a multivariate response surface can be built from that information, and thereby reduce the number of times a relatively expensive simulation model, e.g., like Energy Plus, needs to be run), (Id., ¶ 90, Comparison to other methods: In the last decade, an increasing number of papers have demonstrated receding horizon optimal control framework of building systems, commonly referred to as Model Predictive Control (MPC). 
However there exist significant challenges with respect to the scalability of such an approach in terms of computation and implementation especially for stochastic analysis, which is necessary to incorporate uncertainty of prices, occupancy, and weather, for example…), (Id., ¶ 94, While the Power Manager makes changes in power states based on a short time horizon, the Scheduler at the top of the BPMS identifies long term power savings through data analysis and interaction with the Power Manager and building occupants (FIG. 1). The Scheduler mines the filtered sensor data from the Level 2 interface to improve energy efficiency. It discovers trends that are passed to the Power Manager to improve its optimization. It learns preferences from building occupants and “nudges” them to take actions that are more energy-friendly, e.g., through accurate cost savings estimates of lowering room temperature, occupants may opt for lower thermostat settings. Identifying effective occupant “nudges” is part of the deployment plan, which is described in the next section of this patent document).
It would have been obvious to a person of ordinary skill in the art before the effective filing date to have modified the resource allocation and machine learning elements of Konrad and the outdoor elements of Elias to include the text data elements of Albonesi in the analogous art of building power management systems for the same reasons as stated for claim 1.
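The response-surface passage quoted above from Albonesi's ¶ 89 describes saving previously computed control costs K.sub.c(P.sub.t, t, X.sub.t, w.sub.t) and interpolating them with radial basis functions so that the relatively expensive simulation model need not be rerun for each query. Purely as an illustrative sketch of that technique (the one-dimensional state, function names, and data below are hypothetical and do not come from the cited reference):

```python
import math

def rbf_fit(centers, values, eps=1.0):
    """Fit Gaussian-RBF weights w so that sum_j w_j * phi(|x_i - c_j|) = values_i,
    with phi(r) = exp(-(eps*r)**2). Solved by Gaussian elimination with pivoting."""
    n = len(centers)
    phi = lambda r: math.exp(-(eps * r) ** 2)
    A = [[phi(abs(centers[i] - centers[j])) for j in range(n)] for i in range(n)]
    b = list(values)
    for k in range(n):
        # Partial pivoting for numerical stability.
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            f = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= f * A[k][j]
            b[i] -= f * b[k]
    w = [0.0] * n
    for i in range(n - 1, -1, -1):
        w[i] = (b[i] - sum(A[i][j] * w[j] for j in range(i + 1, n))) / A[i][i]
    return w

def rbf_eval(x, centers, weights, eps=1.0):
    """Evaluate the fitted response surface at a new query point x."""
    return sum(wj * math.exp(-(eps * abs(x - c)) ** 2)
               for wj, c in zip(weights, centers))

# Hypothetical cached data: a cost computed at four past operating points.
prices = [0.0, 1.0, 2.0, 3.0]
costs = [10.0, 12.0, 9.0, 11.0]
weights = rbf_fit(prices, costs)
approx = rbf_eval(1.5, prices, weights)  # cheap surrogate for a new query
```

Because the interpolation system is solved exactly, the surrogate reproduces each cached cost at its own operating point and interpolates smoothly between them, which is the sense in which the reference avoids re-running the expensive simulation.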
Regarding Claim 20, the combination of Konrad and Elias discloses …The computer program product according to claim 17…
Konrad further discloses … wherein dynamically configuring usage of the one or more physical resources associated with the physical building space based on the physical resource allocation causes the computer program product to further perform operations comprising: …and allocate a plurality of workstations associated with the physical building space based on a ranking of each workstation of the plurality of workstations (Konrad, ¶ 1, The present disclosure is related to the field of systems whose operation or performance depends on the level or pattern of occupancy, such as heating, ventilation, and air conditioning (HVAC) systems of buildings for example. More particularly, the disclosure relates to techniques for tailoring operation of such systems to estimated occupancy levels or patterns, to provide energy savings, comfort, safety, or other system performance benefits), (Id., ¶ 4, sensing and control apparatus are disclosed for use in an HVAC system of a building. The apparatus includes a plurality of sensors including interior sensors and boundary sensors, the sensors generating respective sensor signals conveying occupancy-related features for an area of the building. In one example the sensors include cameras in interior areas and low-resolution thermal sensors at ingress/egress points. The occupancy-related features may be specific aspects of camera images, or signal levels from the thermal sensors, that can be processed to arrive at an estimate of occupancy.
The apparatus further includes a controller configured and operative in response to the sensor signals to produce an occupancy estimate for the area and to generate equipment-control signals to cause the HVAC system to supply conditioned air to the area based on the occupancy estimate), (Id., ¶ 13, An Occupancy Sensing SYstem (OSSY) generates an estimate of the number of occupants in an area of a building, and uses the estimate for system purposes such as adjusting a rate of ventilation air flow to be tailored for the estimated occupancy. In some applications the building may be a commercial venue and include for example offices, conference rooms, large classrooms or conference rooms, and very large colloquium rooms. The system may be used with a variety of other building types. The system is inherently scalable to support a wide range of room sizes, from small offices to large meeting halls. This is a byproduct of a modular architecture enabling the addition of new units and seamlessly fusing their occupancy estimates with existing ones, thereby expanding coverage. The system can deliver robust performance by fusing information from multiple sensor modalities (e.g., wide-area, overhead sensing using panoramic cameras and local, entryway sensing using low-resolution thermal sensors) and from different algorithms (e.g., body counting versus crowd-density estimation). The system can be privacy-adaptive, using entryway sensors that collect only low-resolution, thermal data, facilitating deployment in bathrooms, changing rooms, etc. It may also be cost-effective by minimizing the number of sensors needed and, therefore, the cost of installation), (Id., ¶ 20, FIG. 2 illustrates an aspect of the disclosed approach that can facilitate system scalability while supporting multiple occupancy-sensing modalities, for an area shown as a “unit volume” 30 such as a room.
Two distinct types of sensor nodes may be deployed in various combinations: interior sensors such as high-resolution (HR) panoramic overhead cameras 32 for wide-area monitoring, and boundary sensors such as low-resolution (LR) thermal sensors 34 located at doorways for ingress/egress detection. The use of panoramic cameras 32 can help minimize the number of sensors needed, thus reducing installation costs while still supporting scalability to large-size venues), (Id., ¶ 81, OSSY occupancy data can be used to determine what areas of the building are occupied, as well as occupant density and numbers, and this information can be provided to internal building systems and emergency responders. For example, this data can be used to enable/prevent access to different parts of the building through closing different door systems electromechanically. The data can also be used to trigger different lighting systems or audio systems to advise building occupants as to the appropriate action in different spaces of the buildings, (discloses allocating workspaces within a building) or to indicate to safety/security staff the locations of occupants in the building. For these applications, aggregated OSSY data can be sent to a security computer system and that data either presented to security staff in aggregated manner, and/or have actions taken in terms of access control, lighting signals, or information systems being implemented in an automatic fashion), (Id., ¶ 82, In terms of space utilization applications, OSSY may send data for different spaces to a central computer or web based system that aggregates the data and provides different utilization metrics such as capacity utilization that could be temporally disaggregated. In one application, this can provide information to a real-time scheduling system that can indicate to building managers and occupants which spaces are currently being utilized, or are expected to be utilized in a specified time range.
In another application, this system can indicate to building managers how well their space is currently configured and used in terms of capacity utilization and temporally. Furthermore, information as to the building cost (rental, operational, etc.) for different areas can be combined to provide a more explicit evaluation of cost performance), (Id., Table 2, table indicates occupancy rankings for workstations within a building).
[Image: media_image2.png (greyscale) — Konrad, Table 2, occupancy rankings for workstations]
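The limitation above maps workstation allocation onto the occupancy rankings shown in Konrad's Table 2. As a minimal hypothetical sketch of that kind of ranking-driven assignment (the workstation identifiers, rank values, and selection policy below are invented for illustration and are not taken from Konrad):

```python
def allocate_workstations(rankings, n_requested):
    """Return the n_requested workstation IDs with the best (lowest) rank.
    In this sketch, a lower rank might reflect lower expected occupancy
    or energy cost for the surrounding zone."""
    ordered = sorted(rankings, key=rankings.get)  # sort IDs by rank value
    return ordered[:n_requested]

# Example: three candidate workstations, two seats requested.
seats = allocate_workstations({"WS-A": 3, "WS-B": 1, "WS-C": 2}, 2)
```

Here `seats` comes back as the two best-ranked stations, `["WS-B", "WS-C"]`; a building management system could then condition only the zones containing the allocated stations.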
While suggested in at least Fig. 2 and related texts of Konrad, the combination of Konrad and Elias does not explicitly disclose … control an electrical distribution in the physical building space to conserve a power usage based on the physical resource allocation…
However, Albonesi discloses …further comprising a building management system, wherein the building management system comprises an electrical distribution system, and wherein dynamically configuring usage of the one or more physical resources associated with the physical building space within the second time period causes the system to further perform operations comprising: configure the electrical distribution system to reduce a power consumption based on the physical resource allocation (Albonesi, ¶ 3, Heating, ventilation, and air conditioning (HVAC) technologies can be used to provide systems, devices, and methods for controlling conditions of buildings to meet certain comfort needs and other specific needs in using or managing HVAC controlled buildings. HVAC system design and engineering are generally based on the principles of various technical fields including, thermodynamics, fluid mechanics, heat transfer, electricity power management and others…), (Id., ¶ 5, The disclosed integrated building power management system includes a hierarchical computer control software architecture for managing building power using a cyber-physical system. In some implementations, the architecture of the integrated building power management system (discloses electrical distribution system) is arranged in a hierarchy of control levels of hardware and software systems, also referred to herein as ‘stacks’. For example, e.g., the system levels can be controlled using software hierarchical layers that receive information from, and perform various decision making processes for controlling electrical power distribution and consumption at, various locations and appliances in buildings based on a computer controlled network of sensors and power control devices in the buildings to enable dynamic power management based on real time power needs to provide energy efficient electrical power systems for buildings. 
(discloses dynamically configuring usage of resources)), (Id., ¶ 22, Building occupants demand comfort at their particular sections of a large HVAC controlled building without considering the overall HVAC operations of the building, including power efficiency, operating cost or systems limitations. In such systems, building operators tediously tune building operational parameters using simple control rules to meet the demands of building occupant demands or requirements. Power control levers, such as night setbacks to conserve power, (discloses reducing power consumption based on resource allocation) are typically statically set, coarse-grain, and conservative. At the sensor and actuator level, legacy building systems are incompatible with the temporal and spatial scales of real time building events, and are unable to scale between large and small building environments. Recent proposals for adapting building systems to weather and occupant behavior are hindered by these hardware limitations. The building and its HVAC system may have been built with little forethought of the constraints that would ultimately be placed on how the building could be operated).
It would have been obvious to a person of ordinary skill in the art before the effective filing date to have modified the resource allocation and machine learning elements of Konrad and the outdoor elements of Elias to include the power management elements of Albonesi in the analogous art of building power management systems for the same reasons as stated for claim 1.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Pita et al., U.S. Publication No. 2023/0142105 discloses machine-learning-based prediction of construction project parameters.
Stafanski et al., U.S. Publication No. 2015/0168003 discloses systems and methods for signature-based thermostat control.
Witbeck et al., U.S. Publication No. 2012/0217315 discloses a system for controlling temperatures of multiple zones in multiple structures.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NICHOLAS D BOLEN whose telephone number is (408)918-7631. The examiner can normally be reached Monday - Friday 8:00 AM - 5:00 PM PST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Patty Munson can be reached on (571) 270-5396. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/NICHOLAS D BOLEN/Examiner, Art Unit 3624
/PATRICIA H MUNSON/Supervisory Patent Examiner, Art Unit 3624