Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 12/20/2024, 5/22/2025, and 1/30/2026 were each filed after the filing date of the application (9/16/2024). The submissions are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-7, 9-15, and 17-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-7, 9-16, and 18-20 of U.S. Patent No. 11,792,039. Although the claims at issue are not identical, they are not patentably distinct from each other because the limitations in each claim set relate to the same concept.
Claim chart: instant application 18/886,250 vs. reference application 17/345,817.
Claim 1: A building system, comprising:
one or more memory devices storing instructions thereon, that, when executed by one or more processors, cause the one or more processors to:
retrieve at least one node or at least one edge of a space graph, the space graph comprising a plurality of nodes representing a plurality of entities of a building and a plurality of edges between the plurality of nodes representing a plurality of relationships between the plurality of entities of the building;
execute, using at least one of the at least one node or the at least one edge, a machine learning model to classify a state of an entity of the plurality of entities of the building; and
update a current state of the entity using the classified state.
Claim 2: The building system of claim 1, wherein the instructions cause the one or more processors to:
receive building data from one or more building data sources;
generate the plurality of relationships between the plurality of nodes based on the building data, wherein the plurality of relationships comprises a pair of relationships between the first entity and the second entity of the plurality of entities representing two different types of relationships, wherein the pair of relationships comprises a first relationship between the first entity and the second entity and a second relationship between the second entity and the first entity; and
update the graph data structure by causing the graph data structure to store the plurality of nodes representing the plurality of entities and the plurality of edges between the plurality of nodes representing the plurality of relationships.
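For context only, the claimed arrangement can be paraphrased in code: a space graph of entity nodes, a pair of differently-typed directed edges between the same two entities (claim 2), and a model that classifies an entity's state and updates it (claim 1). The following is a minimal, purely illustrative sketch; all names are hypothetical, and the trivial rule stands in for the claimed machine learning model. It is not the applicant's or patentee's implementation.

```python
# Illustrative sketch of the claimed "space graph" structure.
from dataclasses import dataclass, field

@dataclass
class Node:
    entity_id: str
    entity_type: str          # e.g., "space", "equipment" (hypothetical labels)
    state: str = "unknown"    # current classified state

@dataclass
class Edge:
    source: str
    target: str
    relationship: str         # relationship type carried by the edge

@dataclass
class SpaceGraph:
    nodes: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)

    def add_node(self, node):
        self.nodes[node.entity_id] = node

    def relate(self, a, b, forward, reverse):
        # Claim 2 recites a *pair* of relationships of two different
        # types between the same two entities (a -> b and b -> a).
        self.edges.append(Edge(a, b, forward))
        self.edges.append(Edge(b, a, reverse))

def classify_state(graph, entity_id):
    # Stand-in for the claimed machine learning model: a trivial rule
    # over the entity's outgoing edges, purely for illustration.
    degree = sum(1 for e in graph.edges if e.source == entity_id)
    return "occupied" if degree > 0 else "unoccupied"

g = SpaceGraph()
g.add_node(Node("floor1", "space"))
g.add_node(Node("ahu1", "equipment"))
g.relate("floor1", "ahu1", "servedBy", "serves")

# Claim 1: classify a state of an entity, then update its current state.
g.nodes["floor1"].state = classify_state(g, "floor1")
print(g.nodes["floor1"].state)  # -> occupied
```

The sketch is offered only to make the claim chart easier to follow; the claims themselves control.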
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Rao (U.S. Patent Application Publication No. 2018/0018502).
As per claims 1, 13 and 19: Rao discloses one or more memory devices storing instructions thereon, that, when executed by one or more processors, cause the one or more processors to (see abstract):
retrieve at least one node or at least one edge of a space graph, the space graph comprising a plurality of nodes representing a plurality of entities of a building and a plurality of edges between the plurality of nodes representing a plurality of relationships between the plurality of entities of the building (Paragraph 32; the reference mobile device 10 maps various installed devices of interest (or assets) such as fire protection and suppression devices (for example, smoke detectors, sprinklers, etc.), surveillance cameras, HVAC equipment, and other building management system installed devices of interest within a "built environment." A "built environment" is used to delineate a physical structure, such as a building that is built, i.e., constructed, and that has in place various installed devices of interest (assets).);
execute, using at least one of the at least one node or the at least one edge, a machine learning model to classify a state of an entity of the plurality of entities of the building (Paragraph 39; The reference device models, which could be images or 3D data models or BIM data, may be input from the bill of materials used in the building. The process 40 includes loading 42 RGB/RGB-D (three color+one depth) image/point cloud data set of a scene, choosing 44 interest points and compute scene feature descriptors (pose/scale invariant features), and comparing 46 scene feature descriptors to descriptors of 3D models retrieved from a database 45 of 3D models, using either machine learning and/or non-machine learning methods. The process 40 also includes indicating 48 objects recognized in the scene); and
update a current state of the entity using the classified state (Paragraph 34).
As per claim 2: The building system of claim 1, wherein the instructions cause the one or more processors to:
receive building data from one or more building data sources (Paragraph 3; Building information modelling (BIM) involves producing digital representations of physical and functional characteristics of such environments that are represented in BIM data. Building information modelling often assumes the availability of the location of the devices (assets) as part of BIM data);
generate the plurality of edges between the plurality of nodes based on the building data, wherein the plurality of edges comprises a pair of edges between a first node representing the entity and a second node representing a second entity of the plurality of entities representing two different types of relationships, wherein the pair of edges comprises a first edge between the first node and the second node and a second edge between the second node and the first node (Paragraph 5; captured data to generate a 2D or 3D mapping of localization information of the device that is rendered on a display unit, execute an object recognition to identify types of installed devices of interest in a part of the 2D or 3D device mapping, integrate the 3D device mapping in the built environment to objects in the environment through capturing point cloud data along with 2D image or video frame data of the built environment); and
update the space graph by causing the space graph to store the plurality of nodes representing the plurality of entities and the plurality of edges between the plurality of nodes representing the plurality of relationships (Paragraph 34; updating a map of an unknown environment while simultaneously keeping track of an agent's location within that environment).
As per claims 3 and 14: The building system of claim 1, wherein the instructions cause the one or more processors to:
ingest data values into the space graph, the data values associated with the plurality of entities (Paragraph 31; the asset identification is “map based.” However, the asset identification need not be a “map based” process. The processor 24 executes several other functions such as mapping of assets, navigation which is described below); and
execute, using at least a portion of the data values, the machine learning model to classify the state of the entity of the plurality of entities of the building (Paragraph 39).
As per claim 4: The building system of claim 1, wherein the instructions cause the one or more processors to:
receive new building data from one or more building data sources (Paragraph 3; Building information modelling (BIM) involves producing digital representations of physical and functional characteristics of such environments that are represented in BIM data. Building information modelling often assumes the availability of the location of the devices (assets) as part of BIM data);
identify, based on the new building data, a new relationship between the entity and a second entity of the plurality of entities (Paragraph 38; execute an object recognition to identify types of installed devices of interest in a part of the 2D or 3D device mapping; and integrate the 3D device mapping in the built environment to objects in the environment through capturing point cloud data along with 2D image or video frame data of the built environment); and
update the space graph with the new relationship by causing the space graph to store a new edge between a first node of the plurality of nodes representing the entity and a second node of the plurality of nodes representing the second entity (Paragraph 32; updating a map of an unknown environment while simultaneously keeping track of an agent's location within that environment).
As per claim 5: The building system of claim 1, wherein the plurality of nodes include a node representing a control algorithm (Paragraph 38; In one of the approaches of object recognition, algorithms are executed in order to identify the device type automatically and map the location of the installed devices of interest in the 3D model);
wherein the plurality of edges include one or more particular edges between the node and a node of the plurality of nodes representing the machine learning model, the one or more particular edges indicating that the machine learning model operates based on the control algorithm (Paragraph 5; captured data to generate a 2D or 3D mapping of localization information of the device that is rendered on a display unit, execute an object recognition to identify types of installed devices of interest in a part of the 2D or 3D device mapping, integrate the 3D device mapping in the built environment to objects in the environment through capturing point cloud data along with 2D image or video frame data of the built environment).
As per claims 6 and 15: The building system of claim 1, wherein the machine learning model is an artificial intelligence agent that performs artificial intelligence operations for at least one of a person, place, or piece of equipment of the building (Paragraphs 38 and 67; automatic learning rules from the representation of built environment overcomes limitations of manually producing the rules).
As per claim 7: The building system of claim 1, wherein the machine learning model executes based on at least a portion of the plurality of nodes and a portion of the plurality of edges (Paragraph 34).
As per claims 8, 16 and 20: The building system of claim 1, wherein the entity is a space of the building (Paragraphs 3-4);
wherein the instructions cause the one or more processors to classify the space as occupied or unoccupied (Paragraph 52; The application programs access the device objects through the services associated with the objects. The services might differ based on device categories).
As per claim 9: The building system of claim 1, wherein the plurality of nodes include a second node representing one or more operating settings for the entity; wherein the plurality of edges include one or more particular edges between the second node and a node of the machine learning model indicating that the machine learning model generates the one or more operating settings (Paragraph 39; The process 40 includes loading 42 RGB/RGB-D (three color+one depth) image/point cloud data set of a scene, choosing 44 interest points and compute scene feature descriptors (pose/scale invariant features), and comparing 46 scene feature descriptors to descriptors of 3D models retrieved from a database 45 of 3D models, using either machine learning and/or non-machine learning methods).
As per claim 10: The building system of claim 9, wherein a fourth node of the plurality of nodes is linked by a particular edge of the plurality of edges to another node of the plurality of nodes that represents a device that operates based on the one or more operating settings (Paragraph 31; The processor 24 in the reference mobile device 10 executes an asset identification process 40 (FIG. 2) that provides an efficient mechanism to represent assets on 3D or 2D maps and subsequently make use of the location information for location based interactive applications. Servers (not shown) could execute portions of or all of the process 40. In some implementations the asset identification is “map based.” However, the asset identification need not be a “map based” process).
As per claims 11 and 17: The building system of claim 1, wherein the instructions cause the one or more processors to:
store the space graph in a data storage device, wherein: the plurality of nodes include a node representing the entity, wherein the entity is one of a person, place, or piece of equipment of the building (Paragraph 32); and the plurality of nodes include a second node representing the machine learning model that operates outside the space graph, wherein the machine learning model performs operations for the person, place, or piece of equipment of the building indicated by one or more edges of the plurality of edges relating the node with the second node (Paragraph 32; The reference mobile device 10 maps various installed devices of interest (or assets) such as fire protection and suppression devices (for example smoke detectors, sprinklers etc.), surveillance cameras, HVAC equipment and other building management system installed devices of interest with-in a “built environment.” A “built environment” is used to delineate a physical structure, such as a building that is built, i.e., constructed, and that has in place various installed devices of interest (assets). The asset identification process 40 (FIG. 2) efficiently maps the locations of these installed devices of interest (assets) and can be used to navigate to these installed devices of interest for purposes of service and maintenance).
As per claims 12 and 18: The building system of claim 11, wherein the instructions cause the one or more processors to:
identify the one or more edges relating the node to the second node to determine that the machine learning model performs the operations for the person, place, or piece of equipment; and cause the machine learning model to execute for the person, place, or piece of equipment (Paragraph 38; execute an object recognition to identify types of installed devices of interest in a part of the 2D or 3D device mapping; and integrate the 3D device mapping in the built environment to objects in the environment through capturing point cloud data along with 2D image or video frame data of the built environment).
Relevant Prior Art References
The following prior art is cited as being of interest to the claimed invention but has not been applied in any of the current rejections.
Sheffield et al. (US Patent 10,192,115) teaches techniques for automatically generating a catalog of objects for a user, which includes positional information.
Rosane et al. (US Patent 9,342,928) teaches techniques for presenting building information.
Yun et al. (US Patent 9,154,919) teaches techniques for localization of a mobile electronic device within an environment using at least one server in communication with the mobile electronic device.
Knight et al. (US Patent 10,798,175) teaches techniques for building automation, IoT (Internet of Things) devices, and related hardware and software.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANTHONY D BROWN whose telephone number is (571) 270-1472. The examiner can normally be reached 7:30 am-3:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Linglan Edwards, can be reached at 571-270-5440. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ANTHONY D BROWN/ Primary Examiner, Art Unit 2408