Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
Claims 5 and 19 stand cancelled. Claim 21 is newly added. Claims 1, 8, 15-18 and 20 are currently amended. Claims 1-4, 6-18 and 20-21 are pending.
Response to Arguments
Applicant’s arguments, see Remarks filed November 13, 2025, with respect to the rejection of claims 5 and 19 under 35 U.S.C. § 103, have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made in view of “Camera-Based Log System for Human Physical Distance Tracking in Classroom” to Deepaisarn et al.
With regard to the rejection under 35 U.S.C. § 101, Applicant has amended the claim language; therefore, the rejection is overcome.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-4, 6-18 and 20-21 are rejected under 35 U.S.C. § 103 as being unpatentable over US 2021/0117693 A1 to Martin et al. (hereinafter “Martin”) in view of “Camera-Based Log System for Human Physical Distance Tracking in Classroom” to Deepaisarn et al. (hereinafter “Deepaisarn”).
Claim 1. A method for indicating occupancy of a point of interest in a room, comprising: Martin [0079] FIG. 4 is a flow chart showing a method of operation of the occupancy report module 118. The method shows how the module 118 generates occupancy information, and then populates the occupancy table 107 with the occupancy information.
receiving a video feed from a camera located in the room; Martin [0080] In step 402, the occupancy report module 118 receives image data 74 from one or more surveillance cameras 124, 126, 128, 130.
detecting, using an artificial intelligence (AI) model, a person in the video feed; Martin [0049] The image analytics module 108 tracks locations and movements of individuals. For this purpose, the image analytics module 108 analyzes the image data such as by maintaining background model and then using that model to track foreground objects such as individuals. It then generates bounding boxes to track the individuals as they move across the image data of the scene of each camera.
Martin [0050] The image analytics module 108 also identifies objects within the image data of each camera, relative to a background model of the scene. These objects include static elements within the scene such as doors/doorways, chairs, tables, and desks, in examples. The image analytics module 108 may also associate metadata to moving objects (e.g., people), numbers of moving objects, and specific users, to list a few examples.
Martin [0068] It can also be appreciated that the system 100 can incorporate deep learning capabilities. In one embodiment, the VMS 116 includes a deep learning application/module that can identify and classify the building resources. The deep learning application executes on the CPU 180 in a similar fashion as the modules 108/118. The deep learning application can either augment the manual configuration step of defining regions of interest 90 around/relative to building resources in image data of the scene, or possibly eliminate this manual configuration step. In this way, the occupancy report module 118 can generate occupancy information based upon movement and location of individuals relative to building resources identified within the scene by the deep learning application.
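For illustration of the deep-learning person detection that Martin [0068] contemplates, the following is a minimal Python sketch using an off-the-shelf torchvision detector. Martin does not specify a particular model; the detector choice, the score threshold, and the function name detect_people are illustrative assumptions only.

    import torch
    import torchvision

    # Illustrative stand-in for the deep learning module Martin [0068]
    # describes; Martin does not name a specific detector.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()

    def detect_people(frame_tensor, score_min=0.5):
        """Return person bounding boxes for one video frame.
        frame_tensor: CxHxW float tensor with values scaled to [0, 1]."""
        with torch.no_grad():
            pred = model([frame_tensor])[0]
        keep = (pred["labels"] == 1) & (pred["scores"] >= score_min)  # COCO class 1 = person
        return pred["boxes"][keep]  # each box is (x1, y1, x2, y2)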
identifying a location of the person on a floor plan of the room based on a video feed location of the person detected in the video feed; Martin [0083] For each camera, in step 410, the module 118 identifies intersections between the bounding boxes 28 for individuals 30 and the regions of interest 90 for the objects and areas, to obtain time-stamped occupancy information for each camera. In step 412, the module 118 updates the occupancy table 107 with per-camera time-stamped occupancy information. In examples, the occupancy information includes: people counts for each room and area of the room (e.g. work area), and indications as to whether objects such as chairs, desks, and other building resources are occupied or unoccupied, in examples.
Martin [0069] FIG. 2 is a representation of image data of a scene captured by a surveillance camera in FIG. 1…The scene is of office 25 in room 8.
and where the location of the person on the floor plan is at the point of interest for at least a threshold period of time, indicating the point of interest as being occupied. Martin [0083] For each camera, in step 410, the module 118 identifies intersections between the bounding boxes 28 for individuals 30 and the regions of interest 90 for the objects and areas, to obtain time-stamped occupancy information for each camera. In step 412, the module 118 updates the occupancy table 107 with per-camera time-stamped occupancy information. In examples, the occupancy information includes: people counts for each room and area of the room (e.g. work area), and indications as to whether objects such as chairs, desks, and other building resources are occupied or unoccupied, in examples.
Martin [0062] The image analytics module 108 also creates alerts based on the absence of motion (or presence) in the image data from the cameras. This image data is then analyzed by the occupancy report module 118 to determine if a building resource such as a desk, table, seat, room 8,9 or area (e.g. work area 60, office 25) has been unused for a period of time, in one example.
Martin [0092] Such a representation of data enables the following occupancy-related information to be calculated/generated for each building resource. In one example, for each time-stamped frame of image data 74, the occupancy report module 118 can calculate the count of people (e.g. the “peopleCount” field) that are occupying/using each object or building resource at the time indicated by the timestamp. In another example, the module 118 can also provide an indication of whether each building resource is occupied or unoccupied at the time indicated by the timestamp (e.g. the “isOccupied” field). This information can then be aggregated over time to spot trends in utilization/lack of utilization of each building resource.
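The intersection-and-dwell logic Martin describes in [0062], [0083], and [0092] can be illustrated with the short hypothetical Python sketch below. The field name "isOccupied" follows Martin [0092]; the threshold value and all other names are illustrative assumptions, since Martin does not give concrete values.

    DWELL_THRESHOLD_S = 30.0  # illustrative; Martin does not specify a value

    def boxes_intersect(box, roi):
        """Axis-aligned overlap test between a person bounding box and a
        region of interest, each given as (x1, y1, x2, y2)."""
        return box[0] < roi[2] and roi[0] < box[2] and box[1] < roi[3] and roi[1] < box[3]

    def update_is_occupied(roi_state, person_boxes, roi, now):
        """Set roi_state["isOccupied"] once any person box has intersected
        the ROI continuously for the dwell threshold (cf. [0062], [0083])."""
        if any(boxes_intersect(b, roi) for b in person_boxes):
            roi_state.setdefault("since", now)
            roi_state["isOccupied"] = (now - roi_state["since"]) >= DWELL_THRESHOLD_S
        else:
            roi_state.pop("since", None)  # person left; dwell timer resets
            roi_state["isOccupied"] = False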
Martin fails to explicitly teach wherein the point of interest is a seat in the room, and wherein identifying the location of the person is based at least in part on assuming a height of a person when seated. Deepaisarn, in the field of analyzing the position and location of persons in a space from image data using a neural network, teaches wherein the point of interest is a seat in the room, and wherein identifying the location of the person is based at least in part on assuming a height of a person when seated. See Deepaisarn, FIG. 3(c) and Section B (Person Detection), which show a person at a seat and the person's height when seated.
Martin uses deep learning to determine occupancy in order to adjust building-resource controls, and Deepaisarn teaches using a neural network to determine the position and location (occupancy) of a person in order to make recommendations regarding a building space. Thus, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the teachings of Martin with those of Deepaisarn [Introduction], because doing so stores the data in a manner that is more efficient in terms of storage space, privacy concerns, and physical-distancing protocols than directly storing raw video footage.
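As background on the seated-height assumption: a single camera pixel does not determine a person's distance, so assuming that a seated person's head lies at a fixed height above the floor makes the image-to-floor-plan mapping well posed. The following minimal Python sketch assumes a world-frame camera position and per-detection ray direction are available from calibration; the constant and function names are illustrative and do not come from either reference.

    SEATED_HEAD_HEIGHT_M = 1.2  # assumed head height of a seated person; illustrative

    def locate_seated_person(cam_pos, ray_dir):
        """Back-project a head detection to the floor plan by intersecting the
        camera ray with the horizontal plane at the assumed seated head height.
        cam_pos: (x, y, z) camera position; ray_dir: world-frame ray through the
        detected head point, with a nonzero downward (negative z) component."""
        t = (SEATED_HEAD_HEIGHT_M - cam_pos[2]) / ray_dir[2]
        return cam_pos[0] + t * ray_dir[0], cam_pos[1] + t * ray_dir[1]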
Claim 2. Martin further teaches further comprising performing a calibration of video feed locations from the video feed of the camera to locations on the floor plan based at least in part on multiple points from the video feed indicated as corresponding to multiple corresponding points on the floor plan. [0069] FIG. 2 is a representation of image data of a scene captured by a surveillance camera in FIG. 1. The figure is an in-memory representation of what camera4 130 in FIG. 1 “sees” in its field of view and the image analytics applied by the analytics module 108. The scene is of office 25 in room 8.
[0070] A representative frame of image data 74 from camera 130 is shown. In one implementation, objects such as work areas 60, chairs 50 and desks 42 are identified within the scene by the image analytics module 108.
[0071] In another implementation, objects such as work areas 60, chairs 50 and desks 42 are identified to analytics module 108 as part of an initial configuration step or process. For example, an operator of the system 100 draws regions of interest 90 around objects (specifically, around building resources) and categorizes them for the analytics module 108. The operator draws the regions of interest 90 so that the analytics module can then track foreground objects, such as individuals relative to these objects. This information is then sent to the occupancy report module 118.
[0072] In the illustrated example, regions of interest 90-1 through 90-5 are drawn around building resources such as chairs 50-1 through 50-5, and regions of interest 90-6 and 90-7 are drawn around desk building resources such as 60-1 and 60-2. Region of interest 90-8 is also drawn around the entirety of the scene.
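The operator-drawn point correspondences Martin describes in [0069]-[0072] amount to standard plane-to-plane (homography) calibration between the camera view and the floor plan. A minimal Python sketch using OpenCV follows; the coordinate values are hypothetical.

    import numpy as np
    import cv2

    # Four hypothetical point pairs: pixel locations in the camera view and
    # the matching floor-plan locations (e.g., in meters).
    image_pts = np.array([[100, 200], [500, 210], [520, 400], [90, 380]], dtype=np.float32)
    plan_pts = np.array([[0.0, 0.0], [4.0, 0.0], [4.0, 3.0], [0.0, 3.0]], dtype=np.float32)

    # Estimate the homography from the correspondences.
    H, _ = cv2.findHomography(image_pts, plan_pts)

    # Map a newly detected image location onto the floor plan.
    detection = np.array([[[300.0, 300.0]]], dtype=np.float32)
    floor_xy = cv2.perspectiveTransform(detection, H)[0][0]

With more than four correspondences, cv2.findHomography solves in a least-squares sense, which tolerates small operator error in the indicated points.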
Claim 3. Martin further teaches further comprising receiving, from a user interface, an indication of a mapping between the multiple points from the video feed and the multiple corresponding points on the floor plan. [0071] In another implementation, objects such as work areas 60, chairs 50 and desks 42 are identified to analytics module 108 as part of an initial configuration step or process. For example, an operator of the system 100 draws regions of interest 90 around objects (specifically, around building resources) and categorizes them for the analytics module 108. The operator draws the regions of interest 90 so that the analytics module can then track foreground objects, such as individuals relative to these objects. This information is then sent to the occupancy report module 118.
Claim 4. Martin further teaches wherein indicating the point of interest as being occupied includes displaying, on a user interface, a representation of the floor plan and a visual indication that the point of interest is being occupied. [0020] The system might also include an access control system or other building management system and a display. The access control system receives the motion maps and dwell maps from the occupancy report module. The display displays the motion maps and/or dwell maps, possibly for guiding people to unused building resources.
[0106] In step 902, the access control system 120 receives motion maps 21 and dwell maps 31 over the network 23 from the occupancy report module 118. In step 904, the access control system 120 sends the motion maps 21 and dwell maps 31 to displays 117 installed near access points 112 of rooms. According to step 906, at each display installed near access points 112 to rooms 8/9, the motion maps 21 and/or dwell maps 31 are displayed to guide individuals to open/unoccupied rooms 8/9, chairs 50, desks 42, or other resources in a ‘hot desking’ environment.
Claim 6. Martin further teaches further comprising computing, based on indicating multiple points of interest in the room as being occupied, an occupancy metric for the room. [0101] FIG. 8 is a flow chart showing another method of operation for the occupancy report module 118. This method shows how occupancy information 20 collected over time (such as over days, weeks, or months) can spot utilization trends within rooms. In one example, reports that include these utilization trends can then be sent to the building management control system 110 to program the control system 110. Motion maps 21 and dwell maps 31 can also be sent to the ACS 120.
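The occupancy metric of claim 6 can be as simple as the fraction of points of interest currently indicated as occupied. A hypothetical Python sketch follows; the data layout is assumed, not taken from Martin.

    def room_occupancy_metric(points_of_interest):
        """Fraction of points of interest (e.g., seats) currently occupied."""
        if not points_of_interest:
            return 0.0
        occupied = sum(1 for poi in points_of_interest if poi["isOccupied"])
        return occupied / len(points_of_interest)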
Claim 7. Martin further teaches further comprising controlling an automated system for the room based on the occupancy metric. [0017] The proposed system can also be used to control a building automation control system, based upon occupancy information collected over time. The occupancy information collected over time includes utilization of rooms, and the building automation control system sends signals in accordance with the room utilization to a heating/ventilation/air conditioning system (HVAC).
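The occupancy-driven building control of Martin [0017] can be sketched, purely for illustration, as a threshold rule on that metric; the hvac interface below is hypothetical.

    def adjust_hvac(occupancy_metric, hvac, low_use=0.1):
        """Scale back heating/ventilation/air conditioning when room
        utilization is low; revert to comfort mode otherwise."""
        hvac.set_mode("setback" if occupancy_metric < low_use else "comfort")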
Claim 8. Reviewed and analyzed in the same way as claim 1. See the above analysis and rationale.
Claim 9. Reviewed and analyzed in the same way as claim 2. See the above analysis and rationale.
Claim 10. Reviewed and analyzed in the same way as claim 3. See the above analysis and rationale.
Claim 11. Reviewed and analyzed in the same way as claim 4. See the above analysis and rationale.
Claim 12. Reviewed and analyzed in the same way as claim 5. See the above analysis and rationale.
Claim 13. Reviewed and analyzed in the same way as claim 6. See the above analysis and rationale.
Claim 14. Reviewed and analyzed in the same way as claim 7. See the above analysis and rationale.
Claim 15. Reviewed and analyzed in the same way as claim 1. See the above analysis and rationale.
Claim 16. Reviewed and analyzed in the same way as claim 2. See the above analysis and rationale.
Claim 17. Reviewed and analyzed in the same way as claim 3. See the above analysis and rationale.
Claim 18. Reviewed and analyzed in the same way as claim 4. See the above analysis and rationale.
Claim 20. Reviewed and analyzed in the same way as claim 6. See the above analysis and rationale.
Claim 21. Reviewed and analyzed in the same way as claim 7. See the above analysis and rationale.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DELOMIA L GILLIARD whose telephone number is (571)272-1681. The examiner can normally be reached 8am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, John Villecco can be reached at (571) 272-7319. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DELOMIA L GILLIARD/Primary Examiner, Art Unit 2661