Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on June 12, 2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-19 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Aquino et al. (US Patent 11,282,367 B1).
Regarding claim 1, Aquino et al. discloses a method performed by an assistance platform (a processor system configured to receive video frames from a video stream provided by the plurality of video cameras, col 3, lines 1-4), the method comprising: analyzing video data, of an environment, using one or more machine learning models trained to recognize guests and characteristics of the guests (col 3, lines 10-15); recognizing, based on analyzing the video data, a plurality of guests located in the environment; recognizing, based on analyzing the video data, a plurality of characteristics exhibited by the plurality of guests (col 3, lines 30-40), wherein the plurality of characteristics affect the ability of at least one guest to respond to an emergent event (col 3, lines 55-60, where the event is construed as a fall event); determining, prior to detecting the emergent event, one or more characteristics of the plurality of characteristics that are to affect a mobility of a guest, of the plurality of guests, during the emergent event; detecting that the emergent event has occurred (col 4, lines 4-19); determining, based on the one or more characteristics, an action to be performed to assist the guest during the emergent event; and causing the action to be performed to assist the guest during the emergent event (col 12, lines 5-10, where an alert is sent to an authorized caregiver in multiple forms).
Regarding claim 15, the analysis of claim 1 is incorporated herein. Aquino et al. also discloses a non-transitory computer-readable medium storing a set of instructions, the set of instructions comprising: one or more instructions that, when executed by one or more processors of a device, cause the device to: analyze video data of an environment; recognize, based on analyzing the video data, an individual out of a plurality of objects in the environment; recognize, based on analyzing the video data, a plurality of characteristics exhibited by the individual, wherein at least one of the plurality of characteristics affects the ability of at least one guest to respond to an event; detect that the event has occurred in the environment; determine, based on the one or more characteristics, an action to be performed to assist the individual; and cause the action to be performed to assist the individual (claim 1, Figure 26; col. 25, lines 17-52).
[Image: media_image1.png, greyscale]
[Image: media_image2.png, greyscale]
Regarding claims 2 and 19, Aquino et al. discloses the method of claim 1 and non-transitory computer-readable medium of claim 15, wherein determining the one or more characteristics comprises one or more of: determining, based on analyzing the video data, a characteristic indicating a mobility restriction of the guest; determining, based on analyzing the video data, a characteristic indicating an age of the guest; or determining, based on analyzing the video data, a characteristic indicating that the guest is carrying a load (col. 7, lines 22-36).
[Image: media_image3.png, greyscale]
Regarding claims 3, 4, and 20, Aquino et al. discloses the method of claim 1 and non-transitory computer-readable medium of claim 16, wherein determining the action comprises: determining, based on the one or more characteristics, equipment configured to assist guests with the one or more characteristics during the emergent event, and wherein causing the action to be performed comprises: causing the equipment and personnel to be dispatched to assist the guest during the emergent event (alert module 340, col 12, lines 1-14).
[Image: media_image4.png, greyscale]
Regarding claim 5, Aquino et al. discloses the method of claim 1, wherein determining the action comprises: generating, based on the one or more characteristics, information to assist guests with the one or more characteristics during the emergent event, and wherein causing the action to be performed comprises: causing the information to be provided to the guest to assist the guest during the emergent event (col 12, lines 34-45, where guests are the observers who are not caregivers; col 18, lines 15-24).
Regarding claim 6, Aquino et al. discloses the method of claim 1, further comprising: storing, prior to detecting that the emergent event occurred, assistance information identifying the plurality of characteristics and identifying locations of a particular one of the guests of the plurality of guests, wherein the particular one guest exhibits the characteristics; retrieving the assistance information; and analyzing the assistance information to determine the one or more characteristics (col 12, lines 34-45, where guests are the observers who are not caregivers; col 18, lines 15-24; Figure 19).
Regarding claim 7, Aquino et al. discloses the method of claim 6, further comprising: deleting the assistance information after causing the action to be performed (col 17, lines 55-60: following image processing, the processor system 220 deletes the raw video frame in order to ensure the privacy of persons under observation by the video monitoring system).
Regarding claims 8 and 16, Aquino et al. discloses a system and non-transitory computer-readable medium, comprising: one or more camera devices configured to obtain video data of an environment; an assistance platform configured to: analyze the video data of the environment; recognize, based on analyzing the video data, an individual out of a plurality of objects in the environment; recognize, based on analyzing the video data (Figures 2 and 25), a plurality of characteristics exhibited by the individual, wherein the recognized characteristics affect the ability of the individual to respond to an event; store assistance information identifying the plurality of characteristics and identifying a location for the individual (Figure 24); detect whether the event has occurred in the environment; in response to detecting that the event has occurred, retrieve the assistance information; determine, based on the assistance information, an action to be performed to assist the individual; cause the action to be performed to assist the individual (Figures 18-20); and delete the assistance information after causing the action to be performed (col 17, lines 55-60: following image processing, the processor system 220 deletes the raw video frame in order to ensure the privacy of persons under observation by the video monitoring system).
Regarding claim 9, Aquino et al. discloses the system of claim 8, wherein the individual is a first individual, wherein the plurality of characteristics are a first plurality of characteristics, and wherein the assistance platform is further configured to: recognize, based on analyzing the video data, a second individual in the environment; recognize, based on analyzing the video data, a second plurality of characteristics exhibited by the second individual; and update the assistance information to further identify the second plurality of characteristics (col 10, lines 36-41, facial recognition processor 315).
Regarding claim 10, Aquino et al. discloses the system of claim 9, wherein the assistance platform is further configured to: provide the assistance information for display to provide assistance to the second individual (Figures 18 and 20).
Regarding claim 11, Aquino et al. discloses the system of claim 9, wherein the action is a first action, and wherein the assistance platform is further configured to: determine, based on one or more second characteristics of the second plurality of characteristics, a second action to be performed to assist the second individual; and cause the second action to be performed to assist the second individual (col 12, lines 1-10).
Regarding claim 12, Aquino et al. discloses the system of claim 11, wherein the one or more first characteristics are different than the one or more second characteristics, and wherein the first action is different than the second action based on the one or more first characteristics being different than the one or more second characteristics (col 11, lines 55-60, rulesets customized for different scenarios, persons, and environments).
Regarding claim 13, Aquino et al. discloses the system of claim 8, wherein the assistance platform is further configured to: determine a location of the individual, and wherein, to store the assistance information, the assistance platform is further configured to: store the assistance information, wherein the assistance information further identifies the location of the individual (col 11, lines 39-60, where the data analyzer 325 and context analyzer 330 of the processor system 220 cooperate to combine the collected information, such as the locations and movement of the objects and the person's identity).
Regarding claims 14 and 17-18, Aquino et al. discloses the system of claim 8 and the non-transitory computer-readable medium of claim 16, wherein the assistance platform is further configured to: determine whether the event can no longer occur or that the individual has left the location where the event could occur; and in response to detecting that the event has not occurred, that the event can no longer occur, or that the individual has left the location where the event could occur, delete the assistance information (col 17, lines 55-60: following image processing, the processor system 220 deletes the raw video frame in order to ensure the privacy of persons under observation by the video monitoring system).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JESSICA YIFANG LIN whose telephone number is (571)272-6435. The examiner can normally be reached M-F 7:00am-6:15pm, with an optional day off.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Vu Le, can be reached at 571-272-7332. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JESSICA YIFANG LIN/Examiner, Art Unit 2668 January 7, 2026
/VU LE/Supervisory Patent Examiner, Art Unit 2668