DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on February 26, 2026 has been entered.
Response to Amendment
Claims 78-91, 94-96, and 98 are pending in the application. Claims 78, 79, and 96 are currently amended. Claims 1-77, 92-93, and 97 have been canceled. Claim 98 is new.
Response to Arguments
With regard to Applicant’s remarks dated February 26, 2026:
Regarding the rejection of claims 78-91 and 94-97 under 35 U.S.C. 102(a)(2), Applicant’s amendment and arguments have been fully considered. Applicant argues that “Tusch never discusses a user-specific movement. for example, gait. The sections of Tusch cited against this element when it was in claim 79 discuss the user's trajectory, that is, where she is headed, rather than her gait which is how she walks. Further, those references of Tusch, unlike the current claims, are not directed toward determining an identity of a user but rather toward deciphering her possible intentions”. Examiner agrees to the extent that Tusch relies on user-specific movements to decipher the user’s intention but does not teach identifying the user based solely on the user’s unique movement, such as gait. Therefore, the rejection has been withdrawn. However, new grounds of rejection are made in view of newly discovered references.
As to any arguments not specifically addressed, they are the same as those discussed above.
Claim Objections
Claims 78 and 96 are objected to because of the following informalities: in the newly added limitation, the word “serious” was likely intended to be “series”.
Appropriate correction or explanation is required.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 78-91, 94-96, and 98 are rejected under 35 U.S.C. 103 as being unpatentable over Tusch et al. (US 2021/0279475 A1) in view of Hwang et al. (US 2022/0210341 A1).
As to claim 78, Tusch teaches a method for external behavior recognition for control of an environment of a facility [smart home automation] (par. [0118]-[0121], [0391]), the method comprising:
training a behavior learning module with unique external behavior exhibited by known users [training ART platform by creating person’s own digital avatar that is programmed with a specific person’s gesture but not someone else’s to be recognized in operation] (par. [0315]-[0316], [0391], [0442], [1010]);
capturing, with an imaging system of the facility, a plurality of successive images of a user of the facility (Figs. 9, 11, 18, 60, par. [0344], [0349], [0431]);
obtaining, from the plurality of successive images, external behavior data of the user, wherein the external behavior data is representative of one or more physical actions taken by the user during the capturing of the plurality of successive images [people trajectory, pose, gesture, identity generation] (Fig. 18, par. [0289], [0356], [0357]);
determining an identity of the user as one of the known users based at least in part on the external behavior data of the user by the behavior learning module trained with the unique external behavior exhibited by the known users [identity+gesture=control. Performing identification of a known user and what they are trying to do that is specific to that known user] (par. [0421]-[0422], [0441]-[0442]); and
implementing environment customizations associated with the identity of the user [creating ART events such as customized temperature control for the detected individual or allowing an adult to turn on/off smoke alarm, but not kids] (Fig. 43, par. [0421]-[0422], [0738]).
Tusch fails to expressly teach that the unique external behavior exhibited by the known users includes a user-specific movement pattern determined from a series of physical actions performed by each respective known user.
Hwang is directed to performing a user-specific customization associated with a user identifier based on determining multiple biometric characteristics of a person and associating the characteristics with a user identifier unique to that person (abstract). In particular, Hwang teaches that the unique external behavior exhibited by the known users includes a user-specific movement pattern determined from a series of physical actions performed by each respective known user [typical gaits, poses, gestures, and other movements that can be used to identify particular persons in order to apply appropriate customizations] (par. [0006], [0022], [0040], [0047], [0095]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method and system of Tusch such that the unique external behavior exhibited by the known users includes a user-specific movement pattern determined from a series of physical actions performed by each respective known user, in order to recognize and track the person within the environment based on biometric characteristics other than a facial recognition result (par. [0040] of Hwang).
As to claim 79, Tusch in view of Hwang teaches that the user-specific movement pattern comprises a gait (par. [0022], [0040], [0047], [0095] in Hwang).
As to claim 80, Tusch teaches that the imaging system comprises a camera, an infrared (IR) camera, a lidar sensor, or an imaging radar system (Fig. 7).
As to claim 81, Tusch teaches that obtaining the external behavior data of the user comprises extracting a respective pose of the user from each image of the plurality of successive images [extracting a pose of the individual] (par. [0413], [0431]).
As to claim 82, Tusch teaches that determining the identity of the user based at least in part on the external behavior data of the user comprises determining a unique identifier associated with the user [unique identity] (par. [0093]).
As to claim 83, Tusch teaches that implementing environment customizations comprises controlling an environmental aspect using one or more building systems (Fig. 43, par. [0738]).
As to claim 84, Tusch teaches that the one or more building systems comprise a tintable window [non-functional descriptive material. It is well known for a home to have windows that can be tinted. The claim does not require any action with respect to a window] (par. [0478]).
As to claim 85, Tusch teaches that implementing the environment customizations comprises adjusting a temperature, window tint, and/or lighting within the facility (Fig. 43, par. [0738]).
As to claim 86, Tusch teaches that capturing the plurality of successive images of the user is responsive to a triggering event [event trigger basis] (par. [0945]).
As to claim 87, Tusch teaches that the triggering event comprises detection of the user at a location of the facility (par. [0945]).
As to claim 88, Tusch teaches that determining the identity of the user is further based on sensor information regarding the user [face or iris recognition using a sensor] (par. [0346], [1022]).
As to claim 89, Tusch teaches that the sensor information comprises information indicative of a sound made by the user, dimensions of the user, and/or biometric information regarding the user [face or iris recognition using a sensor] (par. [0346], [1022]).
As to claim 90, Tusch teaches that determining the identity of the user is further based on one or more device inputs received from the user [voice recognition] (par. [1018]-[1020]).
As to claim 91, Tusch teaches that the one or more device inputs comprise a temperature setting, a window tint setting, and/or a lighting setting [voice control system within the home] (par. [0186], [0194]).
As to claim 94, Tusch teaches that training the behavior learning module comprises using previously-obtained sets of images of the known users taking the one or more physical actions as a positive dataset [the system is trained using a large dataset of normal behavior of users and uses it to flag abnormal behavior in operation] (par. [1014], [1035]).
As to claim 95, Tusch teaches that the previously-obtained sets of images are obtained by the imaging system (par. [1011]-[1013]).
As to claim 96, Tusch in view of Hwang teaches an apparatus for external behavior recognition for control of an environment of a facility [computer-vision system] (par. [1057] in Tusch), the apparatus comprising one or more controllers comprising circuitry (par. [0323]-[0325] in Tusch), which one or more controllers are configured to perform the method steps as discussed per claim 78, above.
As to claim 98, Tusch in view of Hwang teaches all the elements as discussed per corresponding method claim 79, above.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to OLEG SURVILLO whose telephone number is (571)272-9691. The examiner can normally be reached from 9:00 am to 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ario Etienne can be reached at 571-272-4001. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/OLEG SURVILLO/Primary Examiner, Art Unit 2457