DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
The amendments filed on 12/18/2025 have been entered, and action follows:
Response to Arguments
Applicant’s arguments with respect to claim(s) have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-23 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Regarding claims 1, 10, and 16, the phrase "such that" renders the claims indefinite because it is unclear whether the limitations following the phrase are part of the claimed invention. See MPEP § 2173.05(d).
The dependent claims are likewise rejected, as they depend from rejected independent claims.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 4-23, as best understood, are rejected under 35 U.S.C. 103 as being unpatentable over Khadloya et al. (US Pub. No. 2019/0278976) in view of Darling et al. (US 10,380,880) and Kirkby et al. (US 9,009,805).
With respect to claim 4, Khadloya discloses A computer-implemented method for determining a false trigger event, comprising:
receiving image data that represents one or more objects, (see paragraph 0047, wherein …the video camera 102 is configured to receive or capture still images or video information…);
assigning a confidence score to each of the one or more objects based on comparing the one or more objects to the plurality of historical false trigger events;
in accordance with determining that the confidence score of each of the one or more objects is greater than a predetermined threshold (paragraph 0057, wherein …If an enrolled face is detected, then the face recognition engine can return …confidence level …If a face is detected but not determined to be part of the enrolled list of users “high confidence”…);
updating the local dataset to include the one or more objects as one or more historical false trigger events, (see paragraph 0055, wherein …the local server 104 and/or the remote server 106 can be configured to receive image information from the camera 102 for concurrent or subsequent additional processing, such as to determine whether a known face is detected or recognized. When a face is detected or recognized, the event information can be updated in server to reflect this as part of the event, such as using metadata associated with the video or other data associated with the event…; also see figure 3 for the communication between the local and remote server, numerical 318 and 320); and
in accordance with determining that at least one confidence score of the one or more objects is lower than the predetermined threshold, providing a security alert for display on one or more user devices, (see paragraph 0066, wherein …a facial recognition algorithm can be that …(iv) some individuals are determined to be present in the scene but the system 100 has a relatively low confidence “confidence score of the one or more objects is lower than the predetermined threshold” that at least one of the individuals corresponds to a recognized or enrolled individual. Information about …individuals, …communicated to a user…), as claimed.
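For clarity of the mapping above, the decision logic recited in claim 4 can be sketched as follows. This is a hypothetical illustration only; the function and variable names and the threshold value are the editor's assumptions, not language from the claims or the cited references:

```python
# Hypothetical sketch of the claim 4 logic: compare each detected object's
# confidence score against a predetermined threshold; if all scores exceed it,
# update the local dataset of historical false trigger events, otherwise
# provide a security alert. Names and threshold value are illustrative.

PREDETERMINED_THRESHOLD = 0.8  # assumed value for illustration

def process_objects(detected, local_dataset):
    """detected: list of (object_id, confidence_score) pairs."""
    if all(score > PREDETERMINED_THRESHOLD for _, score in detected):
        # Every object was matched with high confidence: record as historical.
        local_dataset.extend(obj for obj, _ in detected)
        return "updated local dataset"
    # At least one low-confidence object: alert the user devices.
    return "security alert"

dataset = []
print(process_objects([("obj1", 0.95), ("obj2", 0.91)], dataset))  # updated local dataset
print(process_objects([("obj1", 0.95), ("obj2", 0.40)], dataset))  # security alert
```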
However, Khadloya fails to explicitly disclose a false trigger event; and receiving, from a computing device, a central dataset representing a plurality of historical false trigger events aggregated from a plurality of different home security systems;
storing at least a portion of the central dataset to create a local dataset comprising at least a portion of the plurality of historical false trigger events, (emphasis added), as claimed.
Darling teaches a false trigger event, (emphasis added; see Abstract, filtering false alarm), as claimed.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the two references, as they are analogous art solving a similar problem of security using image analysis. The teaching of Darling to reduce false alarms, i.e., false triggers, could be incorporated into the Khadloya system as suggested (see Khadloya paragraph 0051, wherein …capture or record video …can be configured to stream …to a user for viewing either locally or remotely), for suggestion, and modifying the system yields a system that determines false alarms (see Darling col. 1, lines 59-61), for motivation.
Kirkby teaches receiving, from a computing device, a central dataset representing a plurality of historical false trigger events aggregated from a plurality of different home security systems; storing at least a portion of the central dataset to create a local dataset comprising at least a portion of the plurality of historical false trigger events, (see figure 3, 100 communicating with 164 “central database”; col. 10, lines 32-40 wherein … smart home environment 100. In addition, in some implementations, the devices and services platform 300 communicates with and collects data from a plurality of smart home environments across the world. For example, the smart home provider server system 164 collects home data 302 from the devices of one or more smart home environments, where the devices may routinely transmit home data…), as claimed.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the references, as they are analogous art solving a similar problem of security using image analysis. The teaching of Kirkby to transmit information from the home security system to the central database could be incorporated into the Khadloya system as suggested (see Khadloya paragraph 0051, wherein …capture or record video …can be configured to stream …to a user for viewing either locally or remotely), for suggestion, and modifying the system yields a system for home monitoring (see Kirkby col. 7, lines 24-27), for motivation.
With respect to claim 5, combination of Khadloya, Darling and Kirkby further discloses wherein in accordance with determining that at least one confidence score of the one or more objects is lower than the predetermined threshold, providing the security alert for display on one or more user devices further comprises:
receiving, from at least one of the one or more user devices, a user input identifying the one or more objects as a current false trigger event, (see Darling figure 2A, numerical 214, 216 and 220); updating the local dataset to include the one or more objects as one or more historical false trigger events; and transmitting the local dataset to the computing device, (see Khadloya paragraph 0055, wherein … In an example, in response to a trigger for an event, the local server 104 and/or the remote server 106 can be configured to receive image information from the camera 102 for concurrent or subsequent additional processing, such as to determine whether a known face is detected or recognized. When a face is detected or recognized, the event information can be updated in server to reflect this as part of the event, such as using metadata associated with the video or other data associated with the event…; also see figure 3 for the communication between the local and remote server, numerical 318 and 320), as claimed.
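The claim 5 feedback loop mapped above can be sketched as follows. This is a hypothetical illustration; the function names, the `transmit` callback, and the sample object label are the editor's assumptions, not language from the claims or references:

```python
# Hypothetical sketch of the claim 5 logic: on user input identifying the
# alerted objects as a current false trigger event, record them in the local
# dataset as historical false trigger events and transmit the dataset to the
# computing device. All names are illustrative assumptions.

def handle_user_feedback(identified_false_trigger, alert_objects,
                         local_dataset, transmit):
    if identified_false_trigger:
        # User confirmed a false trigger: update the local dataset...
        local_dataset.extend(alert_objects)
        # ...and send it upstream (e.g., to a central server).
        transmit(local_dataset)
    return local_dataset

sent = []  # stands in for the upstream computing device
handle_user_feedback(True, ["porch_shadow"], [], sent.append)
```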
With respect to claim 6, combination of Khadloya, Darling and Kirkby further discloses wherein the security alert comprises at least a portion of the image data, (see Darling figure 2A, numerical 216 the image of the person “a portion of the image data”), as claimed.
With respect to claim 7, the combination of Khadloya, Darling and Kirkby discloses all the elements as claimed in claim 4 above. However, the combination fails to explicitly disclose that the central dataset is distributed to a plurality of home security gateway devices, as claimed. It is, however, well known in the art (official notice) to have data collaborated among users (see figure 2 of US Pub. 2014/0280137). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply this conventional knowledge (official notice) to distribute the central dataset to a plurality of end users, i.e., home security gateway devices, as claimed.
With respect to claim 8, combination of Khadloya, Darling and Kirkby further discloses wherein the one or more user devices comprise at least one of a home security gateway device, a mobile phone, and a personal computer, (see Darling figure 2A, numerical 214 “a mobile phone”), as claimed.
With respect to claim 9, combination of Khadloya, Darling and Kirkby further discloses wherein the one or more objects are identified in the image data by an image analysis tool, (see Khadloya figure 2, Face recognition engine “an image analysis tool”), as claimed.
With respect to claim 21, combination of Khadloya, Darling and Kirkby further discloses wherein in accordance with determining that the image data represents a false trigger event, the method further comprises: generating training data for training the artificial intelligence model based on the false trigger event, the training data comprising the image data; training the artificial intelligence model using the training data; and transmitting the training data to a central database, (see Khadloya paragraph 0071, wherein …neural network “artificial intelligence model”…; and paragraph 0086, wherein …a training database including pre-loaded human face images for comparison to a received image (e.g., image information received using the camera 102) during a detection process. The training database can include …positive image clips for positive identification of objects as human faces and can include negative image clips “false trigger events” …), as claimed.
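The claim 21 training step mapped above can be sketched as follows. This is a hypothetical illustration; the labeling scheme (false trigger events as negative clips, per the Khadloya passage quoted above) and all names are the editor's assumptions:

```python
# Hypothetical sketch of the claim 21 logic: generate labeled training data
# from a detected event; false trigger events become negative image clips,
# other events positive clips. Field names are illustrative assumptions.

def build_training_data(image_data, is_false_trigger):
    return {
        "image": image_data,
        "label": "negative" if is_false_trigger else "positive",
    }

# A small training set drawn from two events; the bytes are placeholders.
training_set = [
    build_training_data(b"frame1", True),   # false trigger -> negative clip
    build_training_data(b"frame2", False),  # recognized face -> positive clip
]
```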
With respect to claim 22, combination of Khadloya, Darling and Kirkby further discloses wherein assigning the confidence score is performed by an artificial intelligence model, (see Khadloya paragraph 0071, wherein …neural network “artificial intelligence model”…; and paragraph 0081, wherein …the scoring unit 206 can be configured to compute a score for a candidate region…), as claimed.
With respect to claim 23, combination of Khadloya, Darling and Kirkby further discloses wherein the central database comprises training data from a plurality of computing devices, (see Khadloya paragraph 0077, wherein …neural networks …can be trained using various data sources. For example, specific training data that corresponds to an end application or end user can be used to train the model employed by the network. The models can be specific to different use cases or environments or can be more general…), as claimed.
Claims 10-14 are rejected for the same reasons as set forth in the rejection of claims 4, 5, 6, 7 and 9, because claims 10-14 are claiming subject matter of similar scope as claimed in claims 4, 5, 6, 7 and 9 respectively.
With respect to claim 15, combination of Khadloya, Darling and Kirkby further discloses wherein the home security edge device comprises at least one of a motion detector, a camera, and a microphone, (see Khadloya paragraph 0063, for audio “microphone”; figure 1, numerical 102 “camera”; and paragraph 0017, wherein …motion event detected using information from the same one or more cameras or from a different motion sensor), as claimed.
Claims 16-20 are rejected for the same reasons as set forth in the rejection of claims 4, 5, 8, 6 and 9, because claims 16-20 are claiming subject matter of similar scope as claimed in claims 4, 5, 8, 6 and 9 respectively.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to VIKKRAM BALI whose telephone number is (571)272-7415. The examiner can normally be reached Monday-Friday 7:00AM-3:00PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Gregory Morse can be reached at 571-272-3838. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/VIKKRAM BALI/Primary Examiner, Art Unit 2663