DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Claim Status
In the amendment filed on December 8, 2025, claims 11-20 have been cancelled and claims 21-30 have been newly added. Therefore, claims 1-10 and 21-30 are currently pending for examination.
Applicant’s election without traverse of Group I in the reply filed on 12/08/2025 is acknowledged.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 6 and 26 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA applications, the applicant) regards as the invention.
Claims 6 and 26 recite “the one or more notification parameters,” which lacks proper antecedent basis in the claims.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-4, 6, 9, 10, 21-24, 26, 29 and 30 are rejected under 35 U.S.C. 103 as being unpatentable over Li et al. (Li: US 2017/0206761) in view of Mullins (US 2024/0144792 A1), further in view of Anderson (US 2013/0182107 A1).
Regarding Claim 21, Li teaches a system (Fig. 1, 100) comprising:
a data capture device (Fig. 1, camera 110) configured to capture one or more images of a subject and audio of the subject (Par 58, image feature(s) includes human bodies, human faces, pets, things, etc. and Par 65, speech, crying, scream, sound caused by an animal); and
a computing device (Fig. 1, computing device 120) configured to:
acquire, via the data capture device, the one or more images of the subject and the audio of the subject (Fig. 2, step 202, extract video frames and step 214, extract audio signals),
determine, based on the one or more images of the subject, that a motion threshold has been exceeded (Par 60, processor 121 determines a difference between a video frame and its preceding (or subsequent) video frame by, for example, comparing pixel values of the video frame and the preceding (or subsequent) video frame. If the difference is equal to or exceeds a threshold),
determine, based on the audio of the subject, that an audio threshold has been exceeded (Par 64, processor 121 determines a change in sound level of the audio signal. If the change is equal to or greater than a threshold, processor 121 identifies the change as a sound feature); and
based on the motion threshold being exceeded and the audio threshold being exceeded, send, via a local area network (Par 27, local wireless network), a notification to one or more user devices (Par 12, identifying a special event based on analysis of video frame(s) and/or audio signal; Pars 41-42; and Par 43, At 212, processor 121 transmits the video, video preview frames, and/or the information relating to the detected special event(s) (if any) to user device 140 via network 130).
Li does not explicitly disclose that the motion threshold is exceeded for a first length of time or that the audio threshold is exceeded for a second length of time.
However, the preceding limitation is known in the art of monitoring systems. Mullins teaches an audio monitoring device (abstract and Par 7) in which the audio threshold is exceeded for a second length of time (Par 8, activation if levels are greater than 70 dB, sustained for five seconds or longer, and Par 19).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Mullins in order to make the system more or less sensitive (Mullins: Par 19).
Neither Li nor Mullins explicitly discloses that the motion threshold is exceeded for a first length of time.
However, the preceding limitation is known in the art of monitoring systems. Anderson teaches an activity monitoring system that detects events based on video processing of a substantially live sequence of images from a video camera (Fig. 3, 13, 14 and abstract) and a microphone (Par 44), and further teaches determining that a motion threshold has been exceeded for a first length of time (Par 74, The motion alert integrator 36 may suppress Motion Alarm Signal 39 for a time period 107 after Motion Level Value 35-b exceeds the Motion Threshold 82-b.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Anderson in order to allow customization of event detection and/or behavior upon detecting events (Anderson: Par 8).
Regarding Claim 1, the claimed steps of the method correspond with the elements of the device as addressed in claim 21. Since the device has been made obvious, the steps of using the device in its intended manner are also made obvious.
Regarding Claim 22, the combination of Li, Mullins and Anderson teaches the system of claim 21, wherein the data capture device comprises an image sensor (Li: Fig. 1, camera 110) and a microphone (Anderson: Fig. 3, 14) and the one or more user devices comprise one or more of a smartwatch (Li: Par 29, user device 140 may be a smart phone, a tablet, a personal computer, a wearable device (e.g., Google Glass™ or smart watches), a haptic device, an audio/video monitor, a smartphone, a computer, a television, a set-top box, or a streaming device.).
Claim 2 is rejected for reasons similar to those given for claim 22 above.
Regarding Claim 23, the combination of Li, Mullins and Anderson teaches the system of claim 21, wherein the motion threshold comprises an amount of motion (Li: Par 60) and the audio threshold comprises a decibel level (Li: Par 64, sound level is dB).
Claim 3 is rejected for reasons similar to those given for claim 23 above.
Regarding Claim 24, the combination of Li, Mullins and Anderson teaches the system of claim 21, wherein the first length of time comprises a user-programmable length of time (Anderson: Par 74, The time period 107 for suppression of motion events in the Primary Region 101 may be selectable.) and the second length of time comprises a user-programmable length of time (Mullins: the user can increase or decrease the threshold from 5 seconds in increments of 1 sec up to a maximum of 90+ seconds).
Claim 4 is rejected for reasons similar to those given for claim 24 above.
Regarding Claim 26, the combination of Li, Mullins and Anderson teaches the system of claim 21, wherein the one or more notification parameters comprise one or more of a type of sound in the audio (Li: Par 38, baby crying, glass shattering and see also Par 65), motion associated with the audio, a type of motion, facial recognition, a light level, a quality of image, a detection zone, an ignore zone, a day of week, a time of day, a location of one or more users, an application setting, or an amount of time since a previous notification.
Claim 6 is rejected for reasons similar to those given for claim 26 above.
Regarding Claim 29, the combination of Li, Mullins and Anderson teaches the system of claim 21, wherein the notification sent to the one or more user devices causes the one or more user devices to one or more of: exit a standby mode, output the one or more images (Li: Par 43, At 212, processor 121 transmits the video, video preview frames, and/or the information relating to the detected special event(s) (if any) to user device 140 via network 130 and Par 52, user device 140 presents to the user the received video, sample videos, video preview frames (or thumbnail images thereof), and/or information relating to the special event(s) in a UI.), output the audio, or emit a haptic output.
Claim 9 is rejected for reasons similar to those given for claim 29 above.
Regarding Claim 30, the combination of Li, Mullins and Anderson teaches the system of claim 21, wherein the system further comprises a remote computing device (Li: Par 23, computing device 120), wherein the data capture device is further configured to establish a communication session with the remote computing device (Li: Fig. 1, Camera 110 communicates with Computing device 120 via network 130);
wherein the remote computing device is configured to:
receive, via the communication session, the one or more images and the audio (Li: Par 20, camera 110 may be configured to transmit a stream video to computing device 120 and Par 38);
output the one or more images and the audio (Li: Par 28, User device 140 is configured to receive data (e.g., image and/or video data) from camera 110 and/or computing device 120 via network 130.);
receive an input indicative of an alert event (Li: Par [0022] Computing device 120 is configured to analyze the video received from camera 110 and Par 38); and
send a notification indicative of the alert event (Li: Par 43, At 212, processor 121 transmits the video, video preview frames, and/or the information relating to the detected special event(s) (if any) to user device 140 via network 130).
Claim 10 is rejected for reasons similar to those given for claim 30 above.
Claims 5, 7, 25 and 27 are rejected under 35 U.S.C. 103 as being unpatentable over Li in view of Mullins and Anderson further in view of Williams et al. (Williams: US 10825318 B1).
Regarding Claim 25, the combination of Li, Mullins and Anderson teaches the system of claim 21, but does not explicitly disclose wherein the computing device is further configured to:
receive an indication from the one or more user devices that the notification is indicative of an alert event; determine, based on the alert event, one or more notification parameters associated with the alert event; and train, based on the one or more notification parameters, a predictive model configured for predicting a likelihood that an alert event is occurring.
However, the preceding limitation is known in the art of monitoring devices. Williams teaches a system for identifying a condition/event based on sensor data to send a notification indicating the condition/event (abstract) and further teaches receive an indication from the one or more user devices that the notification is indicative of an alert event; determine, based on the alert event, one or more notification parameters associated with the alert event; and train, based on the one or more notification parameters, a predictive model configured for predicting a likelihood that an alert event is occurring (Col. 12 lines 52-65; Models may be created based upon example inputs in order to make valid and reliable predictions for novel inputs. Additionally or alternatively, the machine learning programs may be trained by inputting sample data sets or certain data into the programs, such as mobile device, and Col. 13 lines 53-60).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Williams in order to make valid and reliable predictions (Williams: Col. 12 lines 52-65).
Claim 5 is rejected for reasons similar to those given for claim 25 above.
Regarding Claim 27, the combination of Li, Mullins and Anderson teaches the system of claim 21, wherein the computing device is further configured to:
determine, based on one or more of the one or more images or the audio, one or more values of one or more notification parameters; provide the one or more values of the one or more notification parameters to determine a likelihood that an alert event is occurring (Li: Par 42, when determining the score, processor 121 gives a different weight to special events detected based on the video frames than to those detected based on the audio signal.); and
receive the likelihood that an alert event is occurring, wherein the computing device is further configured to send, via the local area network, the notification to the one or more user devices based on the likelihood that an alert event is occurring exceeding an event threshold (Li: Par 42, processor 121 determines a score of cross-referencing two detected special events around the same time that are detected separately by analyzing the video frames and the audio signal. If the determined score equals to or exceeds a threshold, processor 121 counts the events as a single special event and performs step 210 as described. On the other hand, if the score is less than the threshold, processor 121 does not recognize them as a special event. In doing so, a false event may be prevented from being recorded and Par 43).
The combination does not explicitly disclose a predictive model configured for predicting the likelihood of the alert event.
However, the preceding limitation is known in the art of monitoring devices. Williams teaches a system for identifying a condition/event based on sensor data to send a notification indicating the condition/event (abstract) and further teaches using a predictive model configured for predicting the likelihood of the alert event (Col. 13 lines 53-60; The identified abnormalities or anomalies in the historical data and their corresponding conditions may comprise a predictive model to be used to analyze current sensor data. For example, the model may include a prediction of a condition associated with an individual in the home environment based upon certain abnormal or anomalous patterns in current sensor data.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide a predictive model configured for predicting the likelihood of the alert event in order to accurately predict the correct output (Williams: Col. 13 lines 40-45).
Claim 7 is rejected for reasons similar to those given for claim 27 above.
Claims 8 and 28 are rejected under 35 U.S.C. 103 as being unpatentable over Li in view of Mullins and Anderson further in view of Moses (US 9460596).
Regarding Claim 28, the combination of Li, Mullins and Anderson teaches the system of claim 21, but does not explicitly disclose wherein the computing device is further configured to: determine that the data capture device is not in communication with the local area network; and establish a direct communication link with the one or more user devices.
However, the preceding limitation is known in the art of monitoring systems. Moses teaches a monitoring system configured to: determine that the device is not in communication with the local area network; and establish a direct communication link with the one or more user devices (Col. 11 lines 30-42, if the wireless router or wifi network experiences problems or an outage, the system may then use the cellular data network to continue sending alerts and receiving user instructions).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Moses in order to enable the user to take appropriate action (Col. 11 lines 40-42).
Claim 8 is rejected for reasons similar to those given for claim 28 above.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Prior art references cited for the record but not relied upon in this Office Action are listed in the attached PTO-892.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Nay Tun whose telephone number is (571)270-7939. The examiner can normally be reached on Mon-Thurs from 9:00-5:00. If attempts to reach the examiner by telephone are unsuccessful, the examiner's Supervisor, Steven Lim can be reached on (571) 270-1210. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
/Nay Tun/Primary Examiner, Art Unit 2688