Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/08/2025 has been entered.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
Claims 1, 3-4, 6-8, 11-14, 17, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Barnes et al. (US 20150087258 A1) in view of Schmouker et al. (US 20180368684 A1) and further in view of Freeman et al. (US 20020005895 A1).
Regarding claim 1, Barnes et al. teaches a system comprising a processor and a memory that stores computer-executable instructions that, when executed by the processor, cause the processor to perform operations (see para [0015]; “The system can include a processor and a memory. The memory can store computer-executable instructions that, when executed by the processor, cause the processor to perform operations”) comprising: receiving captured data from a user device and information from a data source, the data source comprising a server operating on a network, wherein the captured data identifies an activity occurring at the user device (see para [0007]; “The server computer can detect an event associated with the user device. In some instances, the server computer can receive data from the user device and determine, based upon the data, that the event has occurred”); detecting, based on analyzing the captured data, the user model, and the information from the data source, a monitoring trigger that indicates that monitoring of the user device is to be initiated (see para [0076]; “The server computer can detect an event associated with the user device… Upon detecting the event, the server computer can determine if monitoring of the user device is to be initiated”, see also para [0076]; “the server computer 112 can generate a monitoring alert 142 and transmit the monitoring alert 142 to the user device 102. 
As explained above, the monitoring alert 142 can be used to trigger a prompt at the user device 102 and/or to otherwise obtain input for specifying if monitoring is to be initiate”); triggering the monitoring of the user device for the time period (see para [0010]; “the server computer can issue one or more commands for initiating the monitoring”, see para [0068]; “the monitoring service 110 may generate a monitoring alert 142 to trigger prompting of the user regarding monitoring”), wherein the monitoring comprises obtaining, via a network connection between the user device and an edge device operating on the network, video associated with the user device, wherein the video is streamed to the edge device via the network connection for the time period, and wherein the video captures a portion of an area around the user device for the time period (see para [0011]; “the method can include identifying, by the server computer, monitoring hardware that is to monitor the proximity of the user device, and issuing, by the server computer, a command to the monitoring hardware to initiate monitoring of the proximity of the user device by the monitoring hardware”, see also para [0054]; “the monitoring hardware 132 can include various devices associated with the monitored location 134 such as video devices such as cameras”, and para [0056]; “that the monitoring data 136 can include data or other information generated or sensed by the monitoring hardware 132. 
Thus, the monitoring data 136 can include, for example, digital and/or analog video signals; digital and/or analog audio signals; photograph and/or other image data; streaming video, audio, or image data”, Note: the monitoring hardware is in the device’s proximity i.e., an “edge” endpoint on the network); analyzing, during the time period, the video to determine if a threat is detected, wherein the video is analyzed by the edge device (see para [0010]; “The server computer or other entities (e.g., devices or technicians at a monitoring center) can analyze the monitoring data to determine if any action is to be taken”, see also para [0086]-[0092]; Analyzing the monitoring data to identify suspicious activity, and para [0054]; “the monitoring hardware 132 can include various devices associated with the monitored location 134 such as video devices such as cameras, closed circuit television ("CCTV") devices, and/or other video devices”, Note: the “edge device” aspect is an obvious architectural placement of the monitoring hardware); and if a determination is made that the threat is detected, identifying, by the edge device, another device in proximity to the user device, and triggering delivery of an alert to the other device (see para [0015]; “determining a geographic location of the user device, identifying monitoring hardware to monitor the proximity of the user device, and issuing, by the server computer, a command to the monitoring hardware to initiate monitoring of the proximity of the user device by the monitoring hardware”, see also para [0087]; “The analysis of the monitoring data 136 can include various types of analysis. For example, the server computer 112 and/or other devices or systems can analyze the monitoring data 136 to identify suspect movements. For example, the server computer 112 or other entities can execute algorithms that search detected movements for recognized patterns that indicate suspect or even criminal activity”). However, Barnes et al. 
does not teach obtaining a user model that models behavior associated with the user device, wherein the user model defines, for a user of the user device, times associated with activities performed with the user device, wherein the activities comprise the activity, wherein the times comprise a time period that corresponds to a duration the activity lasts when performed with the user device, and wherein the duration corresponds to a time the activity previously took when performed with the user device; identifying, based on the activity and the time the activity previously took when performed at the user device, a monitoring time comprising the time period; and if a determination is made that the threat is not detected during the time period, triggering termination of the monitoring and deletion of the video by the edge device.
In the same field of endeavor, Schmouker et al. teaches obtaining a user model that models behavior associated with the user device (see claim 15; “A system for monitoring behavior patterns, comprising: a processor configured for monitoring a user activity; a storage location for logging user activity.. said processor logging said activity information in said user profile” Note; it stores and models behavior patterns), wherein the user model defines, for a user of the user device, times associated with activities performed with the user device (see para [0006]; “receiving information about activities of a user and calculating duration of each activity”, see also para [0046]; “the duration of each data can then be evaluated within each day in this example using duration=ending timestamp−starting timestamp”), wherein the activities comprise the activity, wherein the times comprise a time period that corresponds to a duration the activity lasts when performed with the user device (see para [0006]; “calculating duration of each activity. Each activity that exceeds a threshold duration is categorized and labelled accordingly”), and wherein the duration corresponds to a time the activity previously took when performed with the user device (see claim 15; “said processor analyzing said logged activity information and duration such that said activities are categorized accordingly based on their type and duration”); identifying, based on the activity and the time the activity previously took when performed at the user device, a monitoring time comprising the time period (see para [0006]; “receiving information about activities of a user and calculating duration of each activity”, see also para [0015]; “Patterns of behavior can be established over time in a number of ways”, see also para [0023]; “For example when time stamps are used, duration of the activity is calculated as shown in FIG. 2” and para [0028]; “data items that are logged are associated with a user. 
As discussed in FIG. 2, these can be timed and labelled”). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the remotely activated monitoring service of Barnes et al., which detects an event associated with a user device and issues a command to initiate monitoring of the proximity of the user device, in view of the method for detecting behavior patterns of a user of Schmouker et al., in order to generate alerts based on previously recognized behavioral patterns (see para [0006]).
However, Barnes et al. and Schmouker et al. do not teach if a determination is made that the threat is not detected during the time period, triggering termination of the monitoring and deletion of the video by the edge device.
In the same field of endeavor, Freeman et al. teaches if a determination is made that the threat is not detected during the time period, triggering termination of the monitoring and deletion of the video by the edge device (see para [0043]; “Following the recording of the predetermined number of additional frames, the video recording devices ceases to record further frame data”, see also para [0071]; “following the trigger event, and after termination of recording in the second buffer pool, frames…would be preserved within buffers”, and para [0048]; “Upon activation of the purge button, the contents of the circular buffer and any still images that have been captured are erased. Thus, in the event that a user captures images which, for any reason, such user does not desire to retain, they may be erased” Note: these passages disclose termination after the trigger-window capture and a deletion/erasure capability, which maps to ending the monitoring and deleting the video at the edge device). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the remotely activated monitoring service of Barnes et al., which detects an event associated with a user device and issues a command to initiate monitoring of the proximity of the user device, in view of the method for detecting behavior patterns of a user of Schmouker et al. and the compact video image recording device for recording video images before and after a triggering event of Freeman et al., in order to terminate recording after the trigger-window capture and erase video that the user does not desire to retain (see para [0043]).
Regarding claim 3, the rejection of claim 1 is incorporated herein.
Schmouker et al. in the combination further teaches wherein the user model is obtained from the edge device (see para [0011]; “the represented blocks are purely functional entities, which do not necessarily correspond to physically separate entities. Namely, they could be developed in the form of software, hardware, or be implemented in one or several integrated circuits, comprising one or more processor”), wherein the edge device stores the user model, and wherein the user model is generated by capturing data during monitoring of the activities being performed at the user device (see claim 15; “A system for monitoring behavior patterns, comprising: a processor configured for monitoring a user activity; a storage location for logging user activity.. said processor logging said activity information in said user profile” see also para [0015]; “Patterns of behavior can be established over time in a number of ways. FIG. 1, is an example of some of the ways where information about behavior patterns can be collected. The examples provided in FIG. 1 only address a few ways of collecting information, with the understanding that many other methods are available as can be appreciated by those skilled in the art. In the example used in FIG. 1, the few methods shown include gathering of information through user input 110, actuation 120, sensors 130, and on-line resources such as social media 140. Information may already exist and previously collected and stored in a user profile 150”, Note: it stores and models behavior patterns).
Regarding claim 4, the rejection of claim 1 is incorporated herein.
Barnes et al. in the combination further teach wherein the data source comprises a social networking device (see para [0050]; “social networking information associated with a user of the user device 102”), and wherein the information comprises an indication from a social networking user that the threat exists at the user device (see para [0008]; “By prompting users when particular events are detected, the monitoring service can offer proactive monitoring before a user even realizes that a threat or emergency situation exists”).
Regarding claim 6, the rejection of claim 1 is incorporated herein.
Schmouker et al. in the combination further teaches wherein the user model is obtained from the edge device (see para [0011]; “the represented blocks are purely functional entities, which do not necessarily correspond to physically separate entities. Namely, they could be developed in the form of software, hardware, or be implemented in one or several integrated circuits, comprising one or more processor”), wherein the edge device stores the user model, and wherein the user model is generated by capturing data during monitoring of the activities being performed at the user device (see claim 15; “A system for monitoring behavior patterns, comprising: a processor configured for monitoring a user activity; a storage location for logging user activity.. said processor logging said activity information in said user profile” see also para [0015]; “Patterns of behavior can be established over time in a number of ways. FIG. 1, is an example of some of the ways where information about behavior patterns can be collected. The examples provided in FIG. 1 only address a few ways of collecting information, with the understanding that many other methods are available as can be appreciated by those skilled in the art. In the example used in FIG. 1, the few methods shown include gathering of information through user input 110, actuation 120, sensors 130, and on-line resources such as social media 140. Information may already exist and previously collected and stored in a user profile 150”, Note: it stores and models behavior patterns).
Regarding claim 7, the rejection of claim 1 is incorporated herein.
Barnes et al. in the combination further teach wherein the data source comprises a social networking device (see para [0050]; “The other data 126 can include, but is not limited to, social networking information associated with a user of the user device 102”), wherein a data stream from the user device is streamed to the social networking device (see para [0056]; “Thus, the monitoring data 136 can include, streaming video, audio, or image data”), and wherein the information comprises an indication that the threat exists at the user device based on a social media user viewing the data stream (see para [0067]; “the event detected in operation 202 can include, for example.. communications between the user device 102 and one or more devices, systems, networks, or the like”).
Regarding claim 8, the scope of claim 1 is fully incorporated herein, and the rejection analysis of claim 1 is equally applicable.
Regarding claim 11, the rejection of claim 8 is incorporated herein.
Barnes et al. in the combination further teach wherein triggering delivery of the alert comprises: identifying a geographic location of the user device; identifying, based on the geographic location, a plurality of devices that are located in proximity to the user device, the plurality of devices comprising the other device; and triggering the delivery of the alert to the other device (see para [0044]; “Thus, the event definitions 120 can include, for example, data defining geographic locations at which monitoring is to be initiated or terminated, data defining times or dates at which monitoring is to be initiated or terminated, location demographics and/or other considerations that, if met by a current location of the user device 102 and/or a user associated with the user device 102, are to initiate and/or terminate monitoring, or the like. The event definitions 120 also can include data that defines other situations or circumstances in which monitoring is to be initiated and/or terminated. [0045] The other situations or circumstances can include, for example, a number of users or devices in a proximity of the user device 102, a geographic location of the user device 102, a particular frequency and/or volume of audio signals at or near the user device 102”).
Regarding claim 12, the rejection of claim 11 is incorporated herein.
Barnes et al. in the combination further teach further comprising: in response to determining that the alert should be cancelled, cancelling the alert (see para [01134]; “The monitoring alert screen 412 also can include a UI control 418 for indicating that a user or other entity wishes to dismiss or ignore the monitoring alert 142”), wherein determining that the alert should be cancelled comprises determining that the other device is no longer in proximity to the user device (see para [0009]; “The server computer can identify monitoring hardware and/or software in a proximity of the user device. As used herein, monitoring hardware may be deemed to be "within a proximity" of the user device if the user device is within a detection range of the monitoring hardware. Because various sorts of monitoring hardware may be far away from the user device (e.g., satellites, drones, or other space- or air-located devices), the "proximity" of monitoring hardware can be determined by detection, not spatial relationships. The server computer can be configured to search for monitoring hardware capable of monitoring the user device until the monitoring hardware is identified or the user device no longer requests the monitoring”).
Regarding claim 13, the rejection of claim 11 is incorporated herein.
Barnes et al. in the combination further teach further comprising: in response to determining that the alert should be cancelled, cancelling the alert, wherein determining that the alert should be cancelled (see para [01134]; “The monitoring alert screen 412 also can include a UI control 418 for indicating that a user or other entity wishes to dismiss or ignore the monitoring alert 142”) comprises receiving a notification that help is no longer needed at the user device (see para [0140]; “According to various embodiments, the display 702 can be configured to display monitoring information, monitoring alerts, account information, various graphical user interface ("GUI") elements, text, images, video, virtual keypads and/or keyboards, messaging data, notification messages”).
Regarding claim 14, the scope of claim 1 is fully incorporated herein, and the rejection analysis of claim 1 is equally applicable.
Regarding claim 17, the rejection of claim 14 is incorporated herein.
Barnes et al. in the combination further teach wherein the data source comprises a social networking device (see para [0050]; “social networking information associated with a user of the user device 102”), and wherein the information comprises an indication that the threat exists at the user device (see para [0008]; “By prompting users when particular events are detected, the monitoring service can offer proactive monitoring before a user even realizes that a threat or emergency situation exists”).
Regarding claim 19, the rejection of claim 14 is incorporated herein.
Schmouker et al. in the combination further teaches wherein the user model is obtained from the edge device (see para [0011]; “the represented blocks are purely functional entities, which do not necessarily correspond to physically separate entities. Namely, they could be developed in the form of software, hardware, or be implemented in one or several integrated circuits, comprising one or more processor”), wherein the edge device stores the user model, and wherein the user model is generated by capturing data during monitoring of the activities being performed at the user device (see claim 15; “A system for monitoring behavior patterns, comprising: a processor configured for monitoring a user activity; a storage location for logging user activity.. said processor logging said activity information in said user profile” see also para [0015]; “Patterns of behavior can be established over time in a number of ways. FIG. 1, is an example of some of the ways where information about behavior patterns can be collected. The examples provided in FIG. 1 only address a few ways of collecting information, with the understanding that many other methods are available as can be appreciated by those skilled in the art. In the example used in FIG. 1, the few methods shown include gathering of information through user input 110, actuation 120, sensors 130, and on-line resources such as social media 140. Information may already exist and previously collected and stored in a user profile 150”, Note: it stores and models behavior patterns).
Claims 2, 5, 9-10, 15-16, 18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Barnes et al. and Schmouker et al. in view of Freeman et al. as in claim 1 above, and further in view of Laska et al. (US 20170046574 A1).
Regarding claim 2, the rejection of claim 1 is incorporated herein. The combination of Barnes et al., Schmouker et al., and Freeman et al. does not teach wherein the computer-executable instructions, when executed by the processor, cause the processor to perform operations further comprising: determining, during the monitoring, that the monitoring has not been deactivated at the user device; determining, during the monitoring, that the time period has lapsed; and analyzing, by the edge device, the video in response to determining that the time period has lapsed without the monitoring being deactivated at the user device, wherein the video is analyzed to determine if a threat at the user device is detected based on the video.
In the same field of endeavor, Laska et al. teaches wherein the computer-executable instructions, when executed by the processor, cause the processor to perform operations further comprising: determining, during the monitoring, that the monitoring has not been deactivated at the user device (see para [0007]; “while receiving video information from one or more cameras, the video information including a video stream”, see also para [0020]; “the method further includes checking, by the first categorizer, for additional segments of the video stream until a motion end event occurs”); determining, during the monitoring, that the time period has lapsed (see para [0015]; “in accordance with a determination that a predefined amount of time has lapsed”, see also para [0243]; “the motion end information is based on a change in the motion detected within the video stream. The motion end information is, optionally, generated when the amount of motion detected within the video stream falls below a threshold amount (e.g., the dotted line shown in the graphs of FIG. 11C) or declines steeply”); and analyzing, by the edge device, the video in response to determining that the time period has lapsed without the monitoring being deactivated at the user device (see para [0244]; “Once the motion end information is obtained, the final segment is processed and categorized”), wherein the video is analyzed to determine if a threat at the user device is detected based on the video (see para [0244]; “In some implementations, an alert is generated and sent to the client device”, see also para [0220]; “the video data is initially processed at the video source 522”). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the remotely activated monitoring service of Barnes et al., which detects an event associated with a user device and issues a command to initiate monitoring of the proximity of the user device, in view of the method for detecting behavior patterns of a user of Schmouker et al., further in view of the compact video image recording device for recording video images before and after a triggering event of Freeman et al., and the method for categorizing motion events of Laska et al., in order to reduce latency and bandwidth and enable alerts when connectivity is spotty (see para [0007]).
Regarding claim 5, the rejection of claim 1 is incorporated herein.
Barnes et al. in the combination further teach wherein the computer-executable instructions, when executed by the processor, cause the processor to perform operations further comprising: detecting selection, at the user device, of a control to begin monitoring of the user device after the time period if the monitoring is not deactivated at the user device before the time period lapses (see para [0052]; “the event detected in operation 202 can include, for example…selection of a panic button or other UI control for requesting monitoring”, see also para [0098]; “The monitoring service setting 404A can be used to turn on or turn off the use of event-based monitoring. In the illustrated embodiment, a user can select or de-select the UI control 406A to activate or deactivate the use of event-based monitoring. It can be appreciated from the description herein that the selection or de-selection of the UI control 406A can activate or deactivate the monitor application 108 and/or the monitoring service 110 from using events occurring at the user device 102 and/or events associated with one or more users of the user device 102 to generate monitoring requests 128 and/or to configure or change settings associated with the monitor application 108 and/or the monitoring service 110”).
Laska et al. in the combination further teaches determining that the time period lapsed (see para [0015]; “in accordance with a determination that a predefined amount of time has lapsed”); and continuing the monitoring after the time period in response to determining that the monitoring was not deactivated at the user device before the time period lapsed (see Fig. 14B elements 1426-1428, para [0024]; “while receiving the video stream that includes the second motion event candidate, segmenting the video stream into a second plurality of segments”, see also para [0325]; “the server system (1408): (1) identifies a third location in the video stream; (2) in accordance with a determination that a predefined amount of time has lapsed, identifies a fourth location in the video stream; and (3) generates a segment corresponding to the portion of the video stream between the third location and the fourth location. In accordance with some implementations, the server system 508 in FIG. 11A utilizes the event processor to segment the video stream into segments with predetermined durations”, and para [0120] discloses the user device aspect, Note: this implies continued monitoring beyond the first window, continued only if monitoring is not deactivated). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the remotely activated monitoring service of Barnes et al., which detects an event associated with a user device and issues a command to initiate monitoring of the proximity of the user device, in view of the method for detecting behavior patterns of a user of Schmouker et al., further in view of the compact video image recording device for recording video images before and after a triggering event of Freeman et al., and the method for categorizing motion events of Laska et al., in order to accurately identify and categorize meaningful segments of a video stream in an efficient, intuitive, and convenient manner (see para [0024]).
Regarding claim 9, the rejection of claim 8 is incorporated herein.
Laska et al. in the combination further teaches wherein the computer-executable instructions, when executed by the processor, cause the processor to perform operations further comprising: determining, during the monitoring, that the monitoring has not been deactivated at the user device (see para [0007]; “while receiving video information from one or more cameras, the video information including a video stream”, see also para [0020]; “the method further includes checking, by the first categorizer, for additional segments of the video stream until a motion end event occurs”); determining, during the monitoring, that the time period has lapsed (see para [0015]; “in accordance with a determination that a predefined amount of time has lapsed”, see also para [0243]; “the motion end information is based on a change in the motion detected within the video stream. The motion end information is, optionally, generated when the amount of motion detected within the video stream falls below a threshold amount (e.g., the dotted line shown in the graphs of FIG. 11C) or declines steeply”); and analyzing, by the edge device, the video in response to determining that the time period has lapsed without the monitoring being deactivated at the user device (see para [0244]; “Once the motion end information is obtained, the final segment is processed and categorized”), wherein the video is analyzed to determine if a threat at the user device is detected based on the video (see para [0244]; “In some implementations, an alert is generated and sent to the client device”, see also para [0220]; “the video data is initially processed at the video source 522”). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the remotely activated monitoring service of Barnes et al., which detects an event associated with a user device and issues a command to initiate monitoring of the proximity of the user device, in view of the method for detecting behavior patterns of a user of Schmouker et al., further in view of the compact video image recording device for recording video images before and after a triggering event of Freeman et al., and the method for categorizing motion events of Laska et al., in order to reduce latency and bandwidth and enable alerts when connectivity is spotty (see para [0007]).
Regarding claim 10, the rejection of claim 8 is incorporated herein.
Barnes et al. in the combination further teach wherein the computer-executable instructions, when executed by the processor, cause the processor to perform operations further comprising: detecting selection, at the user device, of a control to begin monitoring of the user device after the time period if the monitoring is not deactivated at the user device before the time period lapses (see para [0052]; “the event detected in operation 202 can include, for example…selection of a panic button or other UI control for requesting monitoring”, see also para [0098]; “The monitoring service setting 404A can be used to turn on or turn off the use of event-based monitoring. In the illustrated embodiment, a user can select or de-select the UI control 406A to activate or deactivate the use of event-based monitoring. It can be appreciated from the description herein that the selection or de-selection of the UI control 406A can activate or deactivate the monitor application 108 and/or the monitoring service 110 from using events occurring at the user device 102 and/or events associated with one or more users of the user device 102 to generate monitoring requests 128 and/or to configure or change settings associated with the monitor application 108 and/or the monitoring service 110”).
Laska et al. in the combination further teach determining that the time period lapsed (see para [0015]; “in accordance with a determination that a predefined amount of time has lapsed”); and continuing the monitoring after the time period in response to determining that the monitoring was not deactivated at the user device before the time period lapsed (see Fig. 14B elements 1426-1428, para [0024]; “while receiving the video stream that includes the second motion event candidate, segmenting the video stream into a second plurality of segments”, see also para [0325]; “the server system (1408): (1) identifies a third location in the video stream; (2) in accordance with a determination that a predefined amount of time has lapsed, identifies a fourth location in the video stream; and (3) generates a segment corresponding to the portion of the video stream between the third location and the fourth location. In accordance with some implementations, the server system 508 in FIG. 11A utilizes the event processor to segment the video stream into segments with predetermined durations”. Note: this implies continued monitoring beyond the first time window, with monitoring continuing only if it has not been deactivated). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the remotely activated monitoring service of Barnes et al., which detects an event associated with a user device and issues a command to initiate monitoring of the proximity of the user device, in view of the method for detecting behavior patterns of a user of Schmouker et al., further in view of the compact video image recording device for recording video images before and after a triggering event of Freeman et al., and the method for categorizing motion events of Laska et al., in order to accurately identify and categorize meaningful segments of a video stream in an efficient, intuitive, and convenient manner (see para [0024]).
Regarding claim 15, the rejection of claim 14 is incorporated herein.
Laska et al. in the combination further teach wherein the computer-executable instructions, when executed by the processor, cause the processor to perform operations further comprising: determining, during the monitoring, that the monitoring has not been deactivated at the user device (see para [0007]; “while receiving video information from one or more cameras, the video information including a video stream”, see also para [0020]; “the method further includes checking, by the first categorizer, for additional segments of the video stream until a motion end event occurs”); determining, during the monitoring, that the time period has lapsed (see para [0015]; “in accordance with a determination that a predefined amount of time has lapsed”, see also para [0243]; “the motion end information is based on a change in the motion detected within the video stream. The motion end information is, optionally, generated when the amount of motion detected within the video stream falls below a threshold amount (e.g., the dotted line shown in the graphs of FIG. 11C) or declines steeply”); and analyzing, by the edge device, the video in response to determining that the time period has lapsed without the monitoring being deactivated at the user device (see para [0244]; “Once the motion end information is obtained, the final segment is processed and categorized”), wherein the video is analyzed to determine if a threat at the user device is detected based on the video (see para [0244]; “In some implementations, an alert is generated and sent to the client device”, see also para [0220]; “the video data is initially processed at the video source 522”). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the remotely activated monitoring service of Barnes et al., which detects an event associated with a user device and issues a command to initiate monitoring of the proximity of the user device, in view of the method for detecting behavior patterns of a user of Schmouker et al., further in view of the compact video image recording device for recording video images before and after a triggering event of Freeman et al., and the method for categorizing motion events of Laska et al., in order to reduce latency and bandwidth usage and enable alerts when connectivity is spotty (see para [0007]).
Regarding claim 16, the rejection of claim 14 is incorporated herein.
Laska et al. in the combination further teach wherein the video is analyzed at the edge device by applying, to the video, machine learning and artificial intelligence to determine if the threat is detected in the video (see para [0258]; “automatically adjust (e.g., create and retire) event categories through machine learning based on the actual video data that is received over time”, see also para [0220]; “After video data is captured at the video source 522 (1113), the video data is processed to determine if any potential motion event candidates are present in the video stream”), wherein the edge device stores the user model and wherein the user model is generated by monitoring the activities being performed at the user device (see para [0197]; “Device-side module 932, which provides device-side functionalities for device control, data processing and data review, including.. Local data storage database 9342 for selectively storing raw or processed data associated with the smart device 204”, see also para [0380]; “the event categorizer retrieves the one or more event categorization models from a database, such as event categorization models database 1108”). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the remotely activated monitoring service of Barnes et al., which detects an event associated with a user device and issues a command to initiate monitoring of the proximity of the user device, in view of the method for detecting behavior patterns of a user of Schmouker et al., further in view of the compact video image recording device for recording video images before and after a triggering event of Freeman et al., and the method for categorizing motion events of Laska et al., in order to reduce latency and bandwidth usage and enable alerts when connectivity is spotty (see para [0258]).
Regarding claim 18, the rejection of claim 5 is incorporated herein.
Barnes et al. in the combination further teach wherein the computer-executable instructions, when executed by the processor, cause the processor to perform operations further comprising: detecting selection, at the user device, of a control to begin monitoring of the user device after the time period if the monitoring is not deactivated at the user device before the time period lapses (see para [0052]; “the event detected in operation 202 can include, for example…selection of a panic button or other UI control for requesting monitoring”, see also para [0098]; “The monitoring service setting 404A can be used to turn on or turn off the use of event-based monitoring. In the illustrated embodiment, a user can select or de-select the UI control 406A to activate or deactivate the use of event-based monitoring. It can be appreciated from the description herein that the selection or de-selection of the UI control 406A can activate or deactivate the monitor application 108 and/or the monitoring service 110 from using events occurring at the user device 102 and/or events associated with one or more users of the user device 102 to generate monitoring requests 128 and/or to configure or change settings associated with the monitor application 108 and/or the monitoring service 110”).
Laska et al. in the combination further teach determining that the time period lapsed (see para [0015]; “in accordance with a determination that a predefined amount of time has lapsed”); and continuing the monitoring after the time period in response to determining that the monitoring was not deactivated at the user device before the time period lapsed (see Fig. 14B elements 1426-1428, para [0024]; “while receiving the video stream that includes the second motion event candidate, segmenting the video stream into a second plurality of segments”, see also para [0325]; “the server system (1408): (1) identifies a third location in the video stream; (2) in accordance with a determination that a predefined amount of time has lapsed, identifies a fourth location in the video stream; and (3) generates a segment corresponding to the portion of the video stream between the third location and the fourth location. In accordance with some implementations, the server system 508 in FIG. 11A utilizes the event processor to segment the video stream into segments with predetermined durations”. Note: this implies continued monitoring beyond the first time window, with monitoring continuing only if it has not been deactivated).
Regarding claim 20, the rejection of claim 14 is incorporated herein.
Laska et al. in the combination further teach wherein the computer-executable instructions, when executed by the processor, cause the processor to perform operations further comprising: determining, during the monitoring, that the monitoring has not been deactivated (see para [0007]; “while receiving video information from one or more cameras, the video information including a video stream”, see also para [0020]; “the method further includes checking, by the first categorizer, for additional segments of the video stream until a motion end event occurs”); determining, during the monitoring, that the time period has lapsed (see para [0015]; “in accordance with a determination that a predefined amount of time has lapsed”, see also para [0243]; “the motion end information is based on a change in the motion detected within the video stream. The motion end information is, optionally, generated when the amount of motion detected within the video stream falls below a threshold amount (e.g., the dotted line shown in the graphs of FIG. 11C) or declines steeply”); and analyzing, by the edge device, the video in response to determining that the time period has lapsed without the monitoring being deactivated (see para [0244]; “Once the motion end information is obtained, the final segment is processed and categorized”), wherein the video is analyzed to determine if a threat at the user device is detected based on the video (see para [0244]; “In some implementations, an alert is generated and sent to the client device”, see also para [0220]; “the video data is initially processed at the video source 522”). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the remotely activated monitoring service of Barnes et al., which detects an event associated with a user device and issues a command to initiate monitoring of the proximity of the user device, in view of the method for detecting behavior patterns of a user of Schmouker et al., further in view of the compact video image recording device for recording video images before and after a triggering event of Freeman et al., and the method for categorizing motion events of Laska et al., in order to reduce latency and bandwidth usage and enable alerts when connectivity is spotty (see para [0007]).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WINTA GEBRESLASSIE whose telephone number is (571)272-3475. The examiner can normally be reached Monday-Friday 9:00-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Bee can be reached at 571-270-5180. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/WINTA GEBRESLASSIE/ Examiner, Art Unit 2677