Prosecution Insights
Last updated: April 19, 2026
Application No. 18/202,384

METHOD AND SYSTEM FOR MONITORING ACTIVITIES AND EVENTS IN REAL-TIME THROUGH SELF-ADAPTIVE AI

Status: Non-Final OA (§103)
Filed: May 26, 2023
Examiner: ZEWEDE, ASTEWAYE GETTU
Art Unit: 2481
Tech Center: 2400 — Computer Networks
Assignee: Skylark Labs Inc.
OA Round: 3 (Non-Final)
Grant Probability: 80% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 7m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 80% — above average (36 granted / 45 resolved; +22.0% vs TC avg)
Interview Lift: +37.5% on resolved cases with interview (strong)
Typical Timeline: 2y 7m average prosecution; 18 applications currently pending
Career History: 63 total applications across all art units

Statute-Specific Performance

§101: 0.7% (-39.3% vs TC avg)
§103: 67.0% (+27.0% vs TC avg)
§102: 10.4% (-29.6% vs TC avg)
§112: 10.4% (-29.6% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 45 resolved cases
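The headline figures above are simple ratios of the career data. A minimal sketch in Python, under two stated assumptions: that the interview lift is the relative change in allow rate for cases with vs. without an interview (the reported 99% with-interview figure back-solves to a 72% baseline), and that the Tech Center average implied by every statute row is the same 40%:

```python
# Career allow rate: 36 granted out of 45 resolved cases.
allow_rate = 36 / 45
print(f"Career allow rate: {allow_rate:.0%}")  # 80%

# Interview lift (assumption: relative change in allow rate).
# 99% with an interview over a back-solved 72% without one:
# 0.72 * 1.375 = 0.99, i.e. a +37.5% lift.
with_interview, without_interview = 0.99, 0.72
lift = with_interview / without_interview - 1
print(f"Interview lift: {lift:+.1%}")  # +37.5%

# Statute-specific deltas: each "vs TC avg" figure is the examiner's
# rejection rate minus the Tech Center average estimate (black line).
tc_avg = 0.40  # implied by every row, e.g. 67.0% - 27.0% = 40.0%
for statute, rate in {"§101": 0.007, "§103": 0.670,
                      "§102": 0.104, "§112": 0.104}.items():
    print(f"{statute}: {rate:.1%} ({rate - tc_avg:+.1%} vs TC avg)")
```

The 72% without-interview baseline and the uniform 40% TC average are inferences from the displayed numbers, not values stated on the page.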

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status: The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA. This office action is in response to the amendment filed on 01/02/2026. Claims 1, 3-5, 7-9, 11-13, 15-17, and 19-20 have been examined.

Information Disclosure Statement: The information disclosure statement (IDS) was submitted on 05/26/2023. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Response to Amendments/Arguments: Applicant's Amendment filed on January 02, 2026, has been entered and made of record. Claims 1, 9, and 17 have been amended, and claims 2, 6, 10, 14, and 18 have been canceled. Accordingly, claims 1, 3-5, 7-9, 11-13, 15-17, and 19-20 remain pending.

Response to Arguments: Applicant's arguments, as set forth in the Remarks on pages 11-18 filed January 02, 2026, with respect to the rejection(s) of claims 1, 9, 17, 2, 6, 10, 14, and 18, have been fully considered in view of the amendment to the claims and are found somewhat persuasive. Accordingly, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made in view of Kim, D., Kim, H., Mok, Y., & Paik, J. (2021), "Real-Time Surveillance System for Analyzing Abnormal Behavior of Pedestrians," Applied Sciences, 11(13), 6153, https://doi.org/10.3390/app11136153, hereinafter "Kim".

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C.
103, which forms the basis for all obviousness rejections set forth in this Office action:

"A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made."

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 3, 5, 7-9, 11, 13, 15-17, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Rodenas et al. (US 20180300557 A1), hereinafter "Rodenas", in view of Kim, D., Kim, H., Mok, Y., & Paik, J. (2021), "Real-Time Surveillance System for Analyzing Abnormal Behavior of Pedestrians," Applied Sciences, 11(13), 6153, https://doi.org/10.3390/app11136153, hereinafter "Kim".

Regarding Claim 1 (Rodenas-Kim): Rodenas discloses a method for monitoring activities and events in real-time (Rodenas, Fig. 1A, [0016]: "… a number of people 102 are located in an area including a set of video cameras 104, 106…"), the method comprising: receiving video data of an area from each of one or more cameras, for each frame of the plurality of frames (Rodenas discloses video analysis via a surveillance system capturing video input and processing it in frames; see ¶ [0014], ¶ [0020]).
for each frame of the plurality of frames, generating in real-time, by an Artificial Intelligence (AI) model, a space-time-behaviour dataset corresponding to the frame, wherein the space-time-behaviour dataset comprises spatial data, temporal data, and behavioral data corresponding to the area (Rodenas, [0020]: "The location and variation of these objects can also be monitored over time, for correlation as well as to detect patterns of behavior or occurrences involving those objects. This can include, for example, the behavior of a person over a period of time..." Rodenas discloses AI-based real-time processing of each frame, extracting spatial (location), temporal (time-based activity), and behavioral (facial expressions and emotional indicators) data; see ¶ [0020], ¶ [0021], ¶ [0024]), and wherein the behavioural data corresponds to actions and facial expressions of one or more humans present in the frame (Rodenas, [0022]: "… want to be able to determine viewers' reactions to the campaign. Accordingly, video data can be captured and analyzed to determine an overall mood of those viewers, as well as how many were happy or angry, or had other specific emotions with respect to the content." [0023]: "…frames of video data can be captured and analyzed in sequence over time to attempt to determine changes in mood or more accurately determine mood by analyzing the same face over a period of time"); … determining in real-time, by the AI model, at least one of a set of cause parameters corresponding to the cause or a set of event parameters corresponding to the event, based on the space-time-behaviour dataset (Rodenas discloses generating a suspicion score in real-time based on various behavioral indicators such as facial expressions, mood, actions, and behavior. Each of these observed elements is assigned a corresponding score, which is used by the AI model to determine when an action should be taken.
This process includes analyzing the behavioral data and evaluating whether the computed suspicion score meets predefined thresholds. Thus, the AI model determines in real-time at least one parameter (e.g., suspicion level) corresponding to the observed cause or event based on the underlying behavioral dataset. Rodenas, ¶¶ [0024]-[0025].); comparing in real-time, by the AI model, the set of cause parameters with corresponding cause parameters of the plurality of predefined causes, and the set of event parameters with corresponding event parameters of the plurality of predefined events (Rodenas, Fig. 3, 324; ¶ [0027]: "…the object data from the video analyzer can then be passed to a behavior analyzer 322, or other such system or service, can compare the data, and prior data for that object, … against one or more behavioral patterns or criteria…" That is, the object data from the video analyzer is passed to a behavior analyzer, which uses a behavior repository (Fig. 3, 324) storing known or learned behaviors. The system ("behavior analyzer 322") then compares incoming behavioral data against the stored behaviors to determine processing outcomes, such as whether the behavior meets criteria for a certain event or rule.); determining in real-time, by the AI model, whether at least one of the cause or the event corresponds to suspicious activity based on the comparison (Rodenas discloses in ¶ [0084] that object result data is "…compared against data from prior frames…" to determine whether the object exhibits "…suspicious or abnormal behavior…". The system checks if the object falls "…outside a set of expected behaviors…", applies "…threshold and confidence values…", and then may trigger actions such as "…notifications, tracking, or alerts…", thereby making a real-time determination of suspicious activity based on the comparison. See ¶ [0084].)
; and Rodenas does not explicitly disclose identifying in real-time, by the AI model, at least one of: an event from a plurality of predefined events, or an associated cause of an event from a plurality of predefined causes, based on the space-time-behavior dataset, wherein each of the plurality of predefined causes is associated with a predefined space-time-behavior dataset; storing in real-time, by the AI model, the plurality of predefined events and the associated plurality of predefined causes in at least one repository, wherein the at least one repository comprises a geolocation-specific library corresponding to each of a plurality of suspicious activities and a shared signature library, wherein the geolocation-specific library comprises the predefined space-time-behavior dataset corresponding to a geolocation, and wherein the shared signature library comprises the predefined space-time-behavior dataset corresponding to each of a set of geolocations. However, in the same field of endeavor, Kim discloses the following more explicitly: identifying in real-time, by the AI model (Kim, Introduction (page 1 of 16): "We proposed a novel abnormal behavior analysis method…by merging detection, tracking, and action recognition algorithms"; see also Section 3, "Proposed Method"), at least one of: an event from a plurality of predefined events, or an associated cause of an event from a plurality of predefined causes, based on the space-time-behavior dataset, wherein each of the plurality of predefined causes is associated with a predefined space-time-behavior dataset (Kim, Sections 3.3, 3.4, 4.1, and Table 5: Kim discloses identifying, in real time, abnormal events and associated causes from a plurality of predefined events based on predefined space-time-behavior datasets, where each predefined cause is associated with defined spatial and temporal behavior criteria.
Section 3.3 – Intrusion and Loitering Abnormal Behavior Judgment Algorithm (page 7 of 16): "The intrusion and loitering detection process first checks if an interesting object enters the pre-defined ROI. A pedestrian's action is classified as loitering if one or more people enter a pre-specified ROI for more than 10 seconds. KISA dataset requires 10 s for authentication of loitering, but user can change the time. An action is classified as intrusion if the entire body of one or more people enter a pre-specified ROI. To analyze the abnormal situation for real-time streaming video using a video transmission server, we developed an algorithm that judges the intrusion and loitering situation based on the referenced evaluation criteria only with object tracking information. The coordinate information of the intrusion and roaming area of the image to be determined is obtained from a predefined Extensible Markup Language (XML) file, and the ROI according to the coordinates is set in the input image. Figure 7 shows two ROIs of intrusion and loitering. The green ROI represents the object detection area. This area is predefined, such as intrusion, loitering ROI, and no object is detected outside of this area." See Figure 5 below.) [Image omitted: Kim, Figure 5 (greyscale)] storing in real-time, by the AI model, the plurality of predefined events and the associated plurality of predefined causes in at least one repository (Kim teaches that predefined abnormal behaviors (events/causes) are defined in advance, modeled, and used during real-time operation. Section 3, "Proposed Method" (page 3 of 16), discloses a predefined set of abnormal behaviors, including intrusion, loitering, fall-down, and violence, which serve as predefined events and corresponding causes.
Sections 3.3 and 3.4 disclose that predefined behaviors and causes are maintained and reused by the abnormal behavior analysis framework during real-time processing, which inherently requires storage in at least one repository accessible to the AI model. Kim, Section 3.2 (page 6 of 16): "…The abnormal behavior analysis module takes the stored image as an input to analyze the abnormal behavior." Accordingly, Kim discloses storing predefined events (abnormal behaviors) and their associated causes (behavior models/criteria) for real-time use by the AI model.) wherein the at least one repository comprises a geolocation-specific library corresponding to each of a plurality of suspicious activities and a shared signature library, wherein the geolocation-specific library comprises the predefined space-time-behaviour dataset corresponding to a geolocation, and wherein the shared signature library comprises the predefined space-time-behaviour dataset corresponding to each of a set of geolocations (Kim teaches a geolocation-specific library through location-dependent abnormal-behavior definitions. Specifically, Section 3.3, with Figure 7 (ROIs for intrusion and loitering experimental results), discloses that intrusion and loitering are determined based on predefined ROIs tied to specific camera locations. The ROI definition and the corresponding abnormal-behavior criteria differ by monitored location (e.g., entrances, corridors, outdoor areas), meaning the abnormal-behavior dataset is specific to a given geolocation. Figure 12 (Intrusion/Loitering Experiment) further illustrates this concept by showing that intrusion and loitering detections rely on ROIs that vary by scene and camera location. Thus, Kim teaches a geolocation-specific library comprising the predefined space-time-behavior dataset corresponding to a particular location and suspicious activities such as intrusion and loitering. Kim also teaches a shared signature library.
As disclosed in Section 3.4 (Fall-Down and Violence Abnormal Behavior Judgment Algorithm) and illustrated in Figure 9, fall-down and violence behaviors are trained using general spatial and temporal features and are applied across multiple scenes and camera locations. The same fall-down and violence behavior signatures are reused across multiple geolocations, indicating that these behavior signatures are not location-specific but are instead shared across multiple geolocations. Thus, Kim teaches a shared signature library comprising predefined space-time-behaviour datasets corresponding to a set of geolocations. See Figure 12 below.) [Image omitted] Figure 12. Intrusion/Loitering Experimental Results: (a) Intrusion Experimental Results, (b) Loitering Experimental Results. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the application to modify the teachings of Rodenas with Kim to create the system of Rodenas as outlined above, so as to incorporate "an event from a plurality of predefined events, or an associated cause of an event from a plurality of predefined causes, based on the space-time-behaviour dataset, wherein each of the plurality of predefined causes is associated with a predefined space-time-behaviour dataset and storing in real-time, by the AI model, the plurality of predefined events and the associated plurality of predefined causes in at least one repository, wherein the at least one repository comprises a geolocation-specific library corresponding to each of a plurality of suspicious activities and a shared signature library, wherein the geolocation-specific library comprises the predefined space-time-behaviour dataset corresponding to a geolocation, and wherein the shared signature library comprises the predefined space-time-behaviour dataset corresponding to each of a set of geolocations." as suggested by Kim.
The reasoning is to "improve the overall performance of the detection network" (Kim, Section 4.1). Note: The motivation that was utilized in the rejection of claim 1 applies equally as well to claims 3, 5, 7-9, 11, 13, 15-17, and 19-20.

Regarding Claims 3, 11, and 19 (Rodenas-Kim): Rodenas-Kim discloses 3. (Currently Amended) The method of claim 1 [[2]], further comprising, when the at least one of the cause or the event is determined as a suspicious activity, automatically updating the at least one repository with the set of cause parameters and the set of event parameters. (Rodenas, [0084]: "… the system may be used for purposes such as mood determination or tracking rather than security, so the action may be to update a data repository or change an aspect of the content triggering the emotion, etc. The determined action(s) then can be performed 922 as appropriate.")

Regarding Claims 5 and 13 (Rodenas-Kim): Rodenas-Kim discloses 5. (Currently Amended) The method of claim 1 [[2]], wherein the at least one repository comprises a signature library corresponding to each of a plurality of suspicious activities. (Kim, Section 3.4: Kim teaches a shared signature library. As disclosed in Section 3.4 (Fall-Down and Violence Abnormal Behavior Judgment Algorithm) and illustrated in Figure 9, fall-down and violence behaviors are trained using general spatial and temporal features and are applied across multiple scenes and camera locations. The same fall-down and violence behavior signatures are reused across multiple geolocations, indicating that these behavior signatures are not location-specific but are instead shared across multiple geolocations. Thus, Kim teaches a shared signature library comprising predefined space-time-behavior datasets corresponding to a set of geolocations.)

Regarding Claims 7, 15, and 20 (Rodenas-Kim): Rodenas-Kim discloses 7.
(Currently Amended) The method of claim 1 [[2]], further comprising: identifying in real-time, by the AI model, one or more humans present in the frame (Rodenas, [0082]: "If an evaluation is to be performed, a task-based resource can be allocated 810 to process the individual video frame. In some embodiments, the type of resource allocated can depend at least in part upon the type of processing to perform, and the type of processing to perform can depend at least in part upon the results of the pre-processing. At least one type of recognition analysis can be performed 812 on the video frame using the task-based resource. This can include, for example, object recognition, feature recognition, facial recognition, and the like… The result can be any appropriate result data in any appropriate format as discussed elsewhere herein, as may include information about a type of object of interest, a threat level, a confidence level, location data, and the like … The actions can include, for example, providing identifying information on a display monitor…"); when the at least one of the cause or the event is determined as a suspicious activity (Rodenas, [0079]: "FIG. 7B illustrates another example interface 750 that can be provided in accordance with various embodiments. In this example, information is provided specifically for a person who has been flagged as suspicious. This might be provided automatically upon such detection..." See Fig. 7B), storing identification details of the one or more humans in the at least one repository (Rodenas, [0079]: "information is provided for the person of interest, including a live view … types of suspicious activity or behavior detected, as well as a current threat level, score, or assessment."); and notifying in real-time, an administrator at a subsequent time instance when in a subsequent frame: (Rodenas, [0082]: "…
Once processing of the frame has completed…the actions can include, for example, providing identifying information on a display monitor, notifying security personnel, logging occurrence data, or calling an external security source, among other such options.") the one or more humans are identified through the identification details, and at least one of a premature cause or a cause associated with a suspicious activity is identified. (Rodenas, [0083]: "FIG. 9 illustrates an example process 900 for determining actions to take for detected objects or behaviors that can be utilized in accordance with various embodiments. In this example, recognition analysis result data is obtained 902 for a video frame captured using a camera of a video surveillance system. The result data can be analyzed to determine 904 whether there is at least one object of interest represented in the video. This can include, for example, an object of an identified type, a person with a particular type of emotion demonstrated, a person performing a specific type of action, or a person having previously been identified as performing a suspicious activity, among other such options…")

Regarding Claims 8 and 16 (Rodenas-Kim): Rodenas-Kim discloses 8. The method of claim 1, further comprising, notifying in real-time, an administrator of the determined suspicious activity when the at least one of the cause or the event is determined as a suspicious activity. (Rodenas discloses that upon determining suspicious behavior, the system may take actions such as "generating a notification", "contacting security personnel," or "triggering an alert," thereby providing real-time notification to an administrator or authority based on the behavior analysis. See Rodenas ¶ [0084].)

Regarding Claim 9 (Rodenas-Kim): Rodenas discloses 9. A system for monitoring activities and events in real-time ([0016]: "FIGS. 1A and 1B illustrate an example system for performing video monitoring …"), the system comprising: a processor (Rodenas, Fig.
10, processor 1002; [0085]: "FIG. 10 illustrates a set of basic components of a computing device 1000 that can be used to implement aspects of the various embodiments. In this example, the device includes at least one processor 1002 for executing instructions that can be stored in a memory device or element 1004."); and a memory (Rodenas, [0085]: "…a memory device or element 1004") communicatively coupled to the processor ([0085]: "…at least one processor 1002…"), wherein the memory stores processor instructions, which when executed by the processor, cause the processor to: (Rodenas, [0085]: "FIG. 10 illustrates a set of basic components of a computing device 1000 that can be used to implement aspects of the various embodiments. In this example, the device includes at least one processor 1002 for executing instructions that can be stored in a memory device or element 1004.") The remaining limitations of claim 9 are substantially the same as those in claim 1. Therefore, the supporting rationale of the rejection of claim 1 applies equally as well to claim 9.

Regarding Claim 17 (Rodenas-Kim): Rodenas discloses 17. A non-transitory computer-readable medium (Rodenas, [0092]: "Storage media and other non-transitory computer readable media for containing code, or portions of code, can include any appropriate media known or used in the art, including storage media and other non-transitory media, such as, but not limited to, volatile and non-volatile,…") storing computer-executable instructions for monitoring activities and events in real-time, the computer-executable instructions configured for: (Rodenas, [0085]: "FIG. 10 illustrates a set of basic components of a computing device 1000 that can be used to implement aspects of the various embodiments … for executing instructions that can be stored in a memory device or element 1004.") The remaining limitations of claim 17 are substantially the same as those in claim 1.
Therefore, the supporting rationale of the rejection of claim 1 applies equally as well to claim 17.

Claim Rejections - 35 USC § 103

Claims 4 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Rodenas-Kim in view of Baughman et al. (US 20180217808 A1), hereinafter "Baughman".

Regarding Claims 4 and 12 (Rodenas-Kim-Baughman): Rodenas-Kim discloses 4. The method of claim 3. Rodenas-Kim does not disclose further comprising self-adaptively training the AI model using the at least one updated repository, wherein adaptively training comprises modifying in real-time or near real-time, a set of parameters of the AI model based on the at least one updated repository. However, in the same field of endeavor, Baughman discloses the following more explicitly: further comprising self-adaptively training the AI model using the at least one updated repository (Baughman, Fig. 1, repository 130), wherein adaptively training comprises modifying in real-time or near real-time, a set of parameters of the AI model based on the at least one updated repository (Baughman, [0033]: "…repository 130 can be updated in real-time to reflect the changes in object/sound associations to user responses, as machine learning program 300 learns user responses…"). Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the application to modify the teachings of Rodenas-Kim with Baughman to create the system of Rodenas-Kim as outlined above, so as to incorporate "self-adaptively training the AI model using the at least one updated repository wherein adaptively training comprises modifying in real-time or near real-time, a set of parameters of the AI model based on the at least one updated repository." as suggested by Baughman.
The reasoning is that "the system may be used for purposes such as mood determination or tracking rather than security, so the action may be to update a data repository or change an aspect of the content triggering the emotion. The determined action(s) then can be performed 922 as appropriate." (Rodenas, [0084])

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ASTEWAYE GETTU ZEWEDE, whose telephone number is (703) 756-1441. The examiner can normally be reached Mon-Fri, 8:30 am to 5:30 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, William Vaughn, can be reached at (571) 272-3922. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/ASTEWAYE GETTU ZEWEDE/
Examiner, Art Unit 2481
/WILLIAM C VAUGHN JR/
Supervisory Patent Examiner, Art Unit 2481

Prosecution Timeline

May 26, 2023: Application Filed
Apr 29, 2025: Non-Final Rejection — §103
Aug 05, 2025: Response Filed
Sep 29, 2025: Final Rejection — §103
Jan 02, 2026: Request for Continued Examination
Jan 17, 2026: Response after Non-Final Action
Feb 07, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598390
CONTROL APPARATUS, IMAGING APPARATUS, AND LENS APPARATUS
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12587663
SLIDING-WINDOW RATE-DISTORTION OPTIMIZATION IN NEURAL NETWORK-BASED VIDEO CODING
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12537980
Attention Based Context Modelling for Image and Video Compression
Granted Jan 27, 2026 (2y 5m to grant)

Patent 12470842
MULTIFOCAL CAMERA BY REFRACTIVE INSERTION AND REMOVAL MECHANISM
Granted Nov 11, 2025 (2y 5m to grant)

Patent 12470679
INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, PROGRAM, AND DISPLAY SYSTEM
Granted Nov 11, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 80%
With Interview: 99% (+37.5%)
Median Time to Grant: 2y 7m
PTA Risk: High
Based on 45 resolved cases by this examiner. Grant probability derived from career allow rate.
