Prosecution Insights
Last updated: April 19, 2026
Application No. 18/365,251

METHOD AND SYSTEM FOR ANALYTICS SELECTION BASED ON PURSUIT-CONTEXT DETAILS

Status: Non-Final OA (§103)
Filed: Aug 04, 2023
Examiner: SMALL, NAOMI J
Art Unit: 2685
Tech Center: 2600 — Communications
Assignee: Motorola Solutions Inc.
OA Round: 3 (Non-Final)
Grant Probability: 64% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 2y 10m
With Interview: 88%

Examiner Intelligence

Career Allow Rate: 64% (496 granted / 778 resolved; +1.8% vs TC avg)
Interview Lift: +24.2% for resolved cases with an interview versus without (a strong lift)
Avg Prosecution: 2y 10m typical timeline; 29 applications currently pending
Total Applications: 807 across all art units
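The headline figures above follow directly from the raw counts on the card; a minimal sketch in Python (all numbers are taken from the statistics above, variable names are illustrative):

```python
# Reproduce the examiner's headline statistics from the raw counts above.
granted = 496
resolved = 778

career_allow_rate = granted / resolved                # ~0.6375, displayed as 64%
with_interview = 0.88                                 # grant probability with interview
interview_lift = with_interview - career_allow_rate   # ~+0.242, displayed as +24.2%

print(f"Career allow rate: {career_allow_rate:.1%}")  # Career allow rate: 63.8%
print(f"Interview lift: {interview_lift:+.1%}")       # Interview lift: +24.2%
```

Note the displayed "64%" is the allow rate rounded to a whole percent, while the "+24.2%" lift is computed against the unrounded rate.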

Statute-Specific Performance

§101: 2.4% (-37.6% vs TC avg)
§103: 62.9% (+22.9% vs TC avg)
§102: 19.7% (-20.3% vs TC avg)
§112: 11.1% (-28.9% vs TC avg)
Tech Center averages are estimates • Based on career data from 778 resolved cases
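Each "vs TC avg" delta is the examiner's rate minus the Tech Center baseline, so the baseline behind the chart can be recovered from the table itself; a quick consistency check (figures copied from the table above):

```python
# Recover the Tech Center baseline implied by each "vs TC avg" delta.
examiner_rate = {"§101": 2.4, "§103": 62.9, "§102": 19.7, "§112": 11.1}
delta_vs_tc = {"§101": -37.6, "§103": 22.9, "§102": -20.3, "§112": -28.9}

# baseline = examiner rate - delta; round to one decimal to match the table.
tc_baseline = {s: round(examiner_rate[s] - delta_vs_tc[s], 1) for s in examiner_rate}
print(tc_baseline)  # every statute implies the same 40.0% baseline estimate
```

All four statutes resolve to the same 40.0% figure, consistent with a single Tech Center average estimate being used as the comparison line.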

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on January 31, 2026 has been entered.

Claims 1, 8, 10 and 11 have been amended. Claims 9 and 15 have been cancelled. Claims 16-19 have been newly added. Claims 1-8, 10-14 and 16-19 are currently pending.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-5, 8, 10-12, 14 and 16-19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Cutcher et al. (Cutcher; US Patent No. 10,388,132 B2) in view of Merchant (US Pub No. 2023/0186670 A1).

As per claim 1, Cutcher teaches a computer-implemented method comprising: tracking, using at least one processor, a pursuer object and a pursued object (col. 7, lines 47-65; col. 8, line 55), wherein the pursuer object is engaged in movement consistent with potentially entering a Field Of View (FOV) of at least one fixed-location camera at a future point in time (Fig. 4 & 5, Patrol Object 104, Field of View 112, Camera 120); using analytics on first video or image data (col. 5, lines 25-28: suspect located in area to be searched by patrol).

Cutcher does not expressly teach using analytics on first video or image data corresponding to at least the pursuer object to determine a plurality of pursuit-context details; matching, using the at least one processor, of the pursuit-context details to [[a]] effect selection specific behavior-targeted analytic from a plurality of object behavior-targeted analytics; after the selection of the specific behavior-targeted analytic specific behavior-targeted analytic[[s]] to act on second video or image data captured by the at least one fixed-location camera in generating information relevant to a potential pursuer confrontation making the generated information electronically accessible to a mobile electronic device prior to the potential pursuer confrontation wherein the analytics is further used to determine, based on a change in a behavior of the pursued object, an increase in a threat level to the pursuer object.
Merchant teaches using analytics on first video or image data corresponding to at least the pursuer object to determine a plurality of pursuit-context details (paragraph [0016], lines 8-15: location, environment; paragraph [0020], lines 48-49: length of time); matching, using the at least one processor, of the pursuit-context details to [[a]] effect selection specific behavior-targeted analytic from a plurality of object behavior-targeted analytics (paragraph [0021], lines 21-51: once a change in a field of view has been detected monitoring a specific threat); after the selection of the specific behavior-targeted analytic specific behavior-targeted analytic[[s]] to act on second video or image data captured by the at least one fixed-location camera in generating information relevant to a potential pursuer confrontation (paragraph [0044]); and making the generated information electronically accessible to a mobile electronic device prior to the potential pursuer confrontation (paragraph [0015]) wherein the analytics is further used to determine, based on a change in a behavior of the pursued object, an increase in a threat level to the pursuer object (paragraph [0021]).

It would have been obvious to one having ordinary skill in the art at the time the invention was effectively filed to implement the threat detection as taught by Merchant, since Merchant states in paragraph [0015] that such a modification would result in notifying law enforcement of a situation which requires police attention.

As per claim 2, Cutcher in view of Merchant further teaches the computer-implemented method of claim 1 wherein the pursuer object is a police officer (Cutcher, col. 5, lines 4-5) and the pursued object is a suspected perpetrator of an offence (Cutcher, col. 2, lines 5-6).
As per claim 3, Cutcher in view of Merchant further teaches the computer-implemented method of claim 1 further comprising automatically generating an audibly or visibly perceivable alert on the mobile electronic device (Cutcher, col. 8, lines 12-15).

As per claim 4, Cutcher in view of Merchant further teaches the computer-implemented method of claim 3 wherein the generating of the audibly or visibly perceivable alert (Cutcher, col. 8, lines 12-15) is in response to detecting that a following distance (Cutcher, col. 7, lines 7-8: patrol enters a location, therefore, the patrol is following behind the suspect), determined based on a geographic-following separation of the pursued object relative to the pursuer object, has dropped below an alert-triggering threshold (Cutcher, col. 2, lines 14-36; col. 7, lines 47-65).

As per claim 5, Cutcher in view of Merchant further teaches the computer-implemented method of claim 3 wherein the generating of the audibly or visibly perceivable alert (Cutcher, col. 8, lines 12-15) is in response to detecting that an approaching distance (Cutcher, col. 1, lines 64-65: approaching), determined based on a geographic-approaching separation of the pursuer object relative to the pursued object, has dropped below an alert-triggering threshold (Cutcher, col. 2, lines 14-36; col. 7, lines 47-65).

As per claim 8, Cutcher in view of Merchant further teaches the computer-implemented method of claim 1 wherein: the first video or image data corresponds to both the pursued object and the pursuer object (Merchant, paragraph [0015]), and the analytics is further used to determine a change in mobility potential of the pursued object (Cutcher, col. 7, lines 47-65: suspect concealing themself).

As per claim 10, Cutcher in view of Merchant further teaches the computer-implemented method of claim [[9]] 1 further comprising automatically generating, on the mobile electronic device, an audibly or visibly perceivable alert corresponding to the (Cutcher, col. 8, lines 4-15: difference in images indicating the suspect is concealing themself).

As per claim 11, Cutcher teaches a system comprising: at least one processor (col. 4, line 41); at least one fixed-location camera configured to capture video or image data (Fig. 1, Camera 102); and at least one electronic storage medium storing program instructions that when executed by the at least one processor cause the at least one processor to perform (col. 4, lines 41-56): tracking a pursuer object and a pursued object (col. 7, lines 47-65; col. 8, line 55), wherein the pursuer object is engaged in movement consistent with potentially entering a Field Of View (FOV) of the at least one fixed-location camera at a future point in time (Fig. 4 & 5, Patrol Object 104, Field of View 112, Camera 120); analyzing additional video or image data (col. 5, lines 25-28: suspect located in area to be searched by patrol).

Cutcher does not expressly teach analyzing additional video or image data corresponding to at least the pursuer object to determine a plurality of pursuit-context details; matching of the pursuit-context details to [[a]] effect selection specific behavior-targeted analytic from a plurality of object behavior-targeted analytics; after the selection of the specific behavior-targeted analytic specific behavior-targeted analytic[[s]] to act on second video or image data captured by the at least one fixed-location camera in generating information relevant to a potential pursuer confrontation making the generated information electronically accessible to a mobile electronic device prior to the potential pursuer confrontation wherein the analyzing further includes having analytics determine, based on a change in a behavior of the pursued object, an increase in a threat level to the pursuer object.
Merchant teaches analyzing additional video or image data corresponding to at least the pursuer object to determine a plurality of pursuit-context details (paragraph [0016], lines 8-15: location, environment; paragraph [0020], lines 48-49: length of time); matching of the pursuit-context details to [[a]] effect selection specific behavior-targeted analytic from a plurality of object behavior-targeted analytics (paragraph [0021], lines 21-51: once a change in a field of view has been detected monitoring a specific threat); after the selection of the specific behavior-targeted analytic specific behavior-targeted analytic[[s]] to act on second video or image data captured by the at least one fixed-location camera in generating information relevant to a potential pursuer confrontation (paragraph [0044]); and making the generated information electronically accessible to a mobile electronic device prior to the potential pursuer confrontation (paragraph [0015]) wherein the analyzing further includes having analytics determine, based on a change in a behavior of the pursued object, an increase in a threat level to the pursuer object (paragraph [0021]).

It would have been obvious to one having ordinary skill in the art at the time the invention was effectively filed to implement the threat detection as taught by Merchant, since Merchant states in paragraph [0015] that such a modification would result in notifying law enforcement of a situation which requires police attention.

As per claim 12, (see rejection of claim 2 above) the system of claim 11 wherein the pursuer object is a police officer and the pursued object is a suspected perpetrator of an offence.

As per claim 14, (see rejection of claim 8 above) the system of claim 11 wherein: the additional video or image data corresponds to both the pursued object and the pursuer object, and the at least one processor is further caused to perform determining a change in mobility potential of the pursued object.
As per claim 16, Cutcher in view of Merchant further teaches the computer-implemented method of claim 1 wherein the specific behavior-targeted analytic is ambush detection, and the change in the behavior of the pursued object is consistent with a potential ambush (Cutcher, col. 7, lines 47-65: suspect concealing themself).

As per claim 17, Cutcher in view of Merchant further teaches the computer-implemented method of claim 16 further comprising automatically generating an audibly or visibly perceivable ambush alert on the mobile electronic device (Cutcher, col. 8, lines 28-40).

As per claim 18, Cutcher in view of Merchant further teaches the system of claim 11 wherein the specific behavior-targeted analytic is ambush detection, and the change in the behavior of the pursued object is consistent with a potential ambush (Cutcher, col. 7, lines 47-65: suspect concealing themself).

As per claim 19, Cutcher in view of Merchant further teaches the system of claim 18 wherein the at least one processor is further caused to perform generating an audibly or visibly perceivable ambush alert on the mobile electronic device (Cutcher, col. 8, lines 28-40).

Claim(s) 6, 7 and 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Cutcher in view of Merchant as applied above, and further in view of Soo et al. (Soo; US Patent No. 11,417,105 B1).

As per claim 6, Cutcher in view of Merchant teaches the computer-implemented method of claim 1. Cutcher in view of Araya does not expressly teach wherein: the first video or image data is video data captured from another fixed-location camera, and the analytics is configured to generate a notification of the pursuer object leaving an FOV of the another fixed-location camera when the pursuer object moves into a defined exit region of the FOV of the another fixed-location camera.

Soo teaches wherein: the first video or image data is video data captured from another fixed-location camera (col. 3, line 32: multiple cameras), and the analytics is configured to generate a notification of the pursuer object leaving an FOV of the another fixed-location camera when the pursuer object moves into a defined exit region of the FOV of the another fixed-location camera (col. 2, lines 41-51). It would have been obvious to one having ordinary skill in the art at the time the invention was effectively filed to implement the multiple cameras and monitoring users entering and exiting a field of view of each camera as taught by Soo, since Soo states in column 2, lines 41-65 that such a modification would result in determining in which direction a suspect is travelling.

As per claim 7, Cutcher in view of Merchant, and further in view of Soo, further teaches the computer-implemented method of claim 6 further comprising automatically displaying, on a screen of the mobile electronic device and based on the notification (Soo, col. 2, lines 49-51), live video captured by the at least one fixed-location camera (Merchant, paragraph [0031]).

As per claim 13, (see rejection of claim 6 above) the system of claim 11 wherein: the additional video or image data is video data captured from another fixed-location camera, and the at least one processor is further caused to perform generating a notification of the pursuer object leaving an FOV of the another fixed-location camera when the pursuer object moves into a defined exit region of the FOV of the another fixed-location camera.

Response to Arguments

Applicant's arguments with respect to the above claim(s) have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NAOMI J SMALL whose telephone number is (571)270-5184.
The examiner can normally be reached Monday-Friday 8:30AM-5PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Quan-Zhen Wang, can be reached at 571-272-3114. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/NAOMI J SMALL/
Primary Examiner, Art Unit 2685

Prosecution Timeline

Aug 04, 2023 — Application Filed
May 16, 2025 — Non-Final Rejection (§103)
Jul 07, 2025 — Interview Requested
Jul 17, 2025 — Applicant Interview (Telephonic)
Aug 01, 2025 — Response Filed
Aug 09, 2025 — Examiner Interview Summary
Nov 01, 2025 — Final Rejection (§103)
Jan 31, 2026 — Request for Continued Examination
Feb 01, 2026 — Response after Non-Final Action
Feb 21, 2026 — Non-Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12600362 — DIAGNOSIS APPARATUS — granted Apr 14, 2026 (2y 5m to grant)
Patent 12591943 — Systems and Methods for Guiding Pedestrians to Balance Congestion — granted Mar 31, 2026 (2y 5m to grant)
Patent 12583472 — DRIVING ASSISTANCE DEVICE, VEHICLE, METHOD, AND COMPUTER READABLE STORAGE MEDIUM — granted Mar 24, 2026 (2y 5m to grant)
Patent 12585429 — METHODS AND SYSTEMS FOR INTERACTING WITH AUDIO EVENTS VIA MOTION INPUTS — granted Mar 24, 2026 (2y 5m to grant)
Patent 12567318 — HELMET, METHOD AND SERVER FOR DETECTING A LIKELIHOOD OF AN ACCIDENT — granted Mar 03, 2026 (2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 64%
With Interview: 88% (+24.2%)
Median Time to Grant: 2y 10m
PTA Risk: High
Based on 778 resolved cases by this examiner. Grant probability derived from career allow rate.
