Prosecution Insights
Last updated: April 19, 2026
Application No. 17/823,590

INCIDENT SLICING FOR PREVENTION OF SOCIAL AND VIRTUAL THREATS

Non-Final OA (§102/§103)

Filed: Aug 31, 2022
Examiner: TILLERY, RASHAWN N
Art Unit: 2174
Tech Center: 2100 — Computer Architecture & Software
Assignee: AT&T Intellectual Property I, L.P.
OA Round: 1 (Non-Final)

Grant Probability: 64% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 3y 10m
With Interview: 76%

Examiner Intelligence

Career Allow Rate: 64% (394 granted / 611 resolved; +9.5% vs TC avg)
Interview Lift: +11.6% on resolved cases with interview (moderate, ~+12% lift)
Typical Timeline: 3y 10m avg prosecution; 32 applications currently pending
Career History: 643 total applications across all art units
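The headline figures in this card can be reproduced from the raw counts. A minimal sketch in Python: the 394/611 counts and the +11.6% lift come from the card above, while deriving the with-interview rate as base allow rate plus lift is an assumption about how the dashboard combines them.

```python
# Reproduce the examiner-intelligence headline figures from raw counts.
granted = 394           # granted cases (from the card above)
resolved = 611          # resolved cases (from the card above)

allow_rate = granted / resolved       # career allow rate, ~0.645
interview_lift = 0.116                # reported lift for cases with interview

# Assumed derivation: with-interview rate = base allow rate + lift.
with_interview = allow_rate + interview_lift

print(f"Career allow rate: {allow_rate:.0%}")      # displayed as 64%
print(f"With interview:    {with_interview:.0%}")  # displayed as 76%
```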

Statute-Specific Performance

§101: 5.1% (-34.9% vs TC avg)
§103: 61.3% (+21.3% vs TC avg)
§102: 22.8% (-17.2% vs TC avg)
§112: 5.4% (-34.6% vs TC avg)

Tech Center averages are estimates. Based on career data from 611 resolved cases.
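A quick consistency check on the table above: subtracting each "vs TC avg" delta from its statute rate yields the same implied Tech Center baseline in every row. This sketch assumes the deltas are simple differences against a single baseline estimate; the rates and deltas themselves come from the table.

```python
# Verify that each "vs TC avg" delta equals the statute rate minus a
# common Tech Center baseline (all values in percent, from the table).
rates  = {"§101": 5.1,   "§103": 61.3, "§102": 22.8, "§112": 5.4}
deltas = {"§101": -34.9, "§103": 21.3, "§102": -17.2, "§112": -34.6}

# Implied Tech Center average per statute: rate - delta.
implied = {k: round(rates[k] - deltas[k], 1) for k in rates}

print(implied)  # every row implies the same 40.0% baseline
```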

Office Action

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

1. This communication is responsive to the application filed 8/31/2022.

2. Claims 1-2, 5-17 and 19-23 are pending in this application. Claims 1, 16 and 19 are independent claims. In the instant Amendment, claims 3-4 and 18 were canceled and claims 21-23 were added. This action is made Non-Final.

Claim Rejections - 35 USC § 102

3. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

4. Claim(s) 1-2, 5-14, 16-17 and 19-22 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Sarhan (US 2012/0133774).
Regarding claim 1, Sarhan discloses a system, comprising: a processor; and a memory that stores executable instructions that, when executed by the processor, facilitate performance of operations (see fig. 1, where a computer is shown), comprising: identifying incident locations associated with respective incident risks based on respective user activities associated with the incident locations being determined to satisfy an incident risk criterion (see paragraphs [0020] and [0024]-[0025]; e.g., “For each location X, we need to provide all possible threats to be detected (vehicle detection, face detection, face recognition, motion detection, etc.) and their associated threat levels. To meet the I/O bandwidth requirements, the storage subsystem 20, which is in communication with servers 14 and/or monitoring system 16 is ideally an advanced storage system such as RAID (Redundant Array of Individual Disks)”); based on the identifying, determining targeted surveillance protocols for the incident locations comprising determining the targeted surveillance protocols based on respective types of the respective incident risks associated with the incident locations and respective severities of the respective incident risks, wherein utilizing the targeted surveillance protocols comprises utilizing respective surveillance services facilitated by resources of a communication network (see paragraphs [0008], [0020]-[0021], [0025], [0043] and [0052]; e.g., “Processing servers 14 receive video streams from the video sources 12. The servers 14 process and filter the received video streams by executing one or more computer vision algorithms and control various associated video sources by running dynamic bandwidth allocation protocols. They also help in archiving the received video streams. Dynamic bandwidth allocation is based on potential threat level (increased threat level increased bandwidth allocated for a video source), placement of video sources, location importance, site map, accuracy functions of computer vision algorithms, and dynamic network behavior.”); and controlling performance of the targeted surveillance protocols utilizing the respective surveillance services and the resources (see paragraphs [0021], [0025], [0043] and [0052]; e.g., “bandwidth for a particular video source may be reduced when its remaining battery level is low, especially, if the source is determined not to be very important at that time”).

Regarding claim 2, Sarhan discloses wherein the resources comprise network resources of the communication network and wherein determining the targeted surveillance protocols comprises determining an allocation of the network resources for performance of the surveillance services by the communication network based on the respective types of the respective incident risks respectively associated with the incident locations and the respective severities of the respective incident risks (see paragraphs [0008], [0020]-[0021], [0025], [0043] and [0052]; e.g., “Processing servers 14 receive video streams from the video sources 12. The servers 14 process and filter the received video streams by executing one or more computer vision algorithms and control various associated video sources by running dynamic bandwidth allocation protocols. They also help in archiving the received video streams. Dynamic bandwidth allocation is based on potential threat level (increased threat level increased bandwidth allocated for a video source).”).
Regarding claim 5, Sarhan discloses wherein determining the allocation comprises determining respective amounts and respective speeds of respective computer processing hardware utilized by the communication network in association with performing the respective surveillance services (see paragraphs [0043] and [0053]-[0055]; e.g., allocation is partly based on power consumption of processing hardware).

Regarding claim 6, Sarhan discloses wherein the resources further comprise security monitoring devices that capture and send user activity information about the respective user activities at the incident locations to network equipment of the communication network, and wherein determining the allocation comprises determining communication scheduling parameters that control a communication protocol associated with sending the user activity information by the security monitoring devices to the network equipment (see paragraphs [0021], [0025], [0043] and [0052]; e.g., “bandwidth for a particular video source may be reduced when its remaining battery level is low, especially, if the source is determined not to be very important at that time”).

Regarding claim 7, Sarhan discloses wherein the communication scheduling parameters comprise at least one of: a transmission rate parameter, a latency level parameter, and a reliability level parameter (see paragraphs [0028], [0033], [0052] and [0053]; e.g., transmission rate).

Regarding claim 8, Sarhan discloses wherein determining the targeted surveillance protocols further comprises determining security monitoring device performance information regarding a number of the security monitoring devices to activate, respective rates of data capture by the security monitoring devices and respective qualities of the data capture by the security monitoring devices, and wherein the controlling comprises controlling respective activations and respective data capture performances of the security monitoring devices based on the security monitoring device performance information (see paragraphs [0020] and [0024]-[0025]; e.g., “For each location X, we need to provide all possible threats to be detected (vehicle detection, face detection, face recognition, motion detection, etc.) and their associated threat levels.”).

Regarding claim 9, Sarhan discloses wherein the security monitoring devices comprise respective user equipment associated with respective users at the incident locations (see paragraphs [0020] and [0024]-[0025]; e.g., “For each location X, we need to provide all possible threats to be detected (vehicle detection, face detection, face recognition, motion detection, etc.) and their associated threat levels.”).

Regarding claim 10, Sarhan discloses wherein determining the targeted surveillance protocols comprises selecting the respective user equipment based on respective battery levels of the respective user equipment and wherein the controlling comprises: sending respective requests to the respective user equipment requesting capture and provision of the user activity information to the network equipment; and receiving the user activity information by the network equipment in response to the respective requests (see paragraphs [0043] and [0052]; e.g., “bandwidth for a particular video source may be reduced when its remaining battery level is low, especially, if the source is determined not to be very important at that time”).
Regarding claim 11, Sarhan discloses wherein the controlling comprises: sending a notification to a user equipment indicating a presence within an incident location of the incident locations based on detection of the user equipment within the incident location (see paragraph [0022]; e.g., real-time alerts).

Regarding claim 12, Sarhan discloses wherein the identifying comprises identifying an incident location of the incident locations based on reception of incident report data indicating a user associated with the incident location is executing a risk behavior or exhibiting a risk attribute (see paragraphs [0020] and [0024]-[0025]; e.g., “For each location X, we need to provide all possible threats to be detected (vehicle detection, face detection, face recognition, motion detection, etc.) and their associated threat levels.”).

Regarding claim 13, Sarhan discloses wherein the controlling comprises: actively monitoring, via one or more of the respective devices, user activity of the user based on the reception of incident report data; and sending a notification directed to a device associated with the user to indicate, to the user, that the user has been flagged as a person of interest based on the risk behavior or the risk attribute and to indicate, to the user, that the user activity of the user is being actively monitored (see paragraphs [0020] and [0024]-[0025]; e.g., “For each location X, we need to provide all possible threats to be detected (vehicle detection, face detection, face recognition, motion detection, etc.) and their associated threat levels.”; also see paragraph [0022]; e.g., alerts).

Regarding claim 14, Sarhan discloses wherein the user comprises a first user, the device comprises a first device, and the notification comprises a first notification, wherein determining the targeted surveillance protocols comprises identifying a second user determined to have a role related to reducing or eliminating the risk behavior or the risk attribute, and wherein the controlling further comprises: sending a second notification directed to the second device, the second notification: identifying the first user, indicating that the first user has been flagged as the person of interest based on the risk behavior or the risk attribute, and instructing the second user to perform an action determined to reduce or eliminate the risk behavior or the risk attribute in accordance with the role (see paragraphs [0020] and [0024]-[0025]; e.g., “For each location X, we need to provide all possible threats to be detected (vehicle detection, face detection, face recognition, motion detection, etc.) and their associated threat levels.”).

Claims 16-17 are similar in scope to claims 1-2, respectively, and are therefore rejected under similar rationale. Claims 19-20 are similar in scope to claims 1-2, respectively, and are therefore rejected under similar rationale. Claims 21-22 are similar in scope to claims 12-13, respectively, and are therefore rejected under similar rationale.

Claim Rejections - 35 USC § 103

5. Claim(s) 15 and 23 is/are rejected under 35 U.S.C. 103 as being unpatentable over Sarhan in view of Meredith et al (“Meredith,” US 2018/0351809).

Regarding claim 15, Sarhan does not expressly disclose wherein the incident locations comprise a virtual location associated with a virtual world. However, Meredith discloses wherein the incident locations comprise a virtual location associated with a virtual world (see paragraph [0022]; e.g., virtual reality).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the present invention to include Meredith’s teachings in Sarhan’s user interface in an effort to provide a well-known alternative environment.

Claim 23 is similar in scope to claim 15 and is therefore rejected under similar rationale.

Conclusion

6. The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure: Innes et al (US 2018/0060153); Weast et al (US 2016/0189517).

7. Any inquiry concerning this communication or earlier communications from the examiner should be directed to RASHAWN N TILLERY, whose telephone number is (571) 272-6480. The examiner can normally be reached M-F 9:00a-5:30p.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, William L Bashore, can be reached at (571) 272-4088. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/RASHAWN N TILLERY/Primary Examiner, Art Unit 2174

Prosecution Timeline

Aug 31, 2022: Application Filed
Feb 06, 2026: Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this examiner involving similar technology

Patent 12602701: INTERACTIVE MAP INTERFACE INCORPORATING CUSTOMIZABLE GEOSPATIAL DATA
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12547302: PAGE PRESENTATION METHOD, DISPLAY SYSTEM AND STORAGE MEDIUM
Granted Feb 10, 2026 (2y 5m to grant)

Patent 12542871: DATA PROCESSING METHOD AND APPARATUS, DEVICE, AND READABLE STORAGE MEDIUM
Granted Feb 03, 2026 (2y 5m to grant)

Patent 12536219: DIGITAL CONTAINER FILE FOR MULTIMEDIA PRESENTATION
Granted Jan 27, 2026 (2y 5m to grant)

Patent 12524138: METHOD AND APPARATUS FOR ADJUSTING POSITION OF VIRTUAL BUTTON, DEVICE, STORAGE MEDIUM, AND PROGRAM PRODUCT
Granted Jan 13, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 64%
With Interview: 76% (+11.6%)
Median Time to Grant: 3y 10m
PTA Risk: Low

Based on 611 resolved cases by this examiner. Grant probability derived from career allow rate.
