Prosecution Insights
Last updated: April 19, 2026
Application No. 18/792,195

SYSTEMS AND METHODS FOR ACTIVE DENIAL SECURITY SYSTEMS

Non-Final OA (§101, §102, §112)
Filed: Aug 01, 2024
Examiner: CHANG, TOM Y
Art Unit: 2455
Tech Center: 2400 — Computer Networks
Assignee: Myti Inc.
OA Round: 1 (Non-Final)
Grant Probability: 54% (Moderate)
OA Rounds: 1-2
To Grant: 3y 11m
With Interview: 74%

Examiner Intelligence

Career Allow Rate: 54% (241 granted / 448 resolved; -4.2% vs TC avg)
Interview Lift: +20.1% for resolved cases with interview (strong)
Typical Timeline: 3y 11m avg prosecution (26 currently pending)
Career History: 474 total applications across all art units

Statute-Specific Performance

§101: 11.6% (-28.4% vs TC avg)
§103: 46.8% (+6.8% vs TC avg)
§102: 17.9% (-22.1% vs TC avg)
§112: 14.3% (-25.7% vs TC avg)
Tech Center averages are estimates • Based on career data from 448 resolved cases
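The headline percentages above can be reproduced from the raw counts the page reports. The sketch below is a minimal recomputation; the counts (241 granted / 448 resolved) and the +20.1% interview lift come from the page itself, while the formulas are assumptions about how the tool derives its figures.

```python
# Recompute the dashboard's headline figures from its raw counts.
# Assumption: allow rate = granted / resolved, and the with-interview
# figure = allow rate + interview lift (both then rounded for display).

granted, resolved = 241, 448

career_allow_rate = granted / resolved
print(f"Career allow rate: {career_allow_rate:.1%}")  # ~53.8%, shown as 54%

interview_lift = 0.201  # "+20.1% Interview Lift" from the page
with_interview = career_allow_rate + interview_lift
print(f"With interview:    {with_interview:.1%}")     # ~73.9%, shown as 74%
```

Note that 53.8% + 20.1% ≈ 73.9%, which rounds to the 74% "With Interview" figure shown, suggesting the tool adds the lift to the unrounded allow rate before rounding for display.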

Office Action

Rejections: §101, §102, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is responsive to the communication received on 08/01/2024. The applicant has submitted 1 claim for examination; all claims are currently pending.

The Examiner recommends filing a written authorization for Internet communication in response to the present action. Doing so permits the USPTO to communicate with Applicant using Internet email to schedule interviews or discuss other aspects of the application. Without a written authorization in place, the USPTO cannot respond to Internet correspondence received from Applicant. The preferred method of providing authorization is by filing form PTO/SB/439, available at: https://www.uspto.gov/patent/forms/forms. See MPEP § 502.03 for other methods of providing written authorization.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim 1 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea comprising analyzing scan data and triggering a deterrent action responsive to determining that a prohibited activity has occurred, in the category of certain methods of organizing human activity, without significantly more. The claim recites determining whether the scan results include activities that are prohibited and triggering a deterrent in response, which corresponds to activities performed by human security personnel in reviewing security logs/data/video and judging that a prohibited activity has occurred.
This judicial exception is not integrated into a practical application because the claim recites, in broad general terms, the determining of whether a prohibited activity has occurred. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because it does not recite significant limitations defining how such a determination is made. The claim recites a machine learning model to perform the determination but no significant language regarding how such a machine-learning-based determination is performed, and so amounts to no more than a general linking to machine learning. The claim recites determining to authorize a deterrent action and performing the action in broad general terms that are not significant. The claim recites additional steps, such as receiving a request to authenticate… and generating a scan result…, that correspond to data-gathering steps, which are examples of extra-solution activities.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a): (a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112: The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claim 1 is rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

The claim recites a method comprising a processing unit performing various functional steps. There is no support for such a unit in the specification. The specification does not describe the processing unit in a manner that would allow one of ordinary skill to make or use the invention. Applicant is reminded that under 35 U.S.C. 112, first paragraph, an original claim may lack written description support when the claim defines the invention in functional language specifying a desired result but the disclosure fails to sufficiently identify how the function is performed or the result is achieved (see MPEP § 2163.03(V)). Claim 1 recites a processing unit performing functions without sufficient disclosure of structure to support those functions; in fact, the specification is devoid of the term "processing unit."

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claim 1 is rejected under 35 U.S.C. 102(a)(1) and (a)(2) as being anticipated by Zhou (US 2023/0005360).

Regarding claim 1, Zhou teaches a method, comprising: receiving, by a processing unit, a request to authenticate a user of an active denial management system from a computing terminal (an on-premise device detects a potential security activity and initiates a request to the cloud system to confirm/analyze (i.e., authenticate) the potential security activity, ¶ 25):

[0025] In some embodiments, the method includes detecting, via one or more surveillance sensing devices, a potential security activity involving at least one human body; identifying, via the one or more surveillance sensing devices, audio/video surveillance data of the potential security activity; streaming the audio/video surveillance data of the potential security activity to a cloud-based threat assessment module; performing, at the cloud-based threat assessment module, a threat-severity assessment for the potential security activity based on the audio/video surveillance data, wherein performing the threat-severity assessment for the potential security activity includes: providing, to one or more machine learning models instantiated in the cloud-based threat assessment module, one or more image and/or audio frames underpinning the audio/video surveillance data as input; generating, via the one or more machine learning models, one or more threat-informative inferences based on the one or more image and/or audio frames provided as input; and assigning a threat-severity score to the potential security activity based on the one or more threat-informative inferences; engaging in an automated-conversational dialogue with the at least one human body involved in the potential security activity based on determining that the threat-severity score exists within a pre-determined threat-severity score range; assigning a new threat-severity score to the potential security activity based on the automated-conversational dialogue with the at least one human body; and automatically executing, via the one or more surveillance sensing devices, one or more security actions that mitigate the potential security activity based on the new threat-severity score assigned to the potential security activity.

generating, by the processing unit, a scan result of a security facility after the request, wherein the scan result comprises a representation of a set of activities at a location of the security facility (streaming of audio/video (i.e., scanning) of the residence/non-residence facility captures the potential security activities, and such captured data is sent to the cloud system for analysis, ¶¶ 25-29):

[0025] (reproduced above)

[0029] The one or more surveillance sensing devices 110 of the system 100 may be on-premise security devices that passively or actively surveil one or more pre-determined locations or areas, such as an entrance of a dwelling, a side entrance of the dwelling, a back entrance of the dwelling, and/or the like. The one or more surveillance sensing devices 110 may also be installed at non-residence properties or structures, such as a business office, a storage facility, or the like.
applying, by the processing unit, a machine learning model to the scan result such that the machine learning model determines whether the scan result includes at least one activity of the set of activities that is prohibited by a security policy set before the request is received (a machine learning model is used to analyze the captured data and conclude that a security activity has occurred, ¶¶ 25, 46):

[0025] (reproduced above)

[0046] The threat-severity score produced by the severity-aware machine learning model may be scaled between 0-100, wherein a threat-severity score of 0 indicates a 0% probability that activity identified in the AV surveillance data contains malicious activity and a threat-severity score of 100 indicates a 100% probability that the activity identified in the AV surveillance data contains malicious activity.

performing, by the processing unit, based on the security policy and the at least one activity, an authorization of an activation of a deterrent control system that is expected to deter the at least one activity (based on security rules, a responsive action is triggered, such as triggering a loud security tone, turning on lights, etc., ¶¶ 48, 85):

[0085] Additionally, or alternatively, in a second implementation, S230 may function to estimate the severity of activity identified in the surveillance data via one or more heuristics/rules. For instance, in some such implementations, S230 may function to automatically estimate that the activity identified in the surveillance data contains malicious activity if S230 determines, via the one or more computed threat-informative inferences, that the surveillance data includes one or more weapons, includes one or more "un-welcomed" individuals, includes an acoustic threat (e.g., gunshot), includes an atypical condition/scenario (e.g., a fire), includes an unrecognized person, and/or includes a person listed on a public safety registry. That is, in this second implementation, S230 may function to estimate a threat-severity of a subject activity based on one or more features extracted from the activity scene and/or threat-informative inferences satisfying threat-severity logic (e.g., logic-1: if Weapon detected then increase threat severity estimate, etc.), threat-severity thresholds (e.g., human body in a threatening manner detected beyond a maximum period, etc.), and/or threat-severity rules (e.g., weapon + unidentified person = increased threat severity, etc.).

[0048] In some embodiments, the security control/mitigation instructions that may be transmitted to the one or more surveillance sensing device(s) 110 may include, but may not be limited to, instructions for playing/displaying a specified warning message (e.g., "Police will be notified if you do not leave the property in the next 30 seconds"), instructions for adjusting the pan, tilt, and/or zoom (PTZ) of the surveillance sensing device, instructions for playing a (e.g., loud) security alarm tone, instructions for notifying a pre-defined security team, instructions for calling the subscriber, instructions to ignore the potential security activity, playing a crime deterrent noise/sound (e.g., dog barking sound), activating a particular function of the surveillance sensing device (e.g., turning a flood light on, intermittent bursts of flashing (e.g., red) lights), and/or the like.

generating, by the processing unit, the authorization of the activation of the deterrent control system for the user; taking, by the processing unit, an action based on the authorization (the security system triggers a responsive action to be performed, such as triggering an alarm, turning on lights, or warning that police have been alerted, ¶ 48):
[0048] (reproduced above)

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Tom Y. Chang, whose telephone number is 571-270-5938. The examiner can normally be reached Monday-Friday from 9am to 5pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, William Trost, can be reached at (571) 272-7872. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from Patent Center. Status information for published applications may be obtained from Patent Center. Status information for unpublished applications is available through Patent Center for authorized users only. Should you have questions about access to Patent Center, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.

/TOM Y CHANG/
Primary Examiner, Art Unit 2442
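The anticipation mapping rests on Zhou's threat-assessment flow (¶¶ 25, 46, 48, 85): rule-based severity scoring on a 0-100 scale, followed by score-gated responses. The sketch below illustrates that flow in Python; every name, weight, and threshold is a hypothetical stand-in chosen for illustration, not anything disclosed by Zhou.

```python
# Illustrative sketch of Zhou's (US 2023/0005360) described flow:
# threat-informative inferences -> rule-based severity score (0-100, ¶46, ¶85)
# -> score-gated response (dialogue vs. mitigation actions, ¶¶25, 48).
# All rule weights and score bands are hypothetical.

from dataclasses import dataclass

@dataclass
class Inference:
    """Threat-informative inferences of the kind listed in ¶85."""
    weapon_detected: bool = False
    unrecognized_person: bool = False
    acoustic_threat: bool = False  # e.g., gunshot

def threat_severity(inf: Inference) -> int:
    """Heuristic scoring per ¶85: each inference raises the 0-100 estimate
    (e.g., 'if Weapon detected then increase threat severity')."""
    score = 0
    if inf.weapon_detected:
        score += 50
    if inf.unrecognized_person:
        score += 30
    if inf.acoustic_threat:
        score += 40
    return min(score, 100)

def respond(score: int) -> str:
    """¶¶25, 48: mid-range scores trigger automated dialogue; high scores
    trigger deterrent/mitigation actions; low scores are ignored."""
    if score >= 80:
        return "play security alarm tone; notify security team"
    if score >= 40:
        return "engage automated-conversational dialogue"
    return "ignore potential security activity"

# Example: weapon + unrecognized person (¶85's combined rule) escalates.
print(respond(threat_severity(Inference(weapon_detected=True,
                                        unrecognized_person=True))))
```

The two-stage structure (score, then score-gated action) mirrors why the examiner maps Zhou's ¶¶ 46 and 48 to the claim's "determining" and "deterrent activation" steps respectively.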

Prosecution Timeline

Aug 01, 2024
Application Filed
Sep 29, 2025
Non-Final Rejection — §101, §102, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12547828
TRAFFIC-BASED GPU LOAD ROUTING WITHIN LLM CLUSTERS
2y 5m to grant Granted Feb 10, 2026
Patent 12542838
METHODS, DEVICES, AND SYSTEMS FOR DETERMINING A SUBSET FOR AUTONOMOUS SHARING OF DIGITAL MEDIA
2y 5m to grant Granted Feb 03, 2026
Patent 12536243
SYSTEM AND METHOD FOR URL FETCHING RETRY MECHANISM
2y 5m to grant Granted Jan 27, 2026
Patent 12524490
SYSTEM AND METHOD FOR URL FETCHING RETRY MECHANISM
2y 5m to grant Granted Jan 13, 2026
Patent 12524491
SYSTEM AND METHOD FOR URL FETCHING RETRY MECHANISM
2y 5m to grant Granted Jan 13, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 54%
With Interview: 74% (+20.1%)
Median Time to Grant: 3y 11m
PTA Risk: Low
Based on 448 resolved cases by this examiner. Grant probability derived from career allow rate.
