Prosecution Insights
Last updated: April 19, 2026
Application No. 18/593,129

MONITORING A USER'S ENVIRONMENT TO IDENTIFY POTENTIAL THREATS TO THE USER

Status: Non-Final OA (§103)
Filed: Mar 01, 2024
Examiner: SILVA-AVINA, EMMANUEL
Art Unit: 2673
Tech Center: 2600 — Communications
Assignee: T-Mobile USA Inc.
OA Round: 1 (Non-Final)
Grant Probability: 82% (Favorable)
Expected OA Rounds: 1-2
Expected Time to Grant: 3y 1m
Grant Probability with Interview: 86%

Examiner Intelligence

Career Allow Rate: 82% — above average (54 granted / 66 resolved; +19.8% vs TC avg)
Interview Lift: +4.7% across resolved cases with interview (minimal, roughly a 5% lift)
Typical Timeline: 3y 1m average prosecution; 17 applications currently pending
Career History: 83 total applications across all art units
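These headline figures reduce to simple ratios over the examiner's 66 resolved cases. A minimal sketch of the arithmetic in Python (editor's addition; the with/without-interview counts are hypothetical placeholders, since the report states only the +4.7% result):

    # Career allow rate: granted / resolved (figures from the report).
    granted, resolved = 54, 66
    allow_rate = granted / resolved              # 0.818... -> shown as 82%

    # The "+19.8% vs TC avg" delta implies a Tech Center average near 62%.
    tc_average = allow_rate - 0.198              # ~0.62

    # Interview lift: allow rate with an interview minus allow rate without.
    # The split below is hypothetical; the report states only the result.
    with_int_granted, with_int_resolved = 17, 20            # hypothetical
    without_granted, without_resolved = 37, 46              # hypothetical
    lift = (with_int_granted / with_int_resolved
            - without_granted / without_resolved)           # ~+0.046

    print(f"allow rate {allow_rate:.1%}, TC avg ~{tc_average:.1%}, lift {lift:+.1%}")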

Statute-Specific Performance

§101: 13.0% (-27.0% vs TC avg)
§103: 55.4% (+15.4% vs TC avg)
§102: 16.6% (-23.4% vs TC avg)
§112: 13.5% (-26.5% vs TC avg)

Tech Center averages are estimates. Based on career data from 66 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This communication is in response to Application No. 18/593,129 filed 03/01/2024. Claims 1-20 are pending.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 03/01/2024 has been entered and considered. Initialed copies of the PTO-1449 by the examiner are attached.

Drawings

The drawings are objected to because the quality of Figure 4 is degraded and illegible. Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Claim Objections

Claims 8 and 14 are objected to because of the following informalities: Claims 8 and 14 should recite the phrase “user equipment” before the abbreviation “UE” in order to avoid clarity and/or 112(b) issues. Appropriate correction is required.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-6, 8-12, and 14-19 are rejected under 35 U.S.C. 103 as being unpatentable over Palamadai et al. (US 20240071081 A1) in view of Flick et al. (US 20220004775 A1) and in further view of Millican et al. (US 20220139204 A1).
Regarding claim 1, Palamadai discloses a non-transitory, computer-readable storage medium comprising instructions recorded thereon, wherein the instructions, when executed by at least one data processor of a system, cause the system to (non-transitory machine-readable medium and processor, Palamadai [0031]): obtain streaming information associated with an environment surrounding a mobile device (“a surveillance system associated with a communication network that monitors user activity and behavior in real-world environments based on user activity information captured and reported to the surveillance system via the communication network by various devices and systems connected to the communication network... In various embodiments, the devices and/or systems can include security monitoring devices deployed throughout the real-world environments that capture image data (e.g., video and/or still images), audio data, motion data, and other sensory data regarding user behavior and activity associated with the environments” Palamadai, [0015]; “The devices can include various physical communication devices distributed throughout a real-world environment 102 capable of being monitored by the system 100, including network equipment (NE 104), user equipment (UE 106) and IoT devices 108.” Palamadai, [0033]; i.e., UE includes that of a mobile device as described in [0038]) operating on a 5G wireless telecommunication network (“The communication network 110 can comprise but is not limited to, one or more wired and wireless networks, including, but not limited to, a cellular or mobile network... Fifth Generation (5G) networks” Palamadai, [0044], [0045]), wherein the streaming information includes an audio recording of the environment and a video recording of the environment (“In various embodiments, the devices and/or systems can include security monitoring devices deployed throughout the real-world environments that capture image data (e.g., video and/or still images), audio data, motion data, and other sensory data regarding user behavior and activity associated with the environments” Palamadai, [0015]), wherein the streaming information is delivered over the 5G wireless telecommunication network (“the communication network 110 can be associated with a single network provider, multiple network providers, and/or encompass a variety of different type of wired and wireless communication technologies (e.g., 3GGP, WiFi, LTE, satellite, 5G, etc.)” Palamadai, [0045]); based on the streaming information associated with the environment, identify a person in the environment and an object in the environment (“The user activity information can include any information that identifies, describes or indicates information regarding the identity, appearance, behavior and/or activity of a person or group of people relative to an environment or location.” Palamadai, [0015]; “any IoT device comprising one or more sensors that can capture information about their environment or location, including information pertaining people, objects and conditions associated with their respective environment.” Palamadai, [0042]; additionally, see Palamadai, [0065]: “For example, video data capture of a person or group of people in the real-world environment 102 can be analyzed using 2D and/or 3D image analysis to and pattern recognition to correlate physical movements of the person and/or people to defined physical behaviors and/or actions and activities, combined with object recognition to identify and characterize objects and other entities (e.g., other people, buildings, etc.) in the environment.”); monitor multiple attributes associated with the environment including a speed of movement associated with a user (“The wearable devices can also provide user activity/context information 101 related to the user's physical movement patterns, gestures, behaviors and physical activities based on data capture via wearable movement/motion sensors including fine-tuned accelerometers and gyroscopes combined with pattern. For example, the wearable devices can capture motion identifying acceleration, rotation/orientation, and/or velocity of the wearable device itself, facilitating determination of motion and movement data of the body and/or body parts to which the motion sensors are attached.” Palamadai, [0069]), based on the multiple attributes associated with the environment, determine whether an anomaly is occurring in the environment (“In some embodiments, the monitoring component 204 can be configured to monitor the input data 105 as it is received in real-time in association with detecting defined events or conditions in the input data 105 that satisfy one or more defined incident risk criteria (e.g., defined in the incident assessment data 220) that amount to an incident or a potential incident... The incident risk criteria can also relate to detected presence of specific users or any user at or near an incident location. The incident risk criteria can also relate to one or more health status factors considered to be non-conforming or potentially non-conforming to acceptable or safe health states for a particular location or environment and/or individual. The incident risk criteria can also relate to human behavior or activity that may be considered to impose a risk of injury or harm to oneself or others, including physical and/or mental injury to individuals and/or property.” Palamadai, [0072]); and upon determining that the anomaly is a threat to the user, notify the user of the threat (“The surveillance responses can also include sending notifications and alerts to UE 106 via the communication network 110 regarding detected incidents.” Palamadai, [0113], wherein the incidents involve anomalies including, for example, “An incident can also refer to any human behavior or activity that may be considered to impose a risk of injury or harm to oneself or others, including physical and/or mental injury to individuals and/or property.” Palamadai, [0073]).

Palamadai discloses all of the subject matter as described above except for specifically teaching monitor multiple attributes associated with the environment including a speed of movement associated with the person in the environment, and a speed of movement associated with the object in the environment. However, Flick, in the same field of endeavor, teaches monitor multiple attributes associated with the environment including a speed of movement associated with the person in the environment, and a speed of movement associated with the object in the environment (“Additionally and/or alternatively, identities, and in particular object types could be identified through an analysis of movement. For example movement of [automated guided vehicles] AGVs will typically tend to follow predetermined patterns and/or have typically characteristics such as constant speed and/or direction changes. In contrast to this movement of individuals will tend to be more haphazard and subject to changes in direction and/or speed allowing, AGVs and humans to be distinguished based on an analysis of movement patterns.” Flick, [0140]; i.e., objects are monitored based on their speed, and such objects can be a person in an environment and an object in an environment such as a vehicle). Therefore, it would have been obvious to one of ordinary skill in the art to combine Palamadai and Flick before the effective filing date of the claimed invention. The motivation for this combination of references would have been to monitor environments to prevent hazardous situations, for example to alert individuals by generating a notification or an audible and/or visible alert (Flick, [0092]). This motivation for the combination of Palamadai and Flick is supported by KSR exemplary rationale (G): some teaching, suggestion, or motivation in the prior art that would have led one of ordinary skill to modify the prior art reference or to combine prior art reference teachings to arrive at the claimed invention. MPEP 2141(III).

The combination of Palamadai and Flick as a whole does not expressly disclose wherein the anomaly includes a change in the speed of movement associated with the user, a change in the speed of movement associated with the person in the environment, or a change in the speed of movement associated with the object in the environment. However, Millican, in the same field of endeavor, teaches wherein the anomaly includes a change in the speed of movement associated with the user (“A sudden, abrupt or violent change in velocity of the user while performing activities having a stipulated range of speed may be indicative of an adverse situation such as a physical attack, a car crash or other types of accidents including traffic and non-traffic accidents... For example, if the activities being undertaken by the user are within a range of speeds or have a range of changes in speeds, a speed or a speed change exceeding a safety threshold would be classified as an abrupt or violent change.” Millican, [0136]). Therefore, it would have been obvious to one of ordinary skill in the art to combine Palamadai, Flick and Millican before the effective filing date of the claimed invention. The motivation for this combination of references would have been to automatically generate a distress report to an external destination upon detection of a safety-threatening adverse situation (Millican, [0120]). This motivation for the combination of Palamadai, Flick and Millican is supported by KSR exemplary rationale (G): some teaching, suggestion, or motivation in the prior art that would have led one of ordinary skill to modify the prior art reference or to combine prior art reference teachings to arrive at the claimed invention. MPEP 2141(III).

Regarding claim 2, Palamadai, Flick and Millican disclose the non-transitory, computer-readable storage medium of claim 1, comprising instructions to: obtain a location associated with the mobile device, and a crime report associated with the location (“In one or more embodiments, the surveillance system can identify incident locations or potential incident location based on analysis of received user activity data reported via respective security monitoring devices and defined information (e.g., rules, algorithms, models, etc.) correlating certain user behaviors, activities and/or attributes to incidents. Additionally, or alternatively, the surveillance system can identify incident locations and/or potential incident locations based on user provided incident reports... The incident types or categories can include a wide range of different types of incidents depending on the types of incidents the surveillance system is adapted to monitor and evaluate, including various types of criminal activity and non-criminal activity” Palamadai, [0017]); obtain, from the crime report, demographic information associated with a victim; obtain demographic information associated with the user; determine whether the demographic information associated with the user matches the demographic information associated with the victim (“The surveillance system can further evaluate and characterize incident locations and/or corresponding incidents associated with the locations based on respective types of the incidents (i.e., the particular user activity of behavior corresponding to the incident) and various other factors that influence how to respond to the incidents in an optimal manner. For example, the various other factors can relate to (but are not limited to), the location of the incident, context of the incident, the individual or individuals involved (e.g., identities, known profiles and history of the individual/individuals, number of individuals, user demographics, etc.), the probability of escalation or occurrence of the incident, the amount and validity of received incident reports about the incident, and known and forecasted risk associated with the incident.” Palamadai, [0021]); and upon determining that the demographic information associated with the user matches the demographic information associated with the victim (“the surveillance system can employ ML and AI to determine the or facilitated determining the targeted surveillance protocols based on relevant information about the respective incidents, including the incident type, severity, the location, context of the individual, the individual or individuals involved (e.g., identities, known profiles and history of the individual/individuals, number of individuals, user demographics, etc.), the probability of escalation or occurrence of the incident, the amount and validity of received incident reports about the incident, and known and forecasted risks associated with the incident.” Palamadai, [0022]), notify the user of the threat (“The surveillance responses can also include sending notifications and alerts to UE 106 via the communication network 110 regarding detected incidents.” Palamadai, [0113]).
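Editor's note: the anomaly limitation of claim 1, as mapped to Millican [0136], reduces to flagging a change in speed that exceeds a safety threshold for the activity under way. A minimal illustrative sketch of that logic (not part of the Office Action; the function name and threshold value are hypothetical):

    def is_speed_anomaly(speeds_mps: list[float], max_delta_mps: float = 5.0) -> bool:
        """Flag a sudden change in speed between consecutive samples.

        Mirrors the Millican [0136] idea: activities have a stipulated range
        of speed changes, and a change exceeding a safety threshold is
        classified as abrupt or violent. The threshold here is hypothetical.
        """
        return any(
            abs(b - a) > max_delta_mps
            for a, b in zip(speeds_mps, speeds_mps[1:])
        )

    # Example: a pedestrian's speed samples with a sudden spike and stop.
    samples = [1.4, 1.5, 1.4, 7.2, 0.0]
    if is_speed_anomaly(samples):
        print("anomaly detected - evaluate as potential threat and notify user")

Per the claim language, the same check would run against the user, other persons, and objects in the environment, with a positive result feeding the threat determination and the user notification step.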
Regarding claim 3, Palamadai, Flick and Millican disclose the non-transitory, computer-readable storage medium of claim 1, comprising instructions to: obtain information associated with the user of the mobile device including age, gender, and a medical condition associated with the user; and upon determining the anomaly is occurring in the environment, determine whether the anomaly is the threat to the user based on the user’s age, gender, and the medical condition (“The incident risk criteria can also relate to one or more behaviors, activities and/or appearances of specific users or user profiles (or avatars) considered to be non-conforming or potentially non-conforming to acceptable user behavior or activity for a particular location or environment (e.g., based on user age, gender and other demographics, based on user history, based on user roles relative to a location/environment, based on user permission relative to a location/environment, etc.)” Palamadai, [0072]; “In another example, the monitoring component 204 can be configured to identify incidents or potential incidents based on a user health parameter received in the user activity/context data 101 satisfying an incident risk criterion (e.g., heart rate exceeding a threshold, blood pressure, temperature too high, etc.)” Palamadai, [0074]).

Regarding claim 4, Palamadai, Flick and Millican disclose the non-transitory, computer-readable storage medium of claim 1, comprising instructions to: obtain a history of video recordings of the environment; determine whether a significant difference exists between the history of video recordings and the video recording, wherein the significant difference includes a weapon in the video recording, or an erratic person in the video recording; and upon determining that the significant difference exists between the history of video recordings and the video recording (“For example, in some embodiments, the surveillance system 120 can characterize some locations as historical incident locations based on known associations of the locations with incidents or a high risk of incidents in the past. For instance, certain environments may be known to have a higher rate or risk/rate of incidents based on known risks associated with the locations (e.g., attributed to the natural landscape, weather, activities performed at the locations, etc.) historical user activity at the locations, such as certain neighborhoods, buildings, event locations, crowded areas (e.g., airports, stadiums), certain natural environments (e.g., dangerous hiking trails, rapid waters, etc.) and so on. In some implementations of these embodiments, a triggering event or condition can include reception of any input data 105 and/or reception of specific events or conditions in the input data for a historical incident location. Information identifying and describing historical incident locations can be included in the incident assessment data 220.” Palamadai, [0078]; additionally, see Palamadai [0085]: “As noted above, incident criteria can relate to, but is not limited to: the behavior, activity and/or appearance (e.g., clothing/attire, objects held/carried, physical appearance, etc.) of a user in solitude; the behavior, activity and/or appearance of a group of users; identities and attributes of the individual or individual involved (e.g., known profiles and history of the individual/individuals, number of individuals, user demographics, user role relative to the environment, user permissions relative to the environment, etc.); other people (not involved in the incident) included in the environment or location (e.g., number of other people and identities and attributes of the other people); and other contextual factors associated with the incident and/or environment (e.g., regarding environmental conditions associated with the environment, objects in the environment, relative positions of the objects, time of day/year, events, weather, etc.)”), notify the user of the threat (“The surveillance responses can also include sending notifications and alerts to UE 106 via the communication network 110 regarding detected incidents.” Palamadai, [0113]).

Regarding claim 5, Palamadai, Flick and Millican disclose the non-transitory, computer-readable storage medium of claim 1, comprising instructions to: obtain a location associated with the mobile device (“the respective security monitoring devices can be associated with IUDs and location information that identifies their respective locations (e.g., fixed or mobile) where their corresponding data was captured” Palamadai, [0066]); determine whether a weather warning associated with the location exists (“The computing systems can also include various other systems 114 accessible via the communication network 110 that can be used by the surveillance system 120 to gather relevant information related to monitoring, evaluating and responding to incidents, to communicate information with regarding detected incidents and potential incidents, and/or to interface with in association with performing responses to incidents (e.g., background checking systems, emergency services systems, health information systems, weather systems” Palamadai, [0034]); determine whether the audio recording indicates a sound of distress associated with the user (“Audio recordings captured in an environment can also be processed using automated audio recognition and interpretation technologies to determine words spoken (e.g., using voice-to-text and natural language processing (NLP)), and other attributes (e.g., tone of voice, volume, pace, slurring of words, etc.). Audio recordings comprising a totality of sounds from an environment can also be processed using automated audio recognition to characterize the context of the environment (e.g., number of people in the environment, type of activity occurring in the environment, etc.)” Palamadai, [0065]); upon determining that the weather warning exists and that the audio recording indicates the sound of distress associated with the user (“incident analysis component 302 can provide for determining relevant attributes associated with an incident or potential incidents based on the monitored input data 105 that can facilitate identifying incidents or potential incidents by the incident identification component 310 and determining optimal responses for responding to the incidents... other contextual factors associated with the incident and/or environment (e.g., regarding environmental conditions associated with the environment, objects in the environment, relative positions of the objects, time of day/year, events, weather, etc.)” Palamadai, [0083]), obtain a frequent contact associated with the mobile device; and notify the frequent contact of the threat to the user (“When the platform is in receipt of signals from the apparatus 100 signifying occurrence of an emergency situation, the platform may react or counteract by reporting to the police or other responsible authorities and/or to contact with a person or persons pre-registered by the account user. For example, the person may be a next-of-kin or a family member.” Millican, [0077]). Therefore, combining Palamadai, Flick and Millican would meet the claim limitations for the same reasons as previously discussed in claim 1.

Regarding claim 6, Palamadai, Flick and Millican disclose the non-transitory, computer-readable storage medium of claim 1, comprising instructions to: upon determining that the anomaly is the threat to the user, obtain an emergency contact associated with the user (“the user experiencing a sudden fall, or accident involving violent movement; the scene monitor 110 experiencing/sensing recognizably threatening images, such as a fire, a weapon, a known felon; the scene monitor 110 experiencing/sensing audible warnings in the environment, for example, a gunshot, a scream, a siren or key words spoken to trigger and emergency event” Millican, [0082]); and notify the emergency contact associated with the user of the threat to the user (“Any friends, family or emergency services (emergency contacts) selected by the user are then notified of an event” Millican, [0092]). Therefore, combining Palamadai, Flick and Millican would meet the claim limitations for the same reasons as previously discussed in claim 1.

Regarding claim 8, Palamadai, Flick and Millican disclose a method comprising: obtaining streaming information associated with an environment surrounding a UE (“a surveillance system associated with a communication network that monitors user activity and behavior in real-world environments based on user activity information captured and reported to the surveillance system via the communication network by various devices and systems connected to the communication network... In various embodiments, the devices and/or systems can include security monitoring devices deployed throughout the real-world environments that capture image data (e.g., video and/or still images), audio data, motion data, and other sensory data regarding user behavior and activity associated with the environments” Palamadai, [0015]; “The devices can include various physical communication devices distributed throughout a real-world environment 102 capable of being monitored by the system 100, including network equipment (NE 104), user equipment (UE 106) and IoT devices 108.” Palamadai, [0033]; i.e., UE includes that of a mobile device as described in [0038]) operating on a wireless telecommunication network (“The communication network 110 can comprise but is not limited to, one or more wired and wireless networks, including, but not limited to, a cellular or mobile network... Fifth Generation (5G) networks” Palamadai, [0044], [0045]), wherein the streaming information includes an audio recording of the environment and a video recording of the environment (“In various embodiments, the devices and/or systems can include security monitoring devices deployed throughout the real-world environments that capture image data (e.g., video and/or still images), audio data, motion data, and other sensory data regarding user behavior and activity associated with the environments” Palamadai, [0015]), wherein the streaming information is delivered over the wireless telecommunication network (“the communication network 110 can be associated with a single network provider, multiple network providers, and/or encompass a variety of different type of wired and wireless communication technologies (e.g., 3GGP, WiFi, LTE, satellite, 5G, etc.)” Palamadai, [0045]); based on the streaming information associated with the environment, identifying a participant in the environment or an object in the environment (“The user activity information can include any information that identifies, describes or indicates information regarding the identity, appearance, behavior and/or activity of a person or group of people relative to an environment or location.” Palamadai, [0015]; “any IoT device comprising one or more sensors that can capture information about their environment or location, including information pertaining people, objects and conditions associated with their respective environment.” Palamadai, [0042]; additionally, see Palamadai, [0065]: “For example, video data capture of a person or group of people in the real-world environment 102 can be analyzed using 2D and/or 3D image analysis to and pattern recognition to correlate physical movements of the person and/or people to defined physical behaviors and/or actions and activities, combined with object recognition to identify and characterize objects and other entities (e.g., other people, buildings, etc.) in the environment.”); monitoring an attribute associated with the environment including at least one of: a speed of movement associated with a user (“The wearable devices can also provide user activity/context information 101 related to the user's physical movement patterns, gestures, behaviors and physical activities based on data capture via wearable movement/motion sensors including fine-tuned accelerometers and gyroscopes combined with pattern. For example, the wearable devices can capture motion identifying acceleration, rotation/orientation, and/or velocity of the wearable device itself, facilitating determination of motion and movement data of the body and/or body parts to which the motion sensors are attached.” Palamadai, [0069]), a speed of movement associated with the participant in the environment, and a speed of movement associated with the object in the environment (“Additionally and/or alternatively, identities, and in particular object types could be identified through an analysis of movement. For example movement of [automated guided vehicles] AGVs will typically tend to follow predetermined patterns and/or have typically characteristics such as constant speed and/or direction changes. In contrast to this movement of individuals will tend to be more haphazard and subject to changes in direction and/or speed allowing, AGVs and humans to be distinguished based on an analysis of movement patterns.” Flick, [0140]; i.e., objects are monitored based on their speed and such objects can be a person in an environment and an object such as a vehicle); based on the attribute associated with the environment, determining whether an anomaly is occurring in the environment (“In some embodiments, the monitoring component 204 can be configured to monitor the input data 105 as it is received in real-time in association with detecting defined events or conditions in the input data 105 that satisfy one or more defined incident risk criteria (e.g., defined in the incident assessment data 220) that amount to an incident or a potential incident... The incident risk criteria can also relate to detected presence of specific users or any user at or near an incident location. The incident risk criteria can also relate to one or more health status factors considered to be non-conforming or potentially non-conforming to acceptable or safe health states for a particular location or environment and/or individual. The incident risk criteria can also relate to human behavior or activity that may be considered to impose a risk of injury or harm to oneself or others, including physical and/or mental injury to individuals and/or property.” Palamadai, [0072]), wherein the anomaly includes a change in the speed of movement associated with the user (“A sudden, abrupt or violent change in velocity of the user while performing activities having a stipulated range of speed may be indicative of an adverse situation such as a physical attack, a car crash or other types of accidents including traffic and non-traffic accidents... For example, if the activities being undertaken by the user are within a range of speeds or have a range of changes in speeds, a speed or a speed change exceeding a safety threshold would be classified as an abrupt or violent change.” Millican, [0136]), a change in the speed of movement associated with the participant in the environment, or a change in the speed of movement associated with the object in the environment; and upon determining that the anomaly is a threat to the user, notifying the user of the threat (“The surveillance responses can also include sending notifications and alerts to UE 106 via the communication network 110 regarding detected incidents.” Palamadai, [0113], wherein the incidents involve anomalies including, for example, “An incident can also refer to any human behavior or activity that may be considered to impose a risk of injury or harm to oneself or others, including physical and/or mental injury to individuals and/or property.” Palamadai, [0073]). Therefore, combining Palamadai, Flick and Millican would meet the claim limitations for the same reasons as previously discussed in claim 1.
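Editor's note: claims 5, 12, and 18 turn on a compound trigger: a weather warning for the device's location combined with a sound of distress in the audio stream, escalating notification to a frequent contact rather than only the user. A minimal illustrative sketch of that control flow (not part of the Office Action; every function name and fixture value here is hypothetical):

    from dataclasses import dataclass

    # Hypothetical stand-ins for a weather-alert lookup and an audio
    # distress classifier; a real system would call external services.
    def weather_warning_exists(location: str) -> bool:
        active_warnings = {"Bellevue": True}          # hypothetical fixture
        return active_warnings.get(location, False)

    def indicates_distress(transcript: str) -> bool:
        distress_cues = ("help", "scream", "siren")   # hypothetical cues
        return any(cue in transcript.lower() for cue in distress_cues)

    @dataclass
    class Escalation:
        notify_user: bool
        notify_contact: str = ""   # empty when nobody else is notified

    def evaluate(location: str, transcript: str, frequent_contact: str) -> Escalation:
        # Claims 5/12/18: a weather warning AND a distress sound together
        # escalate the threat to the device's frequent contact.
        if weather_warning_exists(location) and indicates_distress(transcript):
            return Escalation(notify_user=True, notify_contact=frequent_contact)
        return Escalation(notify_user=False)

    print(evaluate("Bellevue", "a scream followed by calls for help", "next-of-kin"))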
Regarding claim 9, Palamadai, Flick and Millican disclose the method of claim 8, comprising: obtaining information associated with the user of the UE including age, gender, and a medical condition associated with the user; and upon determining the anomaly is occurring in the environment, determining whether the anomaly is the threat to the user based on the user’s age, gender, and the medical condition (“The incident risk criteria can also relate to one or more behaviors, activities and/or appearances of specific users or user profiles (or avatars) considered to be non-conforming or potentially non-conforming to acceptable user behavior or activity for a particular location or environment (e.g., based on user age, gender and other demographics, based on user history, based on user roles relative to a location/environment, based on user permission relative to a location/environment, etc.)” Palamadai, [0072]; “In another example, the monitoring component 204 can be configured to identify incidents or potential incidents based on a user health parameter received in the user activity/context data 101 satisfying an incident risk criterion (e.g., heart rate exceeding a threshold, blood pressure, temperature too high, etc.)” Palamadai, [0074]).

Regarding claim 10, Palamadai, Flick and Millican disclose the method of claim 8, comprising: obtaining a history of video recordings of the environment; determining whether a significant difference exists between the history of video recordings and the video recording, wherein the significant difference includes a weapon in the video recording, or an erratic person in the video recording; and upon determining that the significant difference exists between the history of video recordings and the video recording (“For example, in some embodiments, the surveillance system 120 can characterize some locations as historical incident locations based on known associations of the locations with incidents or a high risk of incidents in the past. For instance, certain environments may be known to have a higher rate or risk/rate of incidents based on known risks associated with the locations (e.g., attributed to the natural landscape, weather, activities performed at the locations, etc.) historical user activity at the locations, such as certain neighborhoods, buildings, event locations, crowded areas (e.g., airports, stadiums), certain natural environments (e.g., dangerous hiking trails, rapid waters, etc.) and so on. In some implementations of these embodiments, a triggering event or condition can include reception of any input data 105 and/or reception of specific events or conditions in the input data for a historical incident location. Information identifying and describing historical incident locations can be included in the incident assessment data 220.” Palamadai, [0078]; additionally, see Palamadai [0085]: “As noted above, incident criteria can relate to, but is not limited to: the behavior, activity and/or appearance (e.g., clothing/attire, objects held/carried, physical appearance, etc.) of a user in solitude; the behavior, activity and/or appearance of a group of users; identities and attributes of the individual or individual involved (e.g., known profiles and history of the individual/individuals, number of individuals, user demographics, user role relative to the environment, user permissions relative to the environment, etc.); other people (not involved in the incident) included in the environment or location (e.g., number of other people and identities and attributes of the other people); and other contextual factors associated with the incident and/or environment (e.g., regarding environmental conditions associated with the environment, objects in the environment, relative positions of the objects, time of day/year, events, weather, etc.)”), notifying the user of the threat (“The surveillance responses can also include sending notifications and alerts to UE 106 via the communication network 110 regarding detected incidents.” Palamadai, [0113]).

Regarding claim 11, Palamadai, Flick and Millican disclose the method of claim 8, comprising: obtaining a location associated with the UE, and a crime report associated with the location (“In one or more embodiments, the surveillance system can identify incident locations or potential incident location based on analysis of received user activity data reported via respective security monitoring devices and defined information (e.g., rules, algorithms, models, etc.) correlating certain user behaviors, activities and/or attributes to incidents. Additionally, or alternatively, the surveillance system can identify incident locations and/or potential incident locations based on user provided incident reports... The incident types or categories can include a wide range of different types of incidents depending on the types of incidents the surveillance system is adapted to monitor and evaluate, including various types of criminal activity and non-criminal activity” Palamadai, [0017]); obtaining, from the crime report, demographic information associated with a victim; obtaining demographic information associated with the user; determining whether the demographic information associated with the user matches the demographic information associated with the victim (“The surveillance system can further evaluate and characterize incident locations and/or corresponding incidents associated with the locations based on respective types of the incidents (i.e., the particular user activity of behavior corresponding to the incident) and various other factors that influence how to respond to the incidents in an optimal manner. For example, the various other factors can relate to (but are not limited to), the location of the incident, context of the incident, the individual or individuals involved (e.g., identities, known profiles and history of the individual/individuals, number of individuals, user demographics, etc.), the probability of escalation or occurrence of the incident, the amount and validity of received incident reports about the incident, and known and forecasted risk associated with the incident.” Palamadai, [0021]); and upon determining that the demographic information associated with the user matches the demographic information associated with the victim (“the surveillance system can employ ML and AI to determine the or facilitated determining the targeted surveillance protocols based on relevant information about the respective incidents, including the incident type, severity, the location, context of the individual, the individual or individuals involved (e.g., identities, known profiles and history of the individual/individuals, number of individuals, user demographics, etc.), the probability of escalation or occurrence of the incident, the amount and validity of received incident reports about the incident, and known and forecasted risks associated with the incident.” Palamadai, [0022]), notifying the user of the threat (“The surveillance responses can also include sending notifications and alerts to UE 106 via the communication network 110 regarding detected incidents.” Palamadai, [0113]).

Regarding claim 12, Palamadai, Flick and Millican disclose the method of claim 8, comprising: obtaining a location associated with the UE (“the respective security monitoring devices can be associated with IUDs and location information that identifies their respective locations (e.g., fixed or mobile) where their corresponding data was captured” Palamadai, [0066]); determining whether a weather warning associated with the location exists (“The computing systems can also include various other systems 114 accessible via the communication network 110 that can be used by the surveillance system 120 to gather relevant information related to monitoring, evaluating and responding to incidents, to communicate information with regarding detected incidents and potential incidents, and/or to interface with in association with performing responses to incidents (e.g., background checking systems, emergency services systems, health information systems, weather systems” Palamadai, [0034]); determining whether the audio recording indicates a sound of distress associated with the user (“Audio recordings captured in an environment can also be processed using automated audio recognition and interpretation technologies to determine words spoken (e.g., using voice-to-text and natural language processing (NLP)), and other attributes (e.g., tone of voice, volume, pace, slurring of words, etc.). Audio recordings comprising a totality of sounds from an environment can also be processed using automated audio recognition to characterize the context of the environment (e.g., number of people in the environment, type of activity occurring in the environment, etc.)” Palamadai, [0065]); upon determining that the weather warning exists and that the audio recording indicates the sound of distress associated with the user (“incident analysis component 302 can provide for determining relevant attributes associated with an incident or potential incidents based on the monitored input data 105 that can facilitate identifying incidents or potential incidents by the incident identification component 310 and determining optimal responses for responding to the incidents... other contextual factors associated with the incident and/or environment (e.g., regarding environmental conditions associated with the environment, objects in the environment, relative positions of the objects, time of day/year, events, weather, etc.)” Palamadai, [0083]), obtaining a frequent contact associated with the UE; and notifying the frequent contact of the threat to the user (“When the platform is in receipt of signals from the apparatus 100 signifying occurrence of an emergency situation, the platform may react or counteract by reporting to the police or other responsible authorities and/or to contact with a person or persons pre-registered by the account user. For example, the person may be a next-of-kin or a family member.” Millican, [0077]). Therefore, combining Palamadai, Flick and Millican would meet the claim limitations for the same reasons as previously discussed in claim 1.

Regarding claim 14, Palamadai, Flick and Millican disclose a system comprising: at least one hardware processor; and at least one non-transitory memory storing instructions, which, when executed by the at least one hardware processor, cause the system to (non-transitory machine-readable medium and processor, Palamadai [0031]): obtain streaming information associated with an environment surrounding a UE (“a surveillance system associated with a communication network that monitors user activity and behavior in real-world environments based on user activity information captured and reported to the surveillance system via the communication network by various devices and systems connected to the communication network... In various embodiments, the devices and/or systems can include security monitoring devices deployed throughout the real-world environments that capture image data (e.g., video and/or still images), audio data, motion data, and other sensory data regarding user behavior and activity associated with the environments” Palamadai, [0015]; “The devices can include various physical communication devices distributed throughout a real-world environment 102 capable of being monitored by the system 100, including network equipment (NE 104), user equipment (UE 106) and IoT devices 108.” Palamadai, [0033]; i.e., UE includes that of a mobile device as described in [0038]) operating on a wireless telecommunication network (“The communication network 110 can comprise but is not limited to, one or more wired and wireless networks, including, but not limited to, a cellular or mobile network... Fifth Generation (5G) networks” Palamadai, [0044], [0045]), wherein the streaming information includes an audio recording of the environment and a video recording of the environment (“In various embodiments, the devices and/or systems can include security monitoring devices deployed throughout the real-world environments that capture image data (e.g., video and/or still images), audio data, motion data, and other sensory data regarding user behavior and activity associated with the environments” Palamadai, [0015]), wherein the streaming information is delivered over the wireless telecommunication network (“the communication network 110 can be associated with a single network provider, multiple network providers, and/or encompass a variety of different type of wired and wireless communication technologies (e.g., 3GGP, WiFi, LTE, satellite, 5G, etc.)” Palamadai, [0045]); based on the streaming information associated with the environment, identify a participant in the environment or an object in the environment (“The user activity information can include any information that identifies, describes or indicates information regarding the identity, appearance, behavior and/or activity of a person or group of people relative to an environment or location.” Palamadai, [0015]; “any IoT device comprising one or more sensors that can capture information about their environment or location, including information pertaining people, objects and conditions associated with their respective environment.” Palamadai, [0042]; additionally, see Palamadai, [0065]: “For example, video data capture of a person or group of people in the real-world environment 102 can be analyzed using 2D and/or 3D image analysis to and pattern recognition to correlate physical movements of the person and/or people to defined physical behaviors and/or actions and activities, combined with object recognition to identify and characterize objects and other entities (e.g., other people, buildings, etc.) in the environment.”); monitor an attribute associated with the environment including at least one of: a speed of movement associated with a user (“The wearable devices can also provide user activity/context information 101 related to the user's physical movement patterns, gestures, behaviors and physical activities based on data capture via wearable movement/motion sensors including fine-tuned accelerometers and gyroscopes combined with pattern. For example, the wearable devices can capture motion identifying acceleration, rotation/orientation, and/or velocity of the wearable device itself, facilitating determination of motion and movement data of the body and/or body parts to which the motion sensors are attached.” Palamadai, [0069]), a speed of movement associated with the participant in the environment, and a speed of movement associated with the object in the environment (“Additionally and/or alternatively, identities, and in particular object types could be identified through an analysis of movement. For example movement of [automated guided vehicles] AGVs will typically tend to follow predetermined patterns and/or have typically characteristics such as constant speed and/or direction changes. In contrast to this movement of individuals will tend to be more haphazard and subject to changes in direction and/or speed allowing, AGVs and humans to be distinguished based on an analysis of movement patterns.” Flick, [0140]; i.e., objects are monitored based on their speed and such objects can be a person in an environment and an object such as a vehicle); based on the attribute associated with the environment, determine whether an anomaly is occurring in the environment (“In some embodiments, the monitoring component 204 can be configured to monitor the input data 105 as it is received in real-time in association with detecting defined events or conditions in the input data 105 that satisfy one or more defined incident risk criteria (e.g., defined in the incident assessment data 220) that amount to an incident or a potential incident... The incident risk criteria can also relate to detected presence of specific users or any user at or near an incident location. The incident risk criteria can also relate to one or more health status factors considered to be non-conforming or potentially non-conforming to acceptable or safe health states for a particular location or environment and/or individual. The incident risk criteria can also relate to human behavior or activity that may be considered to impose a risk of injury or harm to oneself or others, including physical and/or mental injury to individuals and/or property.” Palamadai, [0072]), wherein the anomaly includes a change in the speed of movement associated with the user (“A sudden, abrupt or violent change in velocity of the user while performing activities having a stipulated range of speed may be indicative of an adverse situation such as a physical attack, a car crash or other types of accidents including traffic and non-traffic accidents... For example, if the activities being undertaken by the user are within a range of speeds or have a range of changes in speeds, a speed or a speed change exceeding a safety threshold would be classified as an abrupt or violent change.” Millican, [0136]), a change in the speed of movement associated with the participant in the environment, or a change in the speed of movement associated with the object in the environment; and upon determining that the anomaly is a threat to the user, notify the user of the threat (“The surveillance responses can also include sending notifications and alerts to UE 106 via the communication network 110 regarding detected incidents.” Palamadai, [0113], wherein the incidents involve anomalies including, for example, “An incident can also refer to any human behavior or activity that may be considered to impose a risk of injury or harm to oneself or others, including physical and/or mental injury to individuals and/or property.” Palamadai, [0073]). Therefore, combining Palamadai, Flick and Millican would meet the claim limitations for the same reasons as previously discussed in claim 1.
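Editor's note: claims 2, 11, and 17 add a demographic-matching gate: pull victim demographics from a crime report for the device's location, compare them with the user's profile, and notify on a match. A minimal illustrative sketch (not part of the Office Action; the field names and the shared-key comparison are assumptions, since the claims do not specify a matching rule):

    # Hypothetical records; a deployed system would pull these from a crime
    # report feed and the user's profile, per the claim 2/11/17 limitations.
    victim_demographics = {"age_range": "20-29", "gender": "F"}
    user_demographics = {"age_range": "20-29", "gender": "F"}

    def demographics_match(user: dict, victim: dict) -> bool:
        # The claims only require a match determination; matching on all
        # shared keys is one simple reading of that step.
        shared = user.keys() & victim.keys()
        return bool(shared) and all(user[k] == victim[k] for k in shared)

    if demographics_match(user_demographics, victim_demographics):
        print("user matches victim profile from crime report - notify of threat")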
Regarding claim 15, Palamadai, Flick and Millican disclose the system of claim 14, comprising instructions to: obtain information associated with the user of the UE including age, gender, and a medical condition associated with the user; and upon determining the anomaly is occurring in the environment, determine whether the anomaly is the threat to the user based on the user’s age, gender, and the medical condition (“The incident risk criteria can also relate to one or more behaviors, activities and/or appearances of specific users or user profiles (or avatars) considered to be non-conforming or potentially non-conforming to acceptable user behavior or activity for a particular location or environment (e.g., based on user age, gender and other demographics, based on user history, based on user roles relative to a location/environment, based on user permission relative to a location/environment, etc.)” Palamadai, [0072]; “In another example, the monitoring component 204 can be configured to identify incidents or potential incidents based on a user health parameter received in the user activity/context data 101 satisfying an incident risk criterion (e.g., heart rate exceeding a threshold, blood pressure, temperature too high, etc.)” Palamadai, [0074]). Regarding claim 16, Palamadai, Flick and Millican disclose the system of claim 14, comprising instructions to: obtain a history of video recordings of the environment; determine whether a significant difference exists between the history of video recordings and the video recording, wherein the significant difference includes a weapon in the video recording, or an erratic person in the video recording; and upon determining that the significant difference exists between the history of video recordings and the video recording (“For example, in some embodiments, the surveillance system 120 can characterize some locations as historical incident locations based on known associations of the locations with incidents or a high risk of incidents in the past. For instance, certain environments may be known to have a higher rate or risk/rate of incidents based on known risks associated with the locations (e.g., attributed to the natural landscape, weather, activities performed at the locations, etc.) historical user activity at the locations, such as certain neighborhoods, buildings, event locations, crowded areas (e.g., airports, stadiums), certain natural environments (e.g., dangerous hiking trails, rapid waters, etc.) and so on. In some implementations of these embodiments, a triggering event or condition can include reception of any input data 105 and/or reception of specific events or conditions in the input data for a historical incident location. Information identifying and describing historical incident locations can be included in the incident assessment data 220.” Palamadai, [0078]; Additionally, see Palamadai [0085] “As noted above, incident criteria can relate to, but is not limited to: the behavior, activity and/or appearance (e.g., clothing/attire, objects held/carried, physical appearance, etc.) 
of a user in solitude; the behavior, activity and/or appearance of a group of users; identities and attributes of the individual or individual involved (e.g., known profiles and history of the individual/individuals, number of individuals, user demographics, user role relative to the environment, user permissions relative to the environment, etc.); other people (not involved in the incident) included in the environment or location (e.g., number of other people and identities and attributes of the other people); and other contextual factors associated with the incident and/or environment (e.g., regarding environmental conditions associated with the environment, objects in the environment, relative positions of the objects, time of day/year, events, weather, etc.)”), notify the user of the threat (“The surveillance responses can also include sending notifications and alerts to UE 106 via the communication network 110 regarding detected incidents.” Palamadai, [0113]). Regarding claim 17, Palamadai, Flick and Millican disclose the system of claim 14, comprising instructions to: obtain a location associated with the UE, and a crime report associated with the location (“In one or more embodiments, the surveillance system can identify incident locations or potential incident location based on analysis of received user activity data reported via respective security monitoring devices and defined information (e.g., rules, algorithms, models, etc.) correlating certain user behaviors, activities and/or attributes to incidents. Additionally, or alternatively, the surveillance system can identify incident locations and/or potential incident locations based on user provided incident reports... The incident types or categories can include a wide range of different types of incidents depending on the types of incidents the surveillance system is adapted to monitor and evaluate, including various types of criminal activity and non-criminal activity” Palamadai, [0017]); obtain, from the crime report, demographic information associated with a victim; obtain demographic information associated with the user; determine whether the demographic information associated with the user matches the demographic information associated with the victim (“The surveillance system can further evaluate and characterize incident locations and/or corresponding incidents associated with the locations based on respective types of the incidents (i.e., the particular user activity of behavior corresponding to the incident) and various other factors that influence how to respond to the incidents in an optimal manner. 
Regarding claim 17, Palamadai, Flick and Millican disclose the system of claim 14, comprising instructions to: obtain a location associated with the UE, and a crime report associated with the location ("In one or more embodiments, the surveillance system can identify incident locations or potential incident location based on analysis of received user activity data reported via respective security monitoring devices and defined information (e.g., rules, algorithms, models, etc.) correlating certain user behaviors, activities and/or attributes to incidents. Additionally, or alternatively, the surveillance system can identify incident locations and/or potential incident locations based on user provided incident reports... The incident types or categories can include a wide range of different types of incidents depending on the types of incidents the surveillance system is adapted to monitor and evaluate, including various types of criminal activity and non-criminal activity" Palamadai, [0017]); obtain, from the crime report, demographic information associated with a victim; obtain demographic information associated with the user; determine whether the demographic information associated with the user matches the demographic information associated with the victim ("The surveillance system can further evaluate and characterize incident locations and/or corresponding incidents associated with the locations based on respective types of the incidents (i.e., the particular user activity of behavior corresponding to the incident) and various other factors that influence how to respond to the incidents in an optimal manner. For example, the various other factors can relate to (but are not limited to), the location of the incident, context of the incident, the individual or individuals involved (e.g., identities, known profiles and history of the individual/individuals, number of individuals, user demographics, etc.), the probability of escalation or occurrence of the incident, the amount and validity of received incident reports about the incident, and known and forecasted risk associated with the incident." Palamadai, [0021]); and upon determining that the demographic information associated with the user matches the demographic information associated with the victim ("the surveillance system can employ ML and AI to determine the or facilitated determining the targeted surveillance protocols based on relevant information about the respective incidents, including the incident type, severity, the location, context of the individual, the individual or individuals involved (e.g., identities, known profiles and history of the individual/individuals, number of individuals, user demographics, etc.), the probability of escalation or occurrence of the incident, the amount and validity of received incident reports about the incident, and known and forecasted risks associated with the incident." Palamadai, [0022]), notify the user of the threat ("The surveillance responses can also include sending notifications and alerts to UE 106 via the communication network 110 regarding detected incidents." Palamadai, [0113]).
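The claim 17 logic is a field-by-field comparison between the user's demographics and a victim's demographics extracted from a crime report for the same location, with notification on a match. Sketched below; the record shapes and keys are illustrative assumptions:

```python
# Hypothetical demographic-matching step from claim 17.
def demographics_match(user: dict, victim: dict,
                       keys: tuple = ("age_band", "gender")) -> bool:
    """True when the user matches the crime-report victim on every key."""
    return all(user.get(k) == victim.get(k) for k in keys)

user = {"age_band": "18-25", "gender": "F"}
victim = {"age_band": "18-25", "gender": "F"}  # parsed from a crime report
if demographics_match(user, victim):
    print("Notify user: a local crime report involves a similar victim profile")
```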
Regarding claim 18, Palamadai, Flick and Millican disclose the system of claim 14, comprising instructions to: obtain a location associated with the UE ("the respective security monitoring devices can be associated with IUDs and location information that identifies their respective locations (e.g., fixed or mobile) where their corresponding data was captured" Palamadai, [0066]); determine whether a weather warning associated with the location exists ("The computing systems can also include various other systems 114 accessible via the communication network 110 that can be used by the surveillance system 120 to gather relevant information related to monitoring, evaluating and responding to incidents, to communicate information with regarding detected incidents and potential incidents, and/or to interface with in association with performing responses to incidents (e.g., background checking systems, emergency services systems, health information systems, weather systems" Palamadai, [0034]); determine whether the audio recording indicates a sound of distress associated with the user ("Audio recordings captured in an environment can also be processed using automated audio recognition and interpretation technologies to determine words spoken (e.g., using voice-to-text and natural language processing (NLP)), and other attributes (e.g., tone of voice, volume, pace, slurring of words, etc.). Audio recordings comprising a totality of sounds from an environment can also be processed using automated audio recognition to characterize the context of the environment (e.g., number of people in the environment, type of activity occurring in the environment, etc.)" Palamadai, [0065]); upon determining that the weather warning exists and that the audio recording indicates the sound of distress associated with the user ("incident analysis component 302 can provide for determining relevant attributes associated with an incident or potential incidents based on the monitored input data 105 that can facilitate identifying incidents or potential incidents by the incident identification component 310 and determining optimal responses for responding to the incidents... other contextual factors associated with the incident and/or environment (e.g., regarding environmental conditions associated with the environment, objects in the environment, relative positions of the objects, time of day/year, events, weather, etc.)" Palamadai, [0083]), obtain a frequent contact associated with the UE; and notify the frequent contact of the threat to the user ("When the platform is in receipt of signals from the apparatus 100 signifying occurrence of an emergency situation, the platform may react or counteract by reporting to the police or other responsible authorities and/or to contact with a person or persons pre-registered by the account user. For example, the person may be a next-of-kin or a family member." Millican, [0077]). Therefore, combining Palamadai, Flick and Millican would meet the claim limitations for the same reasons as previously discussed in claim 1.

Regarding claim 19, Palamadai, Flick and Millican disclose the system of claim 14, comprising instructions to: upon determining that the anomaly is the threat to the user, obtain an emergency contact associated with the user ("the user experiencing a sudden fall, or accident involving violent movement; the scene monitor 110 experiencing/sensing recognizably threatening images, such as a fire, a weapon, a known felon; the scene monitor 110 experiencing/sensing audible warnings in the environment, for example, a gunshot, a scream, a siren or key words spoken to trigger and emergency event" Millican, [0082]); and notify the emergency contact associated with the user of the threat to the user ("Any friends, family or emergency services (emergency contacts) selected by the user are then notified of an event" Millican, [0092]). Therefore, combining Palamadai, Flick and Millican would meet the claim limitations for the same reasons as previously discussed in claim 1.
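Claims 18 and 19 share a conjunctive-trigger pattern: notification fires only when every recited predicate holds (for claim 18, a weather warning exists and the audio indicates distress), and the recipient is a stored contact rather than the user alone. A compact sketch of that pattern, with hypothetical contact data:

```python
# Illustrative conjunctive trigger from claims 18/19; names are assumptions.
def evaluate_and_notify(weather_warning: bool, distress_detected: bool,
                        contacts: list[str]) -> list[str]:
    """Alert stored contacts only when both conditions are satisfied."""
    if not (weather_warning and distress_detected):
        return []  # no threat determination, so no notification
    for contact in contacts:
        print(f"Alert sent to {contact}: possible threat to user")
    return contacts

evaluate_and_notify(True, True, ["frequent_contact@example.com"])
```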
Claim(s) 7, 13 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Palamadai et al. in view of Flick et al. in view of Millican et al. and in further view of Sutherland (US 20220007164 A1).

Regarding claim 7, Palamadai, Flick and Millican disclose the non-transitory, computer-readable storage medium of claim 1, comprising instructions to: upon determining that the anomaly is the threat to the user, obtain a contact information associated with a second mobile device ("the user experiencing a sudden fall, or accident involving violent movement; the scene monitor 110 experiencing/sensing recognizably threatening images, such as a fire, a weapon, a known felon; the scene monitor 110 experiencing/sensing audible warnings in the environment, for example, a gunshot, a scream, a siren or key words spoken to trigger and emergency event" Millican, [0082]; "The platform may generate an automated distress report to an external destination by machine operated data communication upon detection of a safety-threatening adverse situation. The external destination may be a call center of law enforcement agencies, rescue operations and/or personal contacts of the user" Millican, [0120]). The combination of Palamadai, Flick and Millican as a whole does not expressly disclose wherein the second mobile device is associated with a second user, wherein the second user is physically proximate to the user, send an alert to the second mobile device. However, Sutherland in the same field of endeavor teaches wherein the second mobile device is associated with a second user, wherein the second user is physically proximate to the user, send an alert to the second mobile device ("the emergency alert server system can transmit the alert to other devices registered to the user within the proximity of the mobile device. If there is still no reply from the user, the emergency alert server system can transmit a message or alert to second user indicating that the second user may want to check on the first user." Sutherland, [0087]). Therefore, it would have been obvious to one of ordinary skill in the art to combine Palamadai, Flick, Millican and Sutherland before the effective filing date of the claimed invention. The motivation for this combination of references would have been to quickly and easily identify groups of people in the area such that these people may be quickly contacted in the event of an emergency (Sutherland, [0004]). This motivation for the combination of Palamadai, Flick, Millican and Sutherland is supported by KSR exemplary rationale (G): some teaching, suggestion, or motivation in the prior art that would have led one of ordinary skill to modify the prior art reference or to combine prior art reference teachings to arrive at the claimed invention. MPEP 2141 (III).

Regarding claim 13, Palamadai, Flick, Millican and Sutherland disclose the method of claim 8, comprising: upon determining that the anomaly is the threat to the user, obtaining a contact information associated with a second UE ("the user experiencing a sudden fall, or accident involving violent movement; the scene monitor 110 experiencing/sensing recognizably threatening images, such as a fire, a weapon, a known felon; the scene monitor 110 experiencing/sensing audible warnings in the environment, for example, a gunshot, a scream, a siren or key words spoken to trigger and emergency event" Millican, [0082]; "The platform may generate an automated distress report to an external destination by machine operated data communication upon detection of a safety-threatening adverse situation. The external destination may be a call center of law enforcement agencies, rescue operations and/or personal contacts of the user" Millican, [0120]), wherein the second UE is associated with a second user, wherein the second user is physically proximate to the user; and sending an alert to the second UE that the anomaly is the threat to the user ("the emergency alert server system can transmit the alert to other devices registered to the user within the proximity of the mobile device. If there is still no reply from the user, the emergency alert server system can transmit a message or alert to second user indicating that the second user may want to check on the first user." Sutherland, [0087]). Therefore, combining Palamadai, Flick, Millican and Sutherland would meet the claim limitations for the same reasons as previously discussed in claim 7.
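Claims 7 and 13 (and claim 20, addressed just below) add a proximity condition: the alert goes to a second device only when its user is physically near the first user. One plausible reading of the "physically proximate" test is a great-circle distance check between device coordinates; the radius and contact details below are illustrative assumptions, not drawn from the application or Sutherland:

```python
# Hypothetical proximity gate for alerting a second UE (claims 7/13/20).
import math

def within_radius(a: tuple[float, float], b: tuple[float, float],
                  km: float) -> bool:
    """Haversine check that two (lat, lon) points lie within `km` kilometers."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(h)) <= km  # Earth radius ~6371 km

user_loc = (47.6062, -122.3321)
second_ue = {"contact": "+1-555-0100", "loc": (47.6100, -122.3300)}
if within_radius(user_loc, second_ue["loc"], km=1.0):
    print(f"Alert {second_ue['contact']}: a nearby user may be facing a threat")
```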
Regarding claim 20, Palamadai, Flick, Millican and Sutherland disclose the system of claim 14, comprising instructions to: upon determining that the anomaly is the threat to the user, obtain a contact information associated with a second UE ("the user experiencing a sudden fall, or accident involving violent movement; the scene monitor 110 experiencing/sensing recognizably threatening images, such as a fire, a weapon, a known felon; the scene monitor 110 experiencing/sensing audible warnings in the environment, for example, a gunshot, a scream, a siren or key words spoken to trigger and emergency event" Millican, [0082]; "The platform may generate an automated distress report to an external destination by machine operated data communication upon detection of a safety-threatening adverse situation. The external destination may be a call center of law enforcement agencies, rescue operations and/or personal contacts of the user" Millican, [0120]), wherein the second UE is associated with a second user, wherein the second user is physically proximate to the user; and send an alert to the second UE that the anomaly is the threat to the user ("the emergency alert server system can transmit the alert to other devices registered to the user within the proximity of the mobile device. If there is still no reply from the user, the emergency alert server system can transmit a message or alert to second user indicating that the second user may want to check on the first user." Sutherland, [0087]). Therefore, combining Palamadai, Flick, Millican and Sutherland would meet the claim limitations for the same reasons as previously discussed in claim 7.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Younge (US 20240194046 A1) discloses a personal-assistance system for threat detection and convenience. Saxena et al. (US 20170127257 A1) discloses a personal safety system using an alert detection module to indicate an alert event and broadcast it to a plurality of devices in the wireless environment. Stivi et al. (US 20200175767 A1) discloses a system for dynamically identifying hazards, routing resources, and monitoring a location of a hazard and predicting its movement based on received information.

Inquiries

Any inquiry concerning this communication or earlier communications from the examiner should be directed to EMMANUEL SILVA-AVINA whose telephone number is (571) 270-0729. The examiner can normally be reached Monday - Friday, 11 AM - 8 PM EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Chineyere Wills-Burns, can be reached at (571) 272-9752. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/EMMANUEL SILVA-AVINA/
Examiner, Art Unit 2673

/CHINEYERE WILLS-BURNS/
Supervisory Patent Examiner, Art Unit 2673

Prosecution Timeline

Mar 01, 2024
Application Filed
Jan 09, 2026
Non-Final Rejection — §103
Apr 14, 2026
Examiner Interview Summary
Apr 14, 2026
Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597141
SYSTEM FOR OPTICAL DETERMINATION OF COEFFICIENT OF FRICTION FOR A SURFACE
2y 5m to grant Granted Apr 07, 2026
Patent 12591996
Visual Localization Method and Apparatus
2y 5m to grant Granted Mar 31, 2026
Patent 12586251
PATCH ZIPPERING FOR MESH COMPRESSION
2y 5m to grant Granted Mar 24, 2026
Patent 12586258
NON-ADVERSARIAL IMAGE GENERATION USING TRANSFER LEARNING
2y 5m to grant Granted Mar 24, 2026
Patent 12579679
System and Method for Identifying Feature in an Image of a Subject
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
82%
Grant Probability
86%
With Interview (+4.7%)
3y 1m
Median Time to Grant
Low
PTA Risk
Based on 66 resolved cases by this examiner. Grant probability derived from career allow rate.
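The displayed figures appear consistent with the interview lift being applied as a relative (multiplicative) adjustment to the base grant probability rather than an additive one; a quick check of that reading (an assumption, not a documented formula):

```python
# Assumed reconstruction of the projection math; not a documented formula.
base_grant_probability = 0.82   # career allow rate shown above
interview_lift = 0.047          # +4.7% lift among interviewed cases
with_interview = base_grant_probability * (1 + interview_lift)
print(f"{with_interview:.0%}")  # 86%, matching the "With Interview" figure
```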
