Prosecution Insights
Last updated: April 18, 2026
Application No. 18/632,369

MOBILE DEVICE SECURITY PROFILING

Final Rejection §103
Filed: Apr 11, 2024
Examiner: GYORFI, THOMAS A
Art Unit: 2435
Tech Center: 2400 — Computer Networks
Assignee: Verizon Patent and Licensing Inc.
OA Round: 2 (Final)
Grant Probability: 75% (Favorable)
OA Rounds: 3-4
To Grant: 3y 6m
With Interview: 92%

Examiner Intelligence

Career Allow Rate: 75% (517 granted / 687 resolved; +17.3% vs TC avg) — above average
Interview Lift: +16.8% (resolved cases with interview)
Avg Prosecution: 3y 6m (typical timeline)
Career History: 707 total applications across all art units; 20 currently pending

Statute-Specific Performance

§101: 9.0% (-31.0% vs TC avg)
§103: 50.9% (+10.9% vs TC avg)
§102: 21.9% (-18.1% vs TC avg)
§112: 8.1% (-31.9% vs TC avg)
Deltas are measured against a Tech Center average estimate • Based on career data from 687 resolved cases
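The Tech Center baselines implied by the deltas above can be back-computed directly. Notably, all four statutes imply the same 40.0% baseline, which suggests the comparison uses a single overall TC average rather than per-statute averages (an inference from the displayed numbers, not tool output):

```python
# Back out the Tech Center baseline implied by each statute's delta.
# Rate and delta values are copied from the table above.
rates = {
    "§101": (9.0, -31.0),
    "§103": (50.9, +10.9),
    "§102": (21.9, -18.1),
    "§112": (8.1, -31.9),
}

implied_baseline = {s: round(rate - delta, 1) for s, (rate, delta) in rates.items()}
print(implied_baseline)
# {'§101': 40.0, '§103': 40.0, '§102': 40.0, '§112': 40.0}
```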

Office Action

§103
DETAILED ACTION

Claims 1-20 remain for examination. The amendment filed 11/10/25 amended claims 1, 5, 10, 11, 15, 17, 18, and 20.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant’s arguments, see page of the amendment filed 11/10/25, with respect to the rejection(s) of claims 1, 11, & 17 under 35 USC 102 in view of Gupta have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of the newly discovered reference to Salajegheh.

Claim Rejections - 35 USC § 103

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Gupta (U.S. Patent Publication 2016/0232353) in view of Salajegheh (U.S. Patent Publication 2016/0277435) in view of Mercado-Alcala (U.S. Patent 12,259,976).

Regarding claim 1: Gupta discloses a method, comprising: creating a plurality of device behavior security models (paragraph 0101: “Also in block 404, the processing core may use a full classifier model received from a network server to generate a lean classifier model or a family of lean classifier models of varying levels of complexity (or “leanness”)”); detecting a device connected to the communication network (e.g. paragraphs 0046-0047); identifying device profile information associated with the device (paragraph 0047: “The server, which may be a server in a communication network or a server accessible via the Internet, may be configured to receive information on various risk levels, capabilities, states, conditions, features, behaviors and corrective actions from a central database (e.g., the “cloud”) and/or from many computing devices, and use this information to generate a full classifier model (i.e., a data or behavior model)…”); selecting a device behavior security model from the plurality of available device behavior security models based upon the device profile information matching the device behavior security model more closely than one or more of the other device behavior security models (Ibid; see also paragraph 0102: “…the processing core may select the leanest classifier in the family of lean classifier models (i.e., the model based on the fewest number of different mobile device states, features, behaviors, or conditions) that has not yet been evaluated or applied by the mobile device. In an embodiment, this may be accomplished by the processing core selecting the first classifier model in an ordered list of classifier models”); transmitting the device behavior security model to the device (paragraphs 0047, Ibid; and paragraph 0048: “The server may then send the full or lean classifier models (i.e., information structures that include the finite state machine and/or family of boosted decision stumps, etc.) to the computing device”), wherein the device compares device operating activity to the device behavior security model to determine whether the device operating activity is within normal operating behavior thresholds of the device behavior security model (paragraphs 0104 and 0112); and executing a remedial action based upon the device operating activity not being within the normal operating behavior thresholds (paragraph 0114).

Gupta appears to be silent regarding the server [i.e. the “security profile system” of the claims] being the entity that creates the plurality of models and decides which one to transmit to the client device based on matching the device security profile. However, Salajegheh discloses a related invention for using device-specific security models that can be used to detect anomalous behavior in a particular device (Salajegheh, Abstract) wherein that invention, upon receiving device profile information, can in at least one embodiment choose the appropriate model based on said device profile information and transmit it to the device (Salajegheh, paragraph 0055: “The analyzer module may lookup a stored classifier model associated with the device from which the behavior information was obtained” and paragraph 0102: “In block 506, the smart device processor may identify one or more classifier models associated with the respective smart device. For example, the processor may execute a lookup on a local storage or may determine that the appropriate classifier model(s) was transmitted from an observer device or other control smart device along with the behavior information”). It would have been obvious prior to the effective filing date of the instant application for Gupta to have a plurality of device models in storage and select the appropriate one to transmit to a requesting device based on the device-specific information as taught by Salajegheh, as this was clearly a known option within the grasp of a person of ordinary skill in the art in order to achieve the predictable effect of providing a device-specific classification model to a requesting device.

Neither Gupta nor Salajegheh explicitly discloses wherein the models are generated based on a simulation of device operation within a communication network. However, Mercado-Alcala discloses a related invention for detecting malware on a device using a machine learning model (e.g. Abstract) where one can train the onboard machine learning model via the use of simulating operations on the device (col. 29, lines 34-57; see also col. 31, lines 20-30 regarding simulating normal behavior). It would have been obvious prior to the effective filing date of the instant application for Gupta to enable simulation functionality to train and refine the machine learning models used to detect malware in that invention, as the techniques disclosed by Mercado-Alcala were known to improve resilience to zero-day attacks relative to signature-based detection methods (Mercado-Alcala, Ibid; see also col. 3, lines 30-45).

Regarding claim 2: Gupta further discloses: receiving updated device profile information from the device in response to a trigger event occurring (paragraph 0028: “For instance, increasing the level of security/scrutiny may include performing at least one test to determine whether the device has a rootkit, performing at least one test to determine whether the device has a Trojan, performing at least one test to determine whether the device has undesirable software, and reviewing the history of events associated with the device to determine whether there is an indication of an undesirable event, the updating of a software, or the generation of an alert message… In an embodiment, the computing device may be configured to increase its security/scrutiny levels in response to receiving a notification of an increased security risk from another computing device” [emphasis Examiner’s]); selecting a different device behavior security model from currently available device behavior security models based upon the updated device profile information corresponding to the different device behavior security model (paragraph 0031: “In various embodiments, the computing device may be configured to determine the risk parameter value, threshold value, and/or time period based on a “distance.” This distance may be a physical distance between the current device and the device that detected an instance of non-benign behavior (i.e., the affected device). Alternatively or in addition, the distance may be an amount of time that has elapsed since an instance of non-benign behavior was detected, the number of times the computing device has been rebooted, the number of times a software application has been updated, differences between software versions, make/model/version/feature/hardware differences between the transmitting and receiving computing devices, etc.”); and transmitting the different device behavior security model to the device for replacing the device behavior security model (paragraph 0055: “The network server 116 may also send classification and modeling information to the mobile devices 102 to replace, update, create and/or maintain mobile device data/behavior models”).

Regarding claim 3: Gupta further discloses: monitoring for the trigger event as at least one of installation of a new application on the device, a firmware update, an operating system update, a location change, or an identified new user behavior (paragraph 0031, Ibid).

Regarding claim 4: Gupta further discloses: generating the device behavior security model to characterize input device operating activity as corresponding to normal device behavior and abnormal device behavior, wherein the device utilizes the device behavior security model to become self-aware of normal and abnormal device behavior (paragraph 0104: “For example, the processing core may determine whether these results may be used to classify a behavior as either malicious or benign with a high degree of confidence, and if not treat the behavior as suspicious”; see also paragraph 0118 regarding maintaining databases of normal operational behaviors).

Regarding claim 5: The rejection of claim 1 applies mutatis mutandis to claim 5, specifically noting that Gupta supports a plurality of devices, each of which can receive its own classifier model (e.g. paragraph 0055: “In an embodiment, the network server 116 may be configured to send data/behavior models to the mobile device 102, which may receive and use the data/behavior models to identify suspicious or performance-degrading mobile device behaviors, software applications, processes, etc. The network server 116 may also send classification and modeling information to the mobile devices 102 to replace, update, create and/or maintain mobile device data/behavior models.”)

Regarding claim 6: Gupta further discloses: receiving updated device profile information from the device, wherein the updated device profile information includes at least one of a device type, an operating system version, a firmware version, location information, installed applications, device activity, data streams, network messages, or classifications of normal or abnormal behavior detected by the device using the device behavior security model (e.g. paragraphs 0031, 0068, & 0093); and generating a new device behavior security model based upon the updated device profile information (Ibid; see also paragraph 0055).

Regarding claim 7: Gupta further discloses: executing the remedial action to at least one of blacklisting an application, block execution of the application, disconnect from the communication network, restart the device, or generate an alert (generation of an alert message at paragraph 0028).

Regarding claim 8: Gupta further discloses: training the device behavior security model to detect abnormal behavior corresponding to a security attack by devices, abnormal signals generated by the devices, abnormal messages exchanged by the devices with the communication network, or a denial of service attack (e.g. monitoring for unauthorized SMS messages at paragraphs 0004, 0035, 0051, 0059, 0062, 0088, etc.).

Regarding claim 9: Gupta further discloses wherein the device profile information includes at least one of a device type, an operating system version, a firmware version, location information, or installed applications (paragraphs 0030-0031).

Regarding claim 10: Gupta further discloses: representing the plurality of available device behavior security models as vectors (paragraphs 0006-0007, 0010-0011, 0040-0041, etc.); generating a device vector using the device profile information (Ibid); and comparing the device vector to the vectors to identify the device behavior security model (Ibid).

Regarding claims 11 and 17: The rejection of claim 1 applies mutatis mutandis to each of claims 11 and 17.

Regarding claims 12 and 18: The combination further discloses executing simulations of the device to generate normal behavior logs for training the plurality of device behavior security models for different applications and use cases (Mercado-Alcala, col. 29, lines 34-57; see also col. 31, lines 20-30 regarding simulating normal behavior).

Regarding claim 13: The combination further discloses wherein the operations further comprise: executing the simulations to take into account a device type, a device profile, and software running on the device (Mercado-Alcala, col. 4, lines 45-55; and col. 7, lines 55-61; see also Gupta at paragraphs 0030-0031).

Regarding claims 14 and 19: Gupta further discloses wherein the operations further comprise: comparing, by the device, system logs with the device behavior security model to determine a likelihood of abnormal behavior being exhibited by the device (paragraphs 0058 & 0085).

Regarding claims 15 and 20: Gupta further discloses wherein the operations further comprise: in response to receiving operational information from devices, performing model tuning for the available device behavior security models (e.g. paragraph 0049: “The computing device may also use the classifier models to generate even leaner classifier models locally in the computing device. To accomplish this, the computing device may prune or cull the robust family of boosted decision trees included in the classifier model received from the server to generate a leaner classifier model that includes a reduced number of boosted decision trees and/or evaluates a limited number of test conditions or features”).

Regarding claim 16: Gupta further discloses wherein the operations further comprise: generating a new device behavior security model based upon logs of device activity and clustering techniques (logs of device activity at paragraphs 0058 & 0085; clustering techniques at paragraph 0046: “Each computing device in the trusted network may be configured to perform collaborative learning operations that include sharing risk information, behavior vectors, classifier models, the results of the analysis operations, and other similar information with other computing devices in the trusted network.”).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Thomas A Gyorfi whose telephone number is (571)272-3849. The examiner can normally be reached 10:00am - 6:30pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amir Mehrmanesh, can be reached at 571-270-3351. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

THOMAS A. GYORFI
Examiner, Art Unit 2435

/THOMAS A GYORFI/
Examiner, Art Unit 2435
3/25/2026

/AMIR MEHRMANESH/
Supervisory Patent Examiner, Art Unit 2435
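The claim 10 mapping above characterizes model selection as encoding both the stored security models and the device profile as vectors and choosing the closest match. A minimal sketch of that kind of nearest-vector lookup, where the feature encoding and the cosine-similarity metric are assumptions for illustration, not taken from the application or the cited references:

```python
# Hypothetical sketch of vector-based model selection as discussed for
# claim 10: each stored behavior security model and the requesting
# device's profile are encoded as feature vectors, and the model whose
# vector most closely matches the device vector is selected.
# The features [os_version, firmware_version, app_count] and cosine
# similarity are illustrative assumptions only.
import math

def cosine(a, b):
    # Cosine similarity between two equal-length feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Illustrative vectors for the available device behavior security models.
model_vectors = {
    "smartphone_model": [14.0, 2.1, 80.0],
    "iot_sensor_model": [3.0, 1.0, 5.0],
    "tablet_model": [13.0, 2.0, 40.0],
}

def select_model(device_vector):
    """Return the model whose vector best matches the device profile."""
    return max(model_vectors, key=lambda m: cosine(model_vectors[m], device_vector))

print(select_model([14.1, 2.1, 75.0]))  # smartphone_model
```

Any real implementation would depend on the feature encoding and similarity measure actually disclosed in the application; the vectors here exist only to make the claimed comparison step concrete.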

Prosecution Timeline

Apr 11, 2024: Application Filed
Aug 08, 2025: Non-Final Rejection — §103
Nov 06, 2025: Examiner Interview Summary
Nov 06, 2025: Applicant Interview (Telephonic)
Nov 10, 2025: Response Filed
Mar 25, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12587557: DETECTION METHOD OF NETWORK ANOMALY AND ANOMALY DETECTION APPARATUS (granted Mar 24, 2026; 2y 5m to grant)
Patent 12579278: AD-HOC GRAPH PROCESSING FOR SECURITY EXPLAINABILITY (granted Mar 17, 2026; 2y 5m to grant)
Patent 12568101: NETWORK ANOMALY DETECTION (granted Mar 03, 2026; 2y 5m to grant)
Patent 12563032: CHAT-BOT ASSISTED AUTHENTICATION (granted Feb 24, 2026; 2y 5m to grant)
Patent 12556578: SYSTEM AND METHOD FOR DETERMINING AND PREVENTING MALFEASANT ACTIVITY IN A PRIVATE DISTRIBUTED NETWORK (granted Feb 17, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 75%
With Interview: 92% (+16.8%)
Median Time to Grant: 3y 6m
PTA Risk: Moderate
Based on 687 resolved cases by this examiner. Grant probability derived from career allow rate.
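The headline figures above are internally consistent and can be reproduced from the examiner's career data (a sketch of the apparent derivation; the tool's exact methodology is not disclosed):

```python
# Reproduce the projection figures from the examiner's career data shown
# above. This only checks that the displayed numbers are consistent with
# one another; the tool's actual formula is an assumption.
granted, resolved = 517, 687      # career grants / resolved cases
interview_lift = 16.8             # percentage points, from "Interview Lift"

grant_probability = 100 * granted / resolved         # ~75.3%
with_interview = grant_probability + interview_lift  # ~92.1%

print(round(grant_probability))  # 75
print(round(with_interview))     # 92
```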

Free tier: 3 strategy analyses per month