Prosecution Insights
Last updated: April 19, 2026
Application No. 15/948,931

IoT DEVICE SECURITY

Non-Final OA (§103)
Filed: Apr 09, 2018
Examiner: SAVENKOV, VADIM
Art Unit: 2432
Tech Center: 2400 — Computer Networks
Assignee: Palo Alto Networks Inc.
OA Round: 10 (Non-Final)

Grant Probability: 62% (Moderate)
Expected OA Rounds: 10-11
Time to Grant: 3y 3m
With Interview: 83%

Examiner Intelligence

Career Allow Rate: 62% (193 granted / 312 resolved; +3.9% vs TC avg)
Interview Lift: +20.8% (strong lift, based on resolved cases with interview)
Avg Prosecution: 3y 3m (typical timeline; 51 currently pending)
Total Applications: 363 (career history across all art units)
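The career allow rate above follows directly from the granted/resolved counts shown; a minimal sketch of that arithmetic:

```python
# Sketch: the career allow rate as a share of resolved cases.
# The counts (193 granted, 312 resolved) come from the page above.

def allow_rate(granted: int, resolved: int) -> float:
    """Percentage of resolved cases that were granted."""
    return 100.0 * granted / resolved

rate = allow_rate(193, 312)
print(f"Career allow rate: {rate:.1f}%")  # 61.9%, displayed rounded as 62%
```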

Statute-Specific Performance

§101: 10.0% (-30.0% vs TC avg)
§103: 50.8% (+10.8% vs TC avg)
§102: 10.3% (-29.7% vs TC avg)
§112: 17.0% (-23.0% vs TC avg)
Deltas are measured against a Tech Center average estimate • Based on career data from 312 resolved cases
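Each "vs TC avg" delta above is simply the examiner's per-statute rate minus the Tech Center average estimate; with the figures shown, every delta is consistent with a TC average of 40%. A minimal sketch (the 40% figure is inferred from the displayed deltas, not stated on the page):

```python
# Sketch: recomputing the "vs TC avg" deltas from the rates shown above.
examiner_rate = {"101": 10.0, "103": 50.8, "102": 10.3, "112": 17.0}
tc_average = 40.0  # inferred Tech Center average estimate (assumption)

for statute, rate in examiner_rate.items():
    delta = rate - tc_average
    print(f"§{statute}: {rate:.1f}% ({delta:+.1f}% vs TC avg)")
```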

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 9/30/2025 has been entered.

Information Disclosure Statement

The 6/25/2025 and 10/24/2025 IDS documents have been considered by the examiner.

Response to Amendment / Arguments

Regarding claims rejected under 35 USC 103: Applicant’s arguments, in view of the amended claim language, have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Epstein (US 2017/0262523 A1).

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: the “network administration engine; domain knowledge datastore; IoT device demographics generation engine; IoT personality datastore; personality classification engine; signal correlation engine; new personality discovery engine; personality aware enrichment engine; and offline modeling engine” in claims 12-22.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. For instance, paragraphs [0027]-[0030] of the specification describe the engines and datastores.

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-5, 10-16, 21-22, and 25 is/are rejected under 35 U.S.C. 103 as being unpatentable over Devi Reddy (US 2017/0118240 A1), hereinafter “Reddy,” in view of Epstein (US 2017/0262523 A1).

Regarding claim 1, Reddy discloses:

A method comprising: for a first device included in a plurality of Internet of Things (IoT) devices (e.g., FIG. 1 and [0032] of Reddy concerning client devices; as per [0132]-[0133] of Reddy, the client may be an IoT device):

identifying a context of the device in operation at least in part by analyzing packets sent to and from the device, determining a set of events associated with the device and occurring within a window, and aggregating a plurality of events within the window based on the identified context;

Refer to at least [0038]-[0056] and [0065]-[0066] of Reddy with respect to collecting raw data such as network traffic and DPI logs, aggregating the data, and timestamps and normalization associated with the context of the raw data.
performing common factor aggregation of enriched metadata derived from event parameters associated with the plurality of IoT devices to obtain aggregated metadata permutations, wherein the first device has at least a first behavior aggregation factor and a second behavior aggregation factor and wherein a second device included in the plurality of IoT devices has a third behavior aggregation factor that is different from the first and second behavior aggregation factors;

Refer to at least [0104], [0070]-[0071], [0105], and FIG. 7 of Reddy with respect to determining properties of identified entities and information such as the type and/or MAC address of the entity. Shared properties are identified and mapped. Refer to at least FIG. 6 and [0060]-[0067] of Reddy with respect to data normalization.

using machine learning to obtain domain knowledge, including knowledge regarding at least one bad IoT personality, from a network administration engine, wherein a given personality includes one or more modelled behavior patterns, and wherein the bad IoT personality includes at least one modelled behavior pattern indicative of undesired behavior;

Refer to at least [0091], [0096], and [0058] of Reddy with respect to obtaining third-party and/or administrator-provided information of known security threats. Refer to at least [0090] and [0113] of Reddy with respect to generating machine-learned models of the entities. Examples include behavior such as the range of IP addresses with which an entity communicates, as well as a learned type of the entity (user, device, role).

defining a personality, including data samples associated with the personality, using the aggregated metadata permutations obtained by performing the common factor aggregation, the domain knowledge, and prior personality data set feedback from a new personality profile discovery engine;

Refer to at least FIG. 5, FIG. 6, [0117], and [0113] of Reddy with respect to generating machine-learned models of the entities based on information from the previous steps.

classifying the personality using the data samples and IoT personality models, wherein the personality has a signal associated therewith; and correlating the signal to reach a verdict and, when the personality is a bad personality, providing bad personality feedback associated with the personality to the network administration engine.

Refer to at least 540-570 in FIG. 5 and [0098] of Reddy with respect to determining a threat score and a resultant remedial action, such as alerting a user. Refer to at least FIG. 6 and [0062] of Reddy with respect to feedback.

Reddy does not fully disclose:

wherein classifying the personality includes identifying, and providing as output, that the first device is of a specific first model, wherein other devices of the specific first model share a set of tag labels that describe particular behaviors taken by devices of the specific first model, and wherein a given tag included in the set of tag labels indicates at least one of: (1) a particular operating system that devices of the specific first model run, or (2) a particular home server location that devices of the specific first model connect to.

However, Reddy in view of Epstein discloses:

wherein classifying the personality includes identifying, and providing as output, that the first device is of a specific first model,

Refer to at least [0014], [0023], and [0032] of Epstein with respect to outputting a name for device identification based on device signatures. For example, “ACME TV XT430” in FIG. 2 of Epstein.

wherein other devices of the specific first model share a set of tag labels that describe particular behaviors taken by devices of the specific first model,

Refer to at least FIG. 2, [0017], and [0019] of Epstein with respect to clustering device signatures (signature data as in [0015] and [0028] of Epstein), where clusters are associated with devices having the same device model. For example, cluster 38(2) in FIG. 2 of Epstein is associated with the device model “ACME TV XT430.”

and wherein a given tag included in the set of tag labels indicates at least one of: (1) a particular operating system that devices of the specific first model run, or (2) a particular home server location that devices of the specific first model connect to.

Refer to at least [0017], [0019], and [0028] of Epstein with respect to an operating system attribute as part of the signature clustering.

The teachings of Epstein likewise concern detecting security threats in a network and classifying network devices, and are considered to be within the same field of endeavor and combinable as such. Therefore, it would have been obvious to one of ordinary skill in the art before the filing date of Applicant’s invention to modify the teachings of Reddy to further implement more specific device identification using device signature clustering as in Epstein for at least the purpose of improving classification accuracy and security enforcement (e.g., [0002] and [0013] of Epstein stating that “[f]or security and other reasons it is very useful to know the type, make and model of each connected device in the home in order to make appropriate decisions based on the device type, make and model”).

Regarding claim 2, Reddy-Epstein discloses: The method of claim 1, wherein the personality is built by mathematically modeling a behavior pattern using the event parameters. Refer to at least [0009] and [0030] of Reddy with respect to models of entity behavior.

Regarding claim 3, it is rejected for substantially the same reasons as claim 1 above (i.e., the citations concerning the raw data, normalization, relationships, and graph).
Regarding claim 4, it is rejected for substantially the same reasons as claims 1 and 3 above.

Regarding claim 5, Reddy-Epstein discloses: The method of claim 1, wherein the aggregated metadata permutations are aggregated over a data rollup window that varies based on the context of the IoT device. Refer to at least [0071]-[0072] and [0065]-[0066] of Reddy with respect to timeframes and timestamps associated with a context provided by device and network data.

Regarding claim 10, Reddy-Epstein discloses: The method of claim 1, comprising: computing a degree of risk of undesirable behavior; and generating a bad personality alert if the degree of risk of undesirable behavior exceeds an actionable intelligence threshold. Refer to at least [0031] of Reddy with respect to a threshold for a threat score.

Regarding claim 11, it is rejected for substantially the same reasons as claim 1 above (e.g., [0091], [0096], and [0058] of Reddy).

Regarding independent claim 12, it is substantially similar to claim 1 above, and is therefore likewise rejected (i.e., see the citations).

Regarding claims 13-16 and 21-22, they are substantially similar to claims 2-5 and 10-11 above, and are therefore likewise rejected.

Regarding claim 25, it is rejected for substantially the same reasons as claim 1 above (i.e., the citations and obviousness rationale; the cited portions of Epstein concerning the operating system as a signature attribute used for clustering).

Claim(s) 6-9 and 17-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Reddy-Epstein as applied to claims 1-5, 10-16, 21-22, and 25 above, and further in view of Sukhomlinov (US 2017/0180399 A1).

Regarding claim 6, Reddy-Epstein does not specify: further comprising: performing offline modeling using the data samples; and updating the IoT personality models with the offline modeling.
However, Reddy-Epstein in view of Sukhomlinov discloses: further comprising: performing offline modeling using the data samples; and updating the IoT personality models with the offline modeling. Refer to at least [0022], [0032], and [0056] of Sukhomlinov with respect to online or offline model creation and updating.

The teachings of Reddy-Epstein and Sukhomlinov concern machine learning and behavioral modeling, and are considered to be within the same field of endeavor and combinable as such. Therefore, it would have been obvious to one of ordinary skill in the art before the filing date of Applicant’s invention to modify the teachings of Reddy-Epstein to further include offline modeling because the substitution of one known element for another would have yielded predictable results to one of ordinary skill in the art at the time (e.g., as per the cited portions of Sukhomlinov concerning online and offline creation/updating).

Regarding claim 7, it is rejected for substantially the same reasons as claims 1 and 6 above (i.e., the citations and obviousness rationale).

Regarding claim 8, Reddy-Epstein-Sukhomlinov discloses: The method of claim 1, comprising: recognizing behavior patterns of the IoT device using either or both learned state-transition learning and deep learning. Refer to at least [0056] of Sukhomlinov with respect to model training and states. This claim would have been obvious for substantially the same reasons as claim 6 (i.e., substitution of known-in-the-art machine learning techniques and technologies).

Regarding claim 9, Reddy-Epstein-Sukhomlinov discloses: The method of claim 1, comprising: recognizing behavior patterns of the IoT device using either or both a neural network graph of past behavior patterns of the IoT device recognized using deep learning and a state transition graph of the past behavior patterns of the IoT device recognized using learned state-transition learning.
Refer to at least [0040]-[0041] and [0056] of Sukhomlinov with respect to neural networks and algorithms. This claim would have been obvious for substantially the same reasons as claim 6 (i.e., substitution of known-in-the-art machine learning techniques and technologies).

Regarding claims 17-20, they are substantially similar to claims 6-9 above, and are therefore likewise rejected.

Claim(s) 26 is/are rejected under 35 U.S.C. 103 as being unpatentable over Reddy-Epstein as applied to claims 1-5, 10-16, 21-22, and 25 above, and further in view of Lin (US 9,038,178 B1).

Regarding claim 26, Reddy-Epstein does not specify: wherein the first device is tagged with a tag label that indicates as an associated behavior that it connects to a home server in a particular location.

However, Reddy-Epstein in view of Lin discloses: wherein the first device is tagged with a tag label that indicates as an associated behavior that it connects to a home server in a particular location. Refer to at least FIG. 6, Col. 2, Ll. 65-Col. 3, Ll. 2, Col. 11, Ll. 25-35, and Col. 12, Ll. 19-41 of Lin with respect to logging device behaviors, including contacting an external server (e.g., malware phoning home to a C&C server). Behavior features include geolocation information of the external server (e.g., Col. 8, Ll. 25-29 of Lin).

The teachings of Reddy-Epstein and Lin concern machine learning and behavioral modeling, and are considered to be within the same field of endeavor and combinable as such. Therefore, it would have been obvious to one of ordinary skill in the art before the filing date of Applicant’s invention to modify the teachings of Reddy-Epstein to further include obtaining beaconing activities as part of behavior features for at least the purpose of better detecting APT attacks and infiltrations (e.g., Col. 2, Ll. 4-9 of Lin).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to VADIM SAVENKOV, whose telephone number is (571) 270-5751. The examiner can normally be reached 12 PM-8 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jeffrey L. Nickerson, can be reached at (469) 295-9235. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Jeffrey Nickerson/
Supervisory Patent Examiner, Art Unit 2432

/V.S/
Examiner, Art Unit 2432

Prosecution Timeline

Apr 09, 2018
Application Filed
Aug 28, 2020
Non-Final Rejection — §103
Oct 26, 2020
Interview Requested
Nov 23, 2020
Applicant Interview (Telephonic)
Dec 02, 2020
Examiner Interview Summary
Dec 22, 2020
Response Filed
Mar 26, 2021
Final Rejection — §103
May 17, 2021
Interview Requested
May 28, 2021
Applicant Interview (Telephonic)
Jun 01, 2021
Examiner Interview Summary
Jun 07, 2021
Request for Continued Examination
Jun 08, 2021
Response after Non-Final Action
Sep 30, 2021
Non-Final Rejection — §103
Dec 12, 2021
Interview Requested
Jan 03, 2022
Applicant Interview (Telephonic)
Jan 05, 2022
Response Filed
Mar 09, 2022
Examiner Interview Summary
May 07, 2022
Final Rejection — §103
May 29, 2022
Interview Requested
Jun 17, 2022
Applicant Interview (Telephonic)
Jun 17, 2022
Examiner Interview Summary
Jul 13, 2022
Response after Non-Final Action
Jul 31, 2022
Request for Continued Examination
Aug 03, 2022
Response after Non-Final Action
Jan 27, 2023
Non-Final Rejection — §103
Mar 31, 2023
Interview Requested
May 03, 2023
Applicant Interview (Telephonic)
May 05, 2023
Examiner Interview Summary
May 08, 2023
Response Filed
Aug 12, 2023
Non-Final Rejection — §103
Nov 03, 2023
Interview Requested
Nov 27, 2023
Applicant Interview (Telephonic)
Nov 27, 2023
Response Filed
Dec 02, 2023
Examiner Interview Summary
Mar 09, 2024
Final Rejection — §103
Jul 05, 2024
Interview Requested
Aug 13, 2024
Applicant Interview (Telephonic)
Aug 21, 2024
Request for Continued Examination
Aug 23, 2024
Examiner Interview Summary
Aug 25, 2024
Response after Non-Final Action
Nov 30, 2024
Non-Final Rejection — §103
Mar 04, 2025
Interview Requested
Apr 10, 2025
Response Filed
Jun 16, 2025
Final Rejection — §103
Sep 18, 2025
Interview Requested
Sep 24, 2025
Examiner Interview Summary
Sep 24, 2025
Applicant Interview (Telephonic)
Sep 30, 2025
Request for Continued Examination
Oct 05, 2025
Response after Non-Final Action
Feb 21, 2026
Non-Final Rejection — §103 (current)
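The "OA Round 10" figure matches a tally of the rejection events in the timeline above. A sketch of that tally, with the rejection dates transcribed from the timeline (the tuple representation is illustrative):

```python
# Sketch: counting Office Action rounds from the prosecution timeline above.
from datetime import date

# (date, type) for each rejection event listed in the timeline.
rejections = [
    (date(2020, 8, 28), "Non-Final"), (date(2021, 3, 26), "Final"),
    (date(2021, 9, 30), "Non-Final"), (date(2022, 5, 7), "Final"),
    (date(2023, 1, 27), "Non-Final"), (date(2023, 8, 12), "Non-Final"),
    (date(2024, 3, 9), "Final"),      (date(2024, 11, 30), "Non-Final"),
    (date(2025, 6, 16), "Final"),     (date(2026, 2, 21), "Non-Final"),
]

rounds = len(rejections)  # 10, matching the "OA Round 10" badge
# Pendency from the Apr 09, 2018 filing to the current (tenth) rejection.
span_years = (rejections[-1][0] - date(2018, 4, 9)).days / 365.25

print(f"{rounds} OA rounds over {span_years:.1f} years of pendency")
```

Note that this case's own pendency (nearly eight years) is far longer than the examiner's 3y 3m median time to grant shown elsewhere on the page.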

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602484
DOCKER IMAGE VULNERABILITY INSPECTION DEVICE AND METHOD FOR PERFORMING DOCKER FILE ANALYSIS
2y 5m to grant • Granted Apr 14, 2026
Patent 12585783
Graph-Based Approach Towards Hardware Trojan Vulnerability Analysis
2y 5m to grant • Granted Mar 24, 2026
Patent 12587520
PERSONALISED, SERVER-SPECIFIC AUTHENTICATION MECHANISM
2y 5m to grant • Granted Mar 24, 2026
Patent 12566872
DEVICE, METHOD, AND GRAPHICAL USER INTERFACE FOR ACCESSING AN APPLICATION IN A LOCKED DEVICE
2y 5m to grant • Granted Mar 03, 2026
Patent 12500778
SYSTEMS AND METHODS FOR MANAGING PUBLIC KEY INFRASTRUCTURE CERTIFICATES FOR COMPONENTS OF A NETWORK
2y 5m to grant • Granted Dec 16, 2025
Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 10-11
Grant Probability: 62%
With Interview: 83% (+20.8%)
Median Time to Grant: 3y 3m
PTA Risk: High
Based on 312 resolved cases by this examiner. Grant probability derived from career allow rate.
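The "With Interview" projection is consistent with adding the interview lift to the base grant probability and rounding; a sketch under that assumption (the cap at 100% is an illustrative safeguard, not stated on the page):

```python
# Sketch: the "With Interview" projection as base probability plus lift.
def with_interview(base_pct: float, lift_pts: float) -> float:
    """Base grant probability plus interview lift, capped at 100%."""
    return min(base_pct + lift_pts, 100.0)

projected = with_interview(62.0, 20.8)  # 82.8, displayed rounded as 83%
print(f"With interview: {round(projected)}%")
```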
