Prosecution Insights
Last updated: April 19, 2026
Application No. 18/187,520

TELEMETRY DATA BASED COUNTERFEIT DEVICE DETECTION

Status: Non-Final OA (§103)
Filed: Mar 21, 2023
Examiner: SURVILLO, OLEG
Art Unit: 2457
Tech Center: 2400 (Computer Networks)
Assignee: Cisco Technology Inc.
OA Round: 3 (Non-Final)
Grant Probability: 72% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 4y 0m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 72% (above average; 405 granted / 561 resolved; +14.2% vs TC avg)
Interview Lift: +28.0% for resolved cases with an interview vs. without
Typical Timeline: 4y 0m average prosecution; 25 applications currently pending
Career History: 586 total applications across all art units

Statute-Specific Performance

§101: 10.5% (-29.5% vs TC avg)
§103: 46.2% (+6.2% vs TC avg)
§102: 16.0% (-24.0% vs TC avg)
§112: 22.5% (-17.5% vs TC avg)
Tech Center averages are estimates. Based on career data from 561 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on January 29, 2026 has been entered.

Response to Amendment

Claims 1-7 and 21-33 are pending in the application. Claims 1-2, 4-5, 7, 21-22, 24-25, 27-29, and 31-32 are currently amended. Claims 8-20 have been canceled. No new claims are currently added.

Response to Arguments

With regard to Applicant's remarks dated January 29, 2026:

Regarding the rejection of claims 4-5, 24-25, and 31-32 under 35 U.S.C. 112(b), Applicant's amendment has been fully considered and is sufficient. Therefore, the rejection has been withdrawn.

Regarding the rejection of claims 1-3, 21-23, and 28-30 under 35 U.S.C. 102 and claims 4-7, 24-27, and 31-33 under 35 U.S.C. 103, Applicant's amendment and arguments have been fully considered. Applicant argues that Bean fails to teach the newly added claimed features. The Examiner agrees to the extent that Bean fails to teach comparing at least one of densities or covariances of the test representative model and the corresponding representative model. Therefore, the rejection has been withdrawn. However, new grounds of rejection are made in view of the newly discovered references. Any arguments not specifically addressed are the same as those discussed above.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 21-23, and 28-30 are rejected under 35 U.S.C. 103 as being unpatentable over Bean et al. (US 2021/0216632 A1) in view of Sui et al. (US 2023/0076346 A1).

As to claim 1, Bean teaches a method, comprising: storing, in a device management system, (i) an identifier associated with a genuine device [trusted system 202 and its components] that has been authenticated as being genuine and (ii) a first sensor output generated by a first hardware sensor of the genuine device in response to a first sensor input [power trace data of the trusted system that is the output of the hardware sensors 204 in response to input vectors 114 is compiled into system templates 116] (par. [0022]); receiving, at a second hardware sensor of a test device [remote system 106 that is being tested], a second sensor input that corresponds to the first sensor input [challenge 206 that is analogous to input vectors 114 in that it is designed to elicit a particular response from the hardware sensors of the remote system] (par. [0018], [0023]-[0024]), the second hardware sensor corresponding to the first hardware sensor [trusted system 202 and the remote system 106 share hardware-based and software-based similarities; based on these similarities, if the remote system is uncontaminated, the remote system should then operate or behave identically to the trusted system, given any set of input vectors 114 or other stimuli such as challenges 206] (par. [0021]); identifying a second sensor output generated by the second hardware sensor based on the second sensor input [collecting real-time sensor response data 208 from the test system 106] (par. [0023]-[0024]); comparing a test representative model associated with the second sensor output [real-time remote system response data] with a corresponding representative model associated with the first sensor output [system templates corresponding to the issued challenges] [if the received sensor response data 208 matches or fits the corresponding system template 116, the processors 102 verify that the remote system is a trusted system] (par. [0024]); and determining, based at least in part on the comparing, that the test device includes a counterfeit component based at least in part on the second sensor output being different than the first sensor output [if discorrelations or deviations of the received sensor response data are detected, the processors determine that the remote system has an anomaly, which may be a hardware-based Trojan or other like malware element present or an element absent as compared to the verified system 202] (par. [0021], [0024], [0031]).

Bean fails to teach that the comparing comprises comparing at least one of densities or covariances of the test representative model and the corresponding representative model.

Sui is directed to a 2-dimensionality detection method for industrial control system attacks (abstract).
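The challenge/response scheme the examiner maps from Bean (issue a challenge, collect the test device's real-time sensor response, verify it fits the stored system template, and flag decorrelation as an anomaly) can be sketched as follows. This is a minimal illustration only, not code from any of the references; the Pearson-correlation fit metric, the 0.95 threshold, and all names (`fits_template`, the synthetic sine-wave "power trace") are assumptions of the sketch.

```python
import numpy as np

def fits_template(response: np.ndarray, template: np.ndarray,
                  threshold: float = 0.95) -> bool:
    """Return True if the measured sensor response fits the stored
    system template; decorrelation suggests an anomalous component."""
    r = np.corrcoef(response, template)[0, 1]
    return bool(r >= threshold)

rng = np.random.default_rng(0)
# Stored template: the genuine device's sensor trace for a given challenge.
template = np.sin(np.linspace(0, 8 * np.pi, 500))
# A genuine device reproduces the template up to small measurement noise;
# a device with a swapped component deviates strongly.
genuine = template + rng.normal(0.0, 0.05, 500)
suspect = template + rng.normal(0.0, 1.0, 500)

print(fits_template(genuine, template))   # high correlation, fits
print(fits_template(suspect, template))   # decorrelated, flagged
```

In Bean's framing the threshold choice governs the trade-off between missing a counterfeit component and false alarms on measurement noise.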
In particular, Sui teaches comparing at least one of densities or covariances of the test representative model and the corresponding representative model [function relationships of the system at this moment are compared with the function relationships recorded in the system health data model; the current type of probability density distribution is compared with the types of the probability density distribution stored in the health data model; covariances of the data of each sensor are counted and compared with the covariances stored in the health data model] (par. [0024]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method and system of Bean by comparing at least one of densities or covariances of the test representative model and the corresponding representative model in order to provide multiple levels of attack detection (par. [0022]-[0024] in Sui).

As to claim 2, Bean teaches that the second hardware sensor includes at least one of a voltage sensor detecting a voltage input, a current sensor detecting a current input, a temperature sensor detecting a temperature input, a fan speed sensor detecting a fan speed input, or a power sensor detecting a power input (par. [0018]), wherein the at least one of the voltage sensor, the current sensor, the temperature sensor, the fan speed sensor, or the power sensor are soldered on a printed circuit board (PCB) within the test device (par. [0017]-[0018], Fig. 1), and wherein the second sensor output data is at least one of an output of the voltage sensor, an output of the current sensor, an output of the temperature sensor, an output of the fan speed sensor, or an output of the power sensor, the telemetry data being used as input data for the ML model to determine whether the test device is counterfeit (par. [0018]).
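The covariance comparison the examiner maps from Sui's health-data model (covariances of each sensor's data compared against stored reference covariances) can be illustrated with a short sketch. The Frobenius-norm distance, the 3-sensor reference covariance, and all names (`covariance_deviation`, `model_cov`) are assumptions for illustration, not details from Sui.

```python
import numpy as np

def covariance_deviation(window: np.ndarray, model_cov: np.ndarray) -> float:
    """Frobenius-norm distance between the covariance of the current
    sensor window (rows = samples, columns = sensors) and the covariance
    stored in the reference/health model."""
    return float(np.linalg.norm(np.cov(window, rowvar=False) - model_cov, "fro"))

rng = np.random.default_rng(1)
# Reference covariance learned from telemetry of the genuine device.
model_cov = np.array([[1.0, 0.5, 0.2],
                      [0.5, 1.0, 0.3],
                      [0.2, 0.3, 1.0]])

normal_window = rng.multivariate_normal(np.zeros(3), model_cov, size=1000)
# A tampered device: sensors decorrelated, variance inflated.
tampered_window = rng.normal(0.0, 2.0, size=(1000, 3))

print(covariance_deviation(normal_window, model_cov))    # small (sampling noise)
print(covariance_deviation(tampered_window, model_cov))  # large (flagged)
```

A density check in Sui's style would sit alongside this as a second detection level, comparing the current distribution type against those stored in the health model.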
As to claim 3, Bean teaches that the second sensor output data is generated at run-time of the test device [remote system 106 must be functioning to produce a real-time response 208 to input vectors 114/challenge 206] (par. [0024]).

As to claim 21, Bean in view of Sui teaches a system, comprising: one or more processors; and one or more non-transitory computer-readable media storing computer-executable instructions that, when executed by the one or more processors (claim 1 of Bean), cause the one or more processors to perform operations comprising the method steps, as discussed per corresponding method claim 1 above.

As to claims 22-23, Bean teaches all the elements, as discussed per corresponding method claims 2-3 above.

As to claim 28, Bean in view of Sui teaches a distributed application system hosting an application service, the distributed application system comprising: one or more processors; and one or more non-transitory computer-readable media storing computer-executable instructions that, when executed by the one or more processors (claim 1 of Bean), cause the one or more processors to perform operations comprising the method steps, as discussed per corresponding method claim 1 above.

As to claims 29-30, Bean teaches all the elements, as discussed per corresponding method claims 2-3 above.

Claims 4-7, 24-27, and 31-33 are rejected under 35 U.S.C. 103 as being unpatentable over Bean et al. in view of Sui et al., and further in view of Yuan et al. (US 2023/0281186 A1).

As to claims 4, 24, and 31, Bean in view of Sui teaches all the elements except analyzing, by a specific model, the second sensor output to generate N-dimensional data, with N being greater than or equal to 2; converting the N-dimensional data to two-dimensional (2D) data; and outputting the test representative model as a scatter plot of the 2D data.

Yuan is directed to anomaly detection and correction for categorical sensor data (abstract).
In particular, Yuan teaches analyzing, by a specific model, a sensor output to generate N-dimensional data, with N being greater than or equal to 2; converting the N-dimensional data to two-dimensional (2D) data; and outputting a test representative model as a scatter plot of the 2D data [converting the categorical data to histograms] (Figs. 4-5, par. [0030]-[0031], [0053]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method and system of Bean in view of Sui by analyzing, by a specific model, the second sensor output to generate N-dimensional data, with N being greater than or equal to 2; converting the N-dimensional data to two-dimensional (2D) data; and outputting the test representative model as a scatter plot of the 2D data, in order to more efficiently detect anomalies in the sensor output data (par. [0031] in Yuan).

As to claims 5, 25, and 32, Bean in view of Sui teaches all the elements except that the specific model includes at least one of a uniform manifold approximation and projection (UMAP) model, a t-distributed stochastic neighbor embedding (t-SNE) model, a rank metrics model, an auto-encoding model, or a principal component analysis (PCA) model.

Yuan is directed to anomaly detection and correction for categorical sensor data (abstract). In particular, Yuan teaches that the specific model includes at least one of a uniform manifold approximation and projection (UMAP) model, a t-distributed stochastic neighbor embedding (t-SNE) model, a rank metrics model, an auto-encoding model, or a principal component analysis (PCA) model (par. [0029], [0036]-[0041]).
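Of the dimensionality-reduction models recited in claims 5, 25, and 32, PCA is the simplest to sketch: project N-dimensional telemetry onto its first two principal components to get the 2D data behind the scatter plot of claims 4, 24, and 31. The sketch below is illustrative only and assumes nothing about Yuan's implementation; `project_to_2d` and the 6-sensor example are invented names.

```python
import numpy as np

def project_to_2d(samples: np.ndarray) -> np.ndarray:
    """Project N-dimensional sensor readings (N >= 2) onto their first
    two principal components, yielding 2D points for a scatter plot."""
    centered = samples - samples.mean(axis=0)
    # SVD of the centered data; rows of vt are the principal axes,
    # ordered by explained variance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:2].T

rng = np.random.default_rng(2)
# e.g. 200 telemetry snapshots of 6 sensors (voltage, current,
# temperature, fan speed, power, ...): 6-D data reduced to 2-D.
readings = rng.normal(size=(200, 6))
points_2d = project_to_2d(readings)
print(points_2d.shape)  # one 2D point per snapshot
```

UMAP or t-SNE would replace the linear SVD step with a nonlinear embedding, at the cost of losing the closed-form projection.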
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method and system of Bean in view of Sui by having the specific model include at least one of a uniform manifold approximation and projection (UMAP) model, a t-distributed stochastic neighbor embedding (t-SNE) model, a rank metrics model, an auto-encoding model, or a principal component analysis (PCA) model in order to determine normal ranges and thresholds for anomalies (par. [0036] in Yuan).

As to claims 6, 26, and 33, Bean in view of Sui teaches all the elements except transmitting, to a computing device connected to the test device, an authentication response comprising an alert notification that the test device includes the counterfeit component. Yuan is directed to anomaly detection and correction for categorical sensor data (abstract). In particular, Yuan teaches transmitting, to a computing device connected to the test device, an authentication response comprising an alert notification that the test device includes the anomaly (par. [0032], [0052]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method and system of Bean in view of Sui by transmitting, to a computing device connected to the test device, an authentication response comprising an alert notification that the test device includes the counterfeit component, as detected by Bean, in order to allow a human operator to better identify the cause of the problem (par. [0052] in Yuan).

As to claims 7 and 27, Bean in view of Sui teaches all the elements except causing presentation of a test result notification by a display of an external device, the test result notification including an indication that the test device includes the counterfeit component. Yuan is directed to anomaly detection and correction for categorical sensor data (abstract).
In particular, Yuan teaches causing presentation of a test result notification by a display of an external device, the test result notification including an indication that the test device includes a counterfeit component (par. [0032], [0052]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method and system of Bean in view of Sui by causing presentation of a test result notification by a display of an external device, the test result notification including an indication that the test device includes the counterfeit component, as detected by Bean, in order to allow a human operator to better identify the cause of the problem (par. [0052] in Yuan).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to OLEG SURVILLO, whose telephone number is (571) 272-9691. The examiner can normally be reached 9:00am - 5:00pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ario Etienne, can be reached at 571-272-4001. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/OLEG SURVILLO/
Primary Examiner, Art Unit 2457

Prosecution Timeline

Mar 21, 2023: Application Filed
Jun 09, 2025: Non-Final Rejection (§103)
Jun 13, 2025: Interview Requested
Jun 26, 2025: Applicant Interview (Telephonic)
Jun 27, 2025: Examiner Interview Summary
Jun 27, 2025: Response Filed
Oct 04, 2025: Final Rejection (§103)
Jan 29, 2026: Request for Continued Examination
Feb 01, 2026: Response after Non-Final Action
Feb 21, 2026: Non-Final Rejection (§103) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591647: Device Starting System and Method (2y 5m to grant; granted Mar 31, 2026)
Patent 12582871: Activity Tracking for Multiple Users on a Device (2y 5m to grant; granted Mar 24, 2026)
Patent 12572648: Computer-Implemented Automatic Security Methods and Systems (2y 5m to grant; granted Mar 10, 2026)
Patent 12574427: Audio Playing Method, Apparatus and Non-Transitory Computer-Readable Storage Medium (2y 5m to grant; granted Mar 10, 2026)
Patent 12574430: Distributed Extended Reality (XR) Computing Optimization at Client Device in Communication with Edge Node (2y 5m to grant; granted Mar 10, 2026)
Based on the examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 72%
Grant Probability with Interview: 99% (+28.0%)
Median Time to Grant: 4y 0m
PTA Risk: High
Based on 561 resolved cases by this examiner. Grant probability derived from career allow rate.
