Prosecution Insights
Last updated: April 19, 2026
Application No. 18/571,153

Machine Learning Process Detection

Non-Final OA — §102/§103
Filed
Dec 15, 2023
Examiner
HARRIS, CHRISTOPHER C
Art Unit
2432
Tech Center
2400 — Computer Networks
Assignee
Hewlett-Packard Development Company, L.P.
OA Round
1 (Non-Final)
Grant Probability: 76% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 10m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 76% — above average (275 granted / 362 resolved; +18.0% vs TC avg)
Interview Lift: strong, +26.2% for resolved cases with interview
Typical Timeline: 2y 10m average prosecution; 21 applications currently pending
Career History: 383 total applications across all art units
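The headline numbers above are consistent with the raw counts. A quick sanity check, assuming the career allow rate is simply granted ÷ resolved and the "+18.0% vs TC avg" delta is measured against that rate:

```python
# Sanity-check the examiner statistics shown above from the raw counts.
granted, resolved = 275, 362

allow_rate = granted / resolved                  # career allow rate
print(f"Career allow rate: {allow_rate:.1%}")    # ≈ 76.0%

# The "+18.0% vs TC avg" delta implies a Tech Center baseline of:
tc_avg = allow_rate - 0.180
print(f"Implied TC average: {tc_avg:.1%}")       # ≈ 58.0%
```

So the displayed 76% is 275/362 rounded, and the Tech Center 2400 baseline implied by the delta is roughly 58%.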

Statute-Specific Performance

§101: 14.2% (-25.8% vs TC avg)
§103: 38.4% (-1.6% vs TC avg)
§102: 14.5% (-25.5% vs TC avg)
§112: 24.4% (-15.6% vs TC avg)
Tech Center averages are estimates • Based on career data from 362 resolved cases
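The per-statute figures appear to be allowance rates following each rejection type. Assuming each "vs TC avg" delta is the examiner's rate minus the Tech Center average, all four statutes imply the same baseline, suggesting a single estimate is used for the whole Tech Center. A small sketch:

```python
# Examiner allowance rate and stated delta vs the TC average, per statute,
# as listed above.
stats = {
    "§101": (0.142, -0.258),
    "§103": (0.384, -0.016),
    "§102": (0.145, -0.255),
    "§112": (0.244, -0.156),
}

# Recover the implied Tech Center average: rate - delta.
for statute, (rate, delta) in stats.items():
    print(f"{statute}: implied TC avg = {rate - delta:.1%}")
# Each line prints 40.0%, i.e. a single common ~40% baseline.
```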

Office Action

§102 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

DETAILED ACTION

Remarks

This action is in response to communications filed on 10/21/2025. Claims 10-12 are withdrawn per Applicant's request. Therefore, claims 1-9 and 13-15 are presently pending in the application and have been considered as follows.

Election/Restrictions

Applicant's arguments in the response dated 10/21/2025 have been found persuasive; as such, claims 1-15 will be examined on the merits.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 12/19/2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim 10 is rejected under 35 U.S.C. 102(a)(1)/(a)(2) as being anticipated by US 20100192222 to Stokes et al. (hereinafter “Stokes”).

Claim 10

Stokes teaches a non-transitory memory resource storing machine-readable instructions stored thereon that, when executed, cause a processor resource to [e.g. Stokes; Claim 20]:

train, using malicious training source code and non-malicious training source code, a classifier to determine whether a machine learning process running on a computing device is malicious; [e.g. Stokes; Para. 0022, 0024, 0025, 0029 – Stokes discloses training a two-class classifier (e.g. malicious, non-malicious) to determine if a machine learning process (e.g. software) is malicious.]

deploy the trained classifier on the computing device [e.g. Stokes; Para. 0026 – Stokes discloses downloading trained classifier weights (e.g. deploy, trained classifier) to the client computer] to:

determine a first code running on the computing device is malicious; [e.g. Stokes; Para. 0022, 0024, 0025, 0029 – Stokes discloses utilizing a two-class classifier (e.g. malicious, non-malicious) to determine malicious code (e.g. first code).]

determine a second code running on the computing device is not malicious; [e.g. Stokes; Para. 0022, 0024, 0025, 0029 – Stokes discloses utilizing a two-class classifier (e.g. malicious, non-malicious) to determine non-malicious code (e.g. second code).]

and report the first code as malicious and the second code as not malicious to the computing device. [e.g. Stokes; Para. 0002, 0022, 0024, 0025, 0029 – Stokes discloses providing indication (e.g. report) of maliciousness (e.g. malicious or non-malicious).]

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over US 20100192222 to Stokes et al. (hereinafter “Stokes”) in view of US 20220172111 to Gao et al. (hereinafter “Gao”).

Claim 11

While Stokes teaches the non-transitory memory resource of claim 10, Stokes fails to explicitly teach the following limitation; however, Gao teaches: wherein the processor resource is to embed the trained classifier into the computing device or a different computing device using an integrated development environment (IDE) package. [e.g. Gao; Para. 0023, 0057 – Gao discloses that the integrated machine learning environment may be a web-based integrated development environment (IDE) for machine learning that can be used to build, train, deploy, and analyze machine learning models.]

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the features above in the invention as disclosed by Stokes in order to provide customers a seamless mechanism for full control of machine learning algorithms in a fully integrated environment.

Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over US 20100192222 to Stokes et al. (hereinafter “Stokes”) in view of US 20220172111 to Gao et al. (hereinafter “Gao”) and further in view of US 10817604 to Kimball et al.
(hereinafter “Kimball”).

Claim 12

While Stokes and Gao teach the non-transitory memory resource of claim 10, they fail to explicitly teach the following limitation; however, Kimball teaches: wherein the processor resource is to rank items in the first code and the second code based on a determined relevance to the malicious determination, the non-malicious determination, or both. [e.g. Kimball; Abstract, Col 3 Ln 64 – Col 4 Ln 7, Col 9 Ln 37-46, Col 12 Ln 4-7, Col 14 Ln 21-29 – Kimball discloses methods and processes for distinguishing between malicious and non-malicious.]

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the features above in the invention as disclosed by Stokes and Gao, with the advantage of prioritizing the most severe threats for response.

Allowable Subject Matter

Claims 1-9 and 13-15 are allowed.

Examiner's Statement of Reasons for Allowance

The following is an examiner's statement of reasons for allowance: although the prior art of record (such as SETHUMADHAVAN et al. (US20160275289)) discloses devices, systems, apparatus, methods, products, media and other implementations, including a method that includes obtaining current hardware performance data, including hardware performance counter data, for a hardware device executing a first process associated with pre-recorded hardware performance data representative of the first process' normal behavior, and determining whether a malicious process is affecting performance of the first process based on a determination of an extent of deviation of the obtained current hardware performance data corresponding to the first process from the pre-recorded hardware performance data representative of the normal behavior of the first process
(Abstract), none of the prior art, alone or in combination, teaches:

Independent Claim 1: “…run a first process and collect a first subset of the machine learning dataset; run a second process and collect a second subset of the machine learning dataset; run a third process and collect a third subset of the machine learning dataset; and train and deploy a classifier using the machine learning dataset to determine whether a machine learning training process is running on a first computing device, and whether the machine learning training process is malicious or is not malicious...”

Independent Claim 13: “…run a first process and collect a first subset of the dataset; run a second process and collect a second subset of the dataset; and run a third process and collect a third subset of the dataset; train a first classifier using the dataset to determine whether a machine learning training process is running on a computing device; deploy the trained first classifier on the computing device; in response to determining the machine learning training process running on the computing device is malicious, deploy a second classifier trained using malicious training source code and non-malicious training source code, to scan code running on the computing device and determine what portion of the code is malicious...”

in view of the other limitations of claims 1 and 13. The dependent claims are allowed as they depend from an allowable independent claim.

The closest prior art made of record are:

SETHUMADHAVAN et al.
(US20160275289): Disclosed are devices, systems, apparatus, methods, products, media and other implementations, including a method that includes obtaining current hardware performance data, including hardware performance counter data, for a hardware device executing a first process associated with pre-recorded hardware performance data representative of the first process' normal behavior, and determining whether a malicious process is affecting performance of the first process based on a determination of an extent of deviation of the obtained current hardware performance data corresponding to the first process from the pre-recorded hardware performance data representative of the normal behavior of the first process. (Abstract)

SRIDHARA et al. (US 20140237595): The various aspects provide a system and methods implemented on the system for generating a behavior model on a server that includes features specific to a mobile computing device and the device's current state/configuration. In the various aspects, the mobile computing device may send information identifying itself, its features, and its current state to the server. In response, the server may generate a device-specific lean classifier model for the mobile computing device based on the device's information and state and may send the device-specific lean classifier model to the device for use in detecting malicious behavior. The various aspects may enhance overall security and performance on the mobile computing device by leveraging the superior computing power and resources of the server to generate a device-specific lean classifier model that enables the device to monitor features that are actually present on the device for malicious behavior.

Udupi Raghavendra et al. (US 20220114260): Aspects of the present invention disclose a method, computer program product, and system for detecting a malicious process by a selected instance of an anti-malware system.
The method includes one or more processors examining a process for indicators of compromise to the process. The method further includes one or more processors determining a categorization of the process based upon a result of the examination. In response to determining that the categorization of the process does not correspond to a known benevolent process and a known malicious process, the method further includes one or more processors executing the process in a secure enclave. The method further includes one or more processors collecting telemetry data from executing the process in the secure enclave. The method further includes one or more processors passing the collected telemetry data to a locally trained neural network system.

Ma et al. (US 20220083659): Systems and methods include determining a plurality of features associated with executable files, wherein the plurality of features are each based on static properties in a predefined structure of the executable files; obtaining training data that includes samples of benign executable files and malicious executable files; extracting the plurality of features from the training data; and utilizing the extracted plurality of features to train a machine learning model to detect malicious executable files.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRISTOPHER C HARRIS, whose telephone number is (571) 270-7841. The examiner can normally be reached Monday through Friday between 8:00 AM and 4:00 PM CST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jeffrey L Nickerson, can be reached at (469) 295-9235.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CHRISTOPHER C HARRIS/
Primary Examiner, Art Unit 2432

Prosecution Timeline

Dec 15, 2023 — Application Filed
Feb 02, 2026 — Non-Final Rejection, §102/§103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602467
In-memory scan for threat detection with binary instrumentation backed generic unpacking, decryption, and deobfuscation
2y 5m to grant • Granted Apr 14, 2026
Patent 12585746
AUTHENTICATION SYSTEM, USER DEVICE, AND KEY INFORMATION TRANSMISSION METHOD
2y 5m to grant • Granted Mar 24, 2026
Patent 12580915
SERVICE ACCESS METHOD AND APPARATUS
2y 5m to grant • Granted Mar 17, 2026
Patent 12572668
DATA SECURITY USING REQUEST-SUPPLIED KEYS
2y 5m to grant • Granted Mar 10, 2026
Patent 12561460
System And Method for Performing Security Analyses of Digital Assets
2y 5m to grant • Granted Feb 24, 2026
Study what changed in these applications to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 76%
With Interview: 99% (+26.2%)
Median Time to Grant: 2y 10m
PTA Risk: Low
Based on 362 resolved cases by this examiner. Grant probability derived from career allow rate.
