Prosecution Insights
Last updated: April 19, 2026
Application No. 17/823,555

SUPERVISED ANOMALY DETECTION IN FEDERATED LEARNING

Status: Final Rejection (§103)
Filed: Aug 31, 2022
Examiner: SHEHNI, GHAZAL B
Art Unit: 2499
Tech Center: 2400 — Computer Networks
Assignee: International Business Machines Corporation
OA Round: 2 (Final)

Grant Probability: 87% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 8m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 87% (932 granted / 1068 resolved), +29.3% vs TC avg (above average)
Interview Lift: +12.4% (moderate), measured over resolved cases with an interview
Avg Prosecution: 2y 8m (typical timeline); 27 applications currently pending
Total Applications: 1095 across all art units (career history)

Statute-Specific Performance

§101: 12.1% (-27.9% vs TC avg)
§103: 38.5% (-1.5% vs TC avg)
§102: 20.6% (-19.4% vs TC avg)
§112: 12.9% (-27.1% vs TC avg)
Tech Center averages are estimates • Based on career data from 1068 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. The following is a final office action in response to communications received 01/12/2026. Claims 1, 3-4, 7-8, 10-11, 14-15, 17-18, 20 have been amended. Claims 2, 9, 16 are cancelled. Therefore, claims 1, 3-8, 10-15, 17-20 are pending and addressed below.

Response to Amendment

Applicant’s amendments and response to the claims are sufficient to overcome the 35 USC 101 rejections set forth in the previous office action.

Response to Arguments

Applicant’s arguments filed 01/12/2026 have been fully considered but they are not persuasive. Applicant argues that (1) Briliauskas does not disclose…model updates sent from the respective clients. In response to argument (1), Examiner respectfully disagrees. Briliauskas discloses that the system receives an updated version of malware properties database or, at least, metadata and/or a hash of one or more newly-identified malicious files, from each of client devices…see col.15 lines 23-47. Examiner maintains that Briliauskas does disclose this limitation.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3-6, 8, 10-13, 15, 17-19 are rejected under 35 U.S.C. 
103 as being unpatentable over Briliauskas et al (Pat. No. US 11593485) in view of Wiebe et al (Pub. No. US 2018/0349605). As per claim 1, Briliauskas discloses a computer-implemented method for supervised anomaly detection in federated learning, the method comprising: generating, by a central server in a federated learning system a training dataset, the training dataset including malicious data samples and benign data samples, the malicious data samples generated using poisoning attacks (generating a predictive model for malware detection using federated learning…transmitting, to each of a plurality of remote devices, a copy of the predictive model, where the predictive model is configured to predict whether a file is malicious…generating a federated model by training the predictive model based on the model parameters received from each of the plurality of remote devices…labeling each file of a training data set as either malicious or clean…see col.1 lines 41-53, col.4 lines 44-46); training, by the central server, update-generating models on the malicious data samples and the benign data samples in the training dataset; generating, by the central server, benign model updates and malicious model updates, through training the update-generating models (…system receives metadata for the initial model trained by each of client devices…metadata may include, for example, an indication of a version (e.g., a version number) for the model trained by respective ones of client devices…the version of each model trained by client devices can then be compared to a current version of the model maintained by model generator (i.e., the initial model) to determine whether any of client devices had trained an old or out-of-date version of the initial model…system also receives an updated version of malware properties database or, at least, metadata and/or a hash of one or more newly-identified malicious files, from each of client devices…newly-identified malicious files may be files 
that were determined to be malicious on any of client devices…likewise, client devices may also send hashes and/or metadata of known clean files, to further improve malware properties database…see col.15 lines 23-47); and training, by the central server, an anomaly detector on the malicious model updates and the benign model updates (see col.17 line 49-col.18 line 20); deploying, by the server, the anomaly detector to the federated learning system, for supervised anomaly detection in the federated learning system (…transmit…malware detection model…to client device…see col.10 lines 63-65…generate a malware detection model via federated learning…see col.20 line 10…the malware detection model is a predictive model that classifies files…as either malicious or “clean”/non-malicious…the malware detection model is a supervised training model…see col.20 lines 34-38, col.23 lines 15-28). receiving, by the central server, model updates sent from respective clients in the federated learning system (…system also receives metadata for the initial model trained by each of client devices…metadata may include, for example, an indication of a version (e.g., a version number) for the model trained by respective ones of client devices…system receives an updated version of malware properties database or, at least, metadata and/or a hash of one or more newly-identified malicious files, from each of client devices…newly-identified malicious files may be files that were determined to be malicious on any of client devices…likewise, client devices may also send hashes and/or metadata of known clean files…col.15 lines 23-47); running, by the central server, the anomaly detector to classify malicious ones and benign ones in the model updates sent from the respective clients (see col.15 line 48-col.16 line 11); flagging, by the central server, the malicious ones in the model updates sent from the respective clients; and excluding, by the central server, the malicious ones from aggregating the 
model updates sent from the respective clients (…system may compare the version (e.g., version number) of the model trained by each of client devices (e.g., determined based on model metadata) to a current version of the model maintained by system (e.g., the initial instantiation of the model)…if the version of the model trained by each of client devices matches the current version of the model, then the model parameters received from each of client devices may be considered valid…if it is determined that one or more of the client devices had trained an out-of-date model, the parameters transmitted by the offending client device(s) may be considered invalid…the invalid parameters are flagged, ignored, or deleted…see col. 22 lines 38-51).

Briliauskas does not explicitly disclose the malicious data samples generated using poisoning attacks. However, Wiebe discloses the malicious data samples generated using poisoning attacks (…a poisoning attack seeks to control a classifier by introducing malicious training data into the training set so that the adversary can force an incorrect classification for a subset of test vectors…see par. 32). Therefore, one of ordinary skill in the art would have found it obvious before the effective filing date of the claimed invention to use Wiebe in Briliauskas to include the above limitations, because one of ordinary skill in the art would recognize that it would further prevent a powerful adversary from learning information contributed to the clustering algorithm, thereby further enhancing security…see Wiebe, par. 3-4. 
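The pipeline recited in claim 1 (the server synthesizes labeled benign and poisoned model updates, trains a supervised anomaly detector on them, then screens incoming client updates before aggregation) can be sketched as a toy example. This is illustrative only, not the applicant's or any cited reference's implementation: the Gaussian update distributions, the logistic-regression detector, and all names are assumptions.

```python
# Toy sketch of the claimed supervised anomaly-detection pipeline in
# federated learning. All distributions, names, and hyperparameters are
# illustrative assumptions, not taken from the application or prior art.
import numpy as np

rng = np.random.default_rng(0)
DIM = 16  # dimensionality of a flattened model update

def benign_update():
    # benign clients send small, similarly-directed updates
    return rng.normal(loc=0.1, scale=0.05, size=DIM)

def poisoned_update():
    # a poisoning attack pushes the model in an adversarial direction
    return rng.normal(loc=-1.0, scale=0.3, size=DIM)

# 1) server-side training set of labeled model updates
X = np.vstack([benign_update() for _ in range(100)] +
              [poisoned_update() for _ in range(100)])
y = np.array([0] * 100 + [1] * 100)  # 0 = benign, 1 = malicious

# 2) train a tiny logistic-regression anomaly detector by gradient descent
w, b = np.zeros(DIM), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted P(malicious)
    g = p - y                                 # logistic-loss gradient signal
    w -= 0.1 * (X.T @ g) / len(y)
    b -= 0.1 * g.mean()

def is_malicious(update):
    # positive decision score => classified as a malicious update
    return (update @ w + b) > 0.0

# 3) screen incoming client updates; aggregate only the benign ones
client_updates = [benign_update() for _ in range(8)] + [poisoned_update()]
kept = [u for u in client_updates if not is_malicious(u)]
aggregated = np.mean(kept, axis=0)  # federated averaging over kept updates
print(f"kept {len(kept)} of {len(client_updates)} updates")
```

The key distinction from the version-number check the examiner cites in Briliauskas is that the detector here is trained on the updates themselves, with the malicious training examples manufactured server-side by simulated poisoning.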
As per claim 8, Briliauskas discloses a computer program product for supervised anomaly detection, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by one or more processors, the program instructions executable to: generate, by a central server in a federated learning system a training dataset, the training dataset including malicious data samples and benign data samples, the malicious data samples generated using poisoning attacks (generating a predictive model for malware detection using federated learning…transmitting, to each of a plurality of remote devices, a copy of the predictive model, where the predictive model is configured to predict whether a file is malicious…generating a federated model by training the predictive model based on the model parameters received from each of the plurality of remote devices…labeling each file of a training data set as either malicious or clean…see col.1 lines 41-53, col.4 lines 44-46); train, by the central server, update-generating models on the malicious data samples and the benign data samples in the training dataset; generate, by the central server, benign model updates and malicious model updates, through training the update-generating models (…system receives metadata for the initial model trained by each of client devices…metadata may include, for example, an indication of a version (e.g., a version number) for the model trained by respective ones of client devices…the version of each model trained by client devices can then be compared to a current version of the model maintained by model generator (i.e., the initial model) to determine whether any of client devices had trained an old or out-of-date version of the initial model…system also receives an updated version of malware properties database or, at least, metadata and/or a hash of one or more newly-identified malicious files, from each of client 
devices…newly-identified malicious files may be files that were determined to be malicious on any of client devices…likewise, client devices may also send hashes and/or metadata of known clean files, to further improve malware properties database…see col.15 lines 23-47); and train, by the central server, an anomaly detector on the malicious model updates and the benign model updates (see col.17 line 49-col.18 line 20); deploying, by the server, the anomaly detector to the federated learning system, for supervised anomaly detection in the federated learning system (…transmit…malware detection model…to client device…see col.10 lines 63-65…generate a malware detection model via federated learning…see col.20 line 10…the malware detection model is a predictive model that classifies files…as either malicious or “clean”/non-malicious…the malware detection model is a supervised training model…see col.20 lines 34-38, col.23 lines 15-28). receive, by the central server, model updates sent from respective clients in the federated learning system (…system also receives metadata for the initial model trained by each of client devices…metadata may include, for example, an indication of a version (e.g., a version number) for the model trained by respective ones of client devices…system receives an updated version of malware properties database or, at least, metadata and/or a hash of one or more newly-identified malicious files, from each of client devices…newly-identified malicious files may be files that were determined to be malicious on any of client devices…likewise, client devices may also send hashes and/or metadata of known clean files…col.15 lines 23-47); run, by the central server, the anomaly detector to classify malicious ones and benign ones in the model updates sent from the respective clients (see col.15 line 48-col.16 line 11); flag, by the central server, the malicious ones in the model updates sent from the respective clients; and exclude, by the central server, 
the malicious ones from aggregating the model updates sent from the respective clients (…system may compare the version (e.g., version number) of the model trained by each of client devices (e.g., determined based on model metadata) to a current version of the model maintained by system (e.g., the initial instantiation of the model)…if the version of the model trained by each of client devices matches the current version of the model, then the model parameters received from each of client devices may be considered valid…if it is determined that one or more of the client devices had trained an out-of-date model, the parameters transmitted by the offending client device(s) may be considered invalid…the invalid parameters are flagged, ignored, or deleted…see col. 22 lines 38-51).

Briliauskas does not explicitly disclose the malicious data samples generated using poisoning attacks. However, Wiebe discloses the malicious data samples generated using poisoning attacks (…a poisoning attack seeks to control a classifier by introducing malicious training data into the training set so that the adversary can force an incorrect classification for a subset of test vectors…see par. 32). Therefore, one of ordinary skill in the art would have found it obvious before the effective filing date of the claimed invention to use Wiebe in Briliauskas to include the above limitations, because one of ordinary skill in the art would recognize that it would further prevent a powerful adversary from learning information contributed to the clustering algorithm, thereby further enhancing security…see Wiebe, par. 3-4. 
As per claim 15, Briliauskas discloses a computer system for supervised anomaly detection, the computer system comprising one or more processors, one or more computer readable tangible storage devices, and program instructions stored on at least one of the one or more computer readable tangible storage devices for execution by at least one of the one or more processors, the program instructions executable to: generate, by a central server in a federated learning system a training dataset, the training dataset including malicious data samples and benign data samples, the malicious data samples generated using poisoning attacks (generating a predictive model for malware detection using federated learning…transmitting, to each of a plurality of remote devices, a copy of the predictive model, where the predictive model is configured to predict whether a file is malicious…generating a federated model by training the predictive model based on the model parameters received from each of the plurality of remote devices…labeling each file of a training data set as either malicious or clean…see col.1 lines 41-53, col.4 lines 44-46); train, by the central server, update-generating models on the malicious data samples and the benign data samples in the training dataset; generate, by the central server, benign model updates and malicious model updates, through training the update-generating models (…system receives metadata for the initial model trained by each of client devices…metadata may include, for example, an indication of a version (e.g., a version number) for the model trained by respective ones of client devices…the version of each model trained by client devices can then be compared to a current version of the model maintained by model generator (i.e., the initial model) to determine whether any of client devices had trained an old or out-of-date version of the initial model…system also receives an updated version of malware properties database or, at least, metadata 
and/or a hash of one or more newly-identified malicious files, from each of client devices…newly-identified malicious files may be files that were determined to be malicious on any of client devices…likewise, client devices may also send hashes and/or metadata of known clean files, to further improve malware properties database…see col.15 lines 23-47); and train, by the central server, an anomaly detector on the malicious model updates and the benign model updates (see col.17 line 49-col.18 line 20); deploying, by the server, the anomaly detector to the federated learning system, for supervised anomaly detection in the federated learning system (…transmit…malware detection model…to client device…see col.10 lines 63-65…generate a malware detection model via federated learning…see col.20 line 10…the malware detection model is a predictive model that classifies files…as either malicious or “clean”/non-malicious…the malware detection model is a supervised training model…see col.20 lines 34-38, col.23 lines 15-28). 
receive, by the central server, model updates sent from respective clients in the federated learning system (…system also receives metadata for the initial model trained by each of client devices…metadata may include, for example, an indication of a version (e.g., a version number) for the model trained by respective ones of client devices…system receives an updated version of malware properties database or, at least, metadata and/or a hash of one or more newly-identified malicious files, from each of client devices…newly-identified malicious files may be files that were determined to be malicious on any of client devices…likewise, client devices may also send hashes and/or metadata of known clean files…col.15 lines 23-47); run, by the central server, the anomaly detector to classify malicious ones and benign ones in the model updates sent from the respective clients (see col.15 line 48-col.16 line 11); flag, by the central server, the malicious ones in the model updates sent from the respective clients; and exclude, by the central server, the malicious ones from aggregating the model updates sent from the respective clients (…system may compare the version (e.g., version number) of the model trained by each of client devices (e.g., determined based on model metadata) to a current version of the model maintained by system (e.g., the initial instantiation of the model)…if the version of the model trained by each of client devices matches the current version of the model, then the model parameters received from each of client devices may be considered valid…if it is determined that one or more of the client devices had trained an out-of-date model, the parameters transmitted by the offending client device(s) may be considered invalid…the invalid parameters are flagged, ignored, or deleted…see col. 22 lines 38-51). Briliauskas does not explicitly disclose the malicious data samples generated using poisoning attacks. 
However, Wiebe discloses the malicious data samples generated using poisoning attacks (…a poisoning attack seeks to control a classifier by introducing malicious training data into the training set so that the adversary can force an incorrect classification for a subset of test vectors…see par. 32). Therefore, one of ordinary skill in the art would have found it obvious before the effective filing date of the claimed invention to use Wiebe in Briliauskas to include the above limitations, because one of ordinary skill in the art would recognize that it would further prevent a powerful adversary from learning information contributed to the clustering algorithm, thereby further enhancing security…see Wiebe, par. 3-4.

As per claims 4, 11, 18, the combination of Briliauskas and Wiebe discloses wherein the anomaly detector is deployed on the central server (Briliauskas: see col.25 lines 7-9).

As per claims 6, 13, the combination of Briliauskas and Wiebe discloses wherein the benign data samples are a small fraction of a training dataset of federated learning (Briliauskas: see col.10 lines 40-50).

As per claims 3, 10, 17, the combination of Briliauskas and Wiebe discloses generating, by the central server, the benign model updates, through training the update-generating models on respective sets of the benign data samples; and generating, by the central server, the malicious model updates, through training the update-generating models on respective sets of the malicious data samples (Briliauskas: see col.20 lines 31-43, col.21 lines 17-21).

As per claims 5, 12, 19, the combination of Briliauskas and Wiebe discloses wherein the update-generating models are initially trained locally by respective clients in the federated learning system and uploaded to the central server, and then the central server uses the malicious data samples and the benign data samples to train the update-generating models (Briliauskas: see col.15 lines 1-18).

Claims 7, 14, 20 are rejected under 35 U.S.C. 
103 as being unpatentable over Briliauskas et al (Pat. No. US 11593485) in view of Wiebe et al (Pub. No. US 2018/0349605) as applied to claims 1, 8, 15 above, and further in view of Chen (Pub. No. US 2019/0080089).

As per claims 7, 14, 20, the combination of Briliauskas and Wiebe does not explicitly disclose wherein the server constructs the malicious data samples by poisoning attacks which is constituted by poisoning patterns of different sizes or locations. However, Chen discloses wherein the server constructs the malicious data samples by poisoning attacks which is constituted by poisoning patterns of different sizes or locations (…the machine learning system using the second subsystem or the third subsystem may provide resiliency against a large class of evasion attacks, which would otherwise weaken the machine learning system…the machine learning system may apply techniques of sparse representation or semi-supervised learning to represent or compress received signals to detect malware…evasion attacks such as data poisoning may attempt to corrupt training data…see par. 21). Therefore, one of ordinary skill in the art would have found it obvious before the effective filing date of the claimed invention to use Chen in the combination of Briliauskas and Wiebe to include the above limitations, because one of ordinary skill in the art would recognize that it would improve the machine learning system to identify the most relevant and uncontaminated training samples, defend the machine learning framework, and produce stable and superior classification results…see Chen, par. 21.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure (see form PTO-892). The following Patents and Papers are cited to further show the state of the art at the time of Applicant’s invention with respect to supervised anomaly detection in federated learning: Jadav et al (Pub. No. 
US 2023/0421586), “Dynamically Federated Data Breach Detection”, which teaches dynamically federated data breach detection for cyber resilience…see par. 10.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to GHAZAL B SHEHNI whose telephone number is (571) 270-7479. The examiner can normally be reached Mon-Fri 9am-5pm PCT. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Philip Chea, can be reached at 571-272-3951. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/GHAZAL B SHEHNI/
Primary Examiner, Art Unit 2499

Prosecution Timeline

Aug 31, 2022: Application Filed
Oct 10, 2023: Response after Non-Final Action
Oct 08, 2025: Non-Final Rejection — §103
Jan 04, 2026: Interview Requested
Jan 12, 2026: Response Filed
Jan 13, 2026: Examiner Interview Summary
Jan 13, 2026: Applicant Interview (Telephonic)
Mar 10, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602479: MEASURING CONTAINERS
2y 5m to grant; granted Apr 14, 2026
Patent 12596810: AUTOMATED APPLICATION PROGRAMMING INTERFACE (API) TESTING
2y 5m to grant; granted Apr 07, 2026
Patent 12591682: AUTOMOTIVE SECURE BOOT WITH SHUTDOWN MEASURE
2y 5m to grant; granted Mar 31, 2026
Patent 12591660: DEVICE SECURITY MANAGER ARCHITECTURE FOR TRUSTED EXECUTION ENVIRONMENT INPUT/OUTPUT (TEE-IO) CAPABLE SYSTEM-ON-A-CHIP INTEGRATED DEVICES
2y 5m to grant; granted Mar 31, 2026
Patent 12585741: PASSWORD PROMPT FOR SECURE CAMERA ACTIVATION
2y 5m to grant; granted Mar 24, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 87%
With Interview: 99% (+12.4%)
Median Time to Grant: 2y 8m
PTA Risk: Moderate
Based on 1068 resolved cases by this examiner. Grant probability derived from career allow rate.
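The headline grant probability can be reproduced from the raw counts the dashboard reports. The sketch below assumes "+29.3% vs TC avg" means a difference in percentage points; how the tool actually combines its figures is not documented here.

```python
# Reproduce the dashboard's 87% career allow rate from the raw counts,
# and the Tech Center average implied by "+29.3% vs TC avg".
# The percentage-point interpretation of the delta is an assumption.
granted, resolved = 932, 1068

allow_rate = granted / resolved          # career allow rate
print(f"allow rate: {allow_rate:.0%}")   # matches the 87% shown above

tc_avg = allow_rate - 0.293              # implied Tech Center average
print(f"implied TC average: {tc_avg:.0%}")
```

The "99% with interview" figure is consistent with adding the +12.4% interview lift to the allow rate, though the exact rounding the tool applies is not stated.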
