Prosecution Insights
Last updated: April 19, 2026
Application No. 17/954,906

SWARM LEARNING, PRIVACY PRESERVING, DE-CENTRALIZED IID DRIFT CONTROL

Final Rejection §103
Filed: Sep 28, 2022
Examiner: WONG, WILLIAM
Art Unit: 2144
Tech Center: 2100 — Computer Architecture & Software
Assignee: Hewlett Packard Enterprise Development LP
OA Round: 2 (Final)
Grant Probability: 30% (At Risk); 57% with interview
Expected OA Rounds: 3-4
Estimated Time to Grant: 4y 11m

Examiner Intelligence

Career Allow Rate: 30% (grants only 30% of cases; 120 granted / 397 resolved; -24.8% vs TC avg)
Interview Lift: +26.9% (strong lift across resolved cases with an interview vs. without)
Typical Timeline: 4y 11m average prosecution; 33 applications currently pending
Career History: 430 total applications across all art units
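The headline figures above can be cross-checked against each other. A small sketch, assuming (the page does not say so explicitly) that the quoted grant probability is simply the career allow rate and that the interview lift is additive:

```python
# Sanity-check the derived examiner statistics shown above.
granted, resolved = 120, 397          # figures from the Examiner Intelligence card
interview_lift = 26.9                 # percentage points, from the same card

allow_rate = 100 * granted / resolved # career allow rate, percent
print(f"Career allow rate: {allow_rate:.1f}%")                    # → 30.2%
print(f"With interview:    {allow_rate + interview_lift:.1f}%")   # → 57.1%
```

Rounded to whole percentages, these reproduce the 30% and 57% shown on the page, which supports the additive-lift reading.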

Statute-Specific Performance

§101: 11.4% allowance rate (-28.6% vs TC avg)
§103: 45.8% allowance rate (+5.8% vs TC avg)
§102: 14.3% allowance rate (-25.7% vs TC avg)
§112: 23.5% allowance rate (-16.5% vs TC avg)
Tech Center averages are estimates. Based on career data from 397 resolved cases.
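Each delta is stated relative to a Tech Center average, so the implied baseline can be recovered from each (rate, delta) pair as a consistency check. These numbers are read off the page, not independent data:

```python
# Recover the implied Tech Center average per statute: tc_avg = rate - delta.
stats = {
    "§101": (11.4, -28.6),
    "§103": (45.8, +5.8),
    "§102": (14.3, -25.7),
    "§112": (23.5, -16.5),
}
for statute, (rate, delta) in stats.items():
    print(statute, f"implied TC avg = {rate - delta:.1f}%")   # → 40.0% for each
```

All four pairs imply the same ~40% baseline, suggesting the page applies a single Tech-Center-wide average estimate rather than a per-statute one.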

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is in response to communications filed on 12/17/2025. Claim 17 has been canceled. Claim 21 has been added. Claims 1-16 and 18-21 are pending and have been examined.

Response to Arguments

Previous objections to the specification have been withdrawn in view of amendments. Previous objections to the claims have been withdrawn in view of amendments. Previous rejections under 35 USC 112 have been withdrawn in view of amendments. Applicant's arguments with respect to the newly amended features have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. See Kang et al. (US 20230090731 A1) below. With respect to new claim 21, Yagnik describes "partitioning of non-IID data creates training data sets and test data sets with non-IID data… provides a mechanism to select different training sets with approximately an independent identical distribution" (e.g. in column 17 lines 26-41), i.e. checks what data conforms to IID and what does not.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3-4, 6, 8-9, 13-16, 18, and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Manamohan et al. (US 20210234668 A1) in view of Shishido (US 20220121989 A1), Yagnik (US 7827123 B1), and Kang et al. (US 20230090731 A1).

As per independent claim 1, Manamohan teaches an edge node operating in a distributed swarm learning blockchain network (e.g. in paragraphs 26 and 78, "swarm blockchain network… distributed amongst nodes"), comprising: at least one processor and a memory unit operatively connected to the at least one processor, the memory unit including instructions that, when executed, cause the at least one processor (e.g. in paragraphs 54-55, "one or more processors 50 (also interchangeably referred to herein as processors 50, processor(s) 50, or processor 50 for convenience), one or more storage devices 70, and/or other components") to: receive a smart contract that includes a definition of conforming data (e.g. in paragraphs 45 and 47, "The smart contracts 44 may include rules, which each edge node 10 follow… ensure recurring policy enforcement and compliances"); execute the smart contract (e.g. in paragraphs 31 and 45, "implemented as smart contracts"); receive one or more batches of training data for training a machine learning model (e.g. in paragraph 74, "local model training occurs, where each node proceeds to train a local copy of the global or common model in an iterative fashion over multiple rounds that can be referred to as epochs. During each epoch, each node trains its local model using one or more data batches for some given number of iterations"); check whether the training data conforms to the definition of conforming data included in the executed smart contract (e.g. in paragraph 62, "ensure compliance with the current state of the system… the smart contracts 44 may encode rules that specify what events trigger such checking… that can trigger compliance evaluation"); train a local version of the machine learning model at the edge node using batches of training data (e.g. in paragraph 74, "local model training occurs, where each node proceeds to train a local copy of the global or common model in an iterative fashion over multiple rounds that can be referred to as epochs"); transmit parameters derived from the training of the local version of the machine learning model to a leader node (e.g. in paragraph 76, "merge leader may then merge the downloaded parameter files (from each swarm learning network node)"); receive, from the leader node, merged parameters derived from a global version of the machine learning model (e.g. in paragraph 76, "each node may obtain the merged parameters (represented in the new file) from the merge leader via the swarm API"); and apply the merged parameters to the local version of the machine learning model at the edge node to update the local version of the machine learning model (e.g. in paragraph 76, "each node may update its local version of the common model with the merged parameters"). Manamohan does not specifically teach: checking whether each batch of training data conforms to the definition of conforming data included in the executed smart contract, to determine conforming batches of training data and non-conforming batches of training data; tagging and isolating the non-conforming batches of training data to keep the non-conforming batches of training data from being used in training the machine learning model; training using the conforming batches of training data, wherein the conforming batches of training data are independently and identically distributed (IID) data; and wherein the global version of the machine learning model is isolated from the non-conforming batches of training data.

However, Shishido teaches checking whether each/a batch of training data conforms to a definition of conforming data to determine conforming batches of training data and non-conforming batches of training data (e.g. in paragraphs 26 and 36, "normal query data is correctly determined as normal… invalid query data created by an evasion attack is correctly determined as abnormal is referred to as an 'attack resistance'… training data 101 is processed according to a certain rule… rule corresponds to the evasion attack"), tagging and isolating the non-conforming batches of training data to keep the non-conforming batches of training data from being used in training a machine learning model (e.g. in paragraphs 26, 32, 36, and 112, "normal query data is correctly determined as normal… invalid query data created by an evasion attack is correctly determined as abnormal is referred to as an 'attack resistance'… deleting a part of the acquired training data, for example, according to a rule corresponding to the evasion attack"), and using the conforming batches of training data (e.g. in paragraph 112, "deleting a part of the acquired training data, for example, according to a rule corresponding to the evasion attack", i.e. only using normal/conforming data). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Manamohan to include the teachings of Shishido because one of ordinary skill in the art would have recognized the benefit of tailoring data used for training and/or improving model performance. The combination, however, does not specifically teach wherein the conforming batches of training data are independently and identically distributed (IID) data and wherein the global version of the machine learning model is isolated from the non-conforming batches of training data.

However, Yagnik teaches conforming batches of training data that are independently and identically distributed (IID) data (e.g. in column 2 lines 49-56 and column 17 lines 26-41, "provide the selection of a training data set from a plurality of sets of stored real-world event data for training at least one computer-implemented classifier so that the selected training data set has a distribution approximating an independent identical distribution… partitioning of non-IID data creates training data sets and test data sets with non-IID data… provides a mechanism to select different training sets with approximately an independent identical distribution"). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of the combination to include the teachings of Yagnik because one of ordinary skill in the art would have recognized the benefit of improving model performance. The combination still does not specifically teach wherein the global version of the machine learning model is isolated from the non-conforming batches of training data.
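The claim 1 limitations above describe a per-batch gating loop at each edge node: check each batch against the smart contract's conformance rule, quarantine non-conforming batches, train only on the rest, then exchange parameters with the merge leader. A minimal sketch of that flow (all names, the conformance predicate, the training step, and the leader exchange are hypothetical stand-ins, not APIs from any cited reference):

```python
def partition_batches(batches, conforms):
    """Gate each batch against the smart contract's conformance rule;
    tag and isolate non-conforming batches so they never reach training."""
    conforming, quarantined = [], []
    for batch in batches:
        (conforming if conforms(batch) else quarantined).append(batch)
    return conforming, [{"tag": "non-conforming", "batch": b} for b in quarantined]

def training_round(params, batches, conforms, train_step, leader_merge):
    """One swarm-learning round at an edge node, per the claim 1 steps:
    check -> train on conforming batches -> send to leader -> apply merged params."""
    conforming, quarantined = partition_batches(batches, conforms)
    for batch in conforming:
        params = train_step(params, batch)   # local training touches only conforming data
    merged = leader_merge(params)            # leader merges parameters across nodes
    return merged, quarantined               # merged params are applied; quarantine is kept aside

# Toy usage: a batch "conforms" when its values are roughly uniform (an IID proxy).
conforms = lambda batch: max(batch) - min(batch) <= 1
train_step = lambda p, b: {"w": p["w"] + sum(b) / len(b)}
merged, quarantined = training_round({"w": 0.0}, [[1, 1, 2], [9, 0, 0]],
                                     conforms, train_step, leader_merge=lambda p: p)
print(merged, len(quarantined))   # weights from the conforming batch; one batch isolated
```

The point of the structure is that the quarantined list never flows into `train_step`, so nothing derived from non-conforming data reaches the parameters sent to the leader, which is the isolation the claim recites.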
However, Kang teaches a global version of a machine learning model being isolated from non-conforming training data (e.g. in paragraph 5, "in order to remove some clients with such extreme Non-IID data, it is necessary to identify the data of the client, which causes a problem… federated learning of an artificial intelligence model which train the global model to have a high image classification accuracy by removing some clients including extreme Non-IID data", i.e. the global model is isolated from non-conforming training data). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of the combination to include the teachings of Kang because one of ordinary skill in the art would have recognized the benefit of avoiding the problems caused by extreme Non-IID data.

As per claim 3, the rejection of claim 1 is incorporated and the combination further teaches discarding the non-conforming batches of training data (e.g. Shishido, in paragraphs 32, 36, and 112, "normal query data is correctly determined as normal… invalid query data created by an evasion attack is correctly determined as abnormal is referred to as an 'attack resistance'… deleting a part of the acquired training data, for example, according to a rule corresponding to the evasion attack").

As per claim 4, the rejection of claim 1 is incorporated and the combination further teaches sharing with other nodes in the network the parameters derived from training the local version of the machine learning model using the conforming batches of training data (e.g. Manamohan, in paragraph 74, "During each epoch, each node trains its local model using one or more data batches for some given number of iterations… Each node may signal the other nodes that it is ready to share its parameters"; Shishido, in paragraphs 26, 32, 36, and 112, "normal query data is correctly determined as normal… invalid query data created by an evasion attack is correctly determined as abnormal is referred to as an 'attack resistance'… deleting a part of the acquired training data, for example, according to a rule corresponding to the evasion attack").

Claims 6 and 8-9 are the method claims corresponding to edge node claims 1 and 3-4, and are rejected for the same reasons set forth above. Claims 13-15 and 18 are the training node claims corresponding to edge node claims 1 and 4, and are rejected for the same reasons set forth above.

As per claim 16, the rejection of claim 13 is incorporated and the combination further teaches wherein the memory unit includes instructions that when executed further cause the at least one processor to list the non-conforming batches of training data in a log or discard the non-conforming batches of training data (e.g. Shishido, in paragraphs 32, 36, and 112, "normal query data is correctly determined as normal… invalid query data created by an evasion attack is correctly determined as abnormal is referred to as an 'attack resistance'… deleting a part of the acquired training data, for example, according to a rule corresponding to the evasion attack").

As per claim 21, the rejection of claim 1 is incorporated and the combination further teaches wherein the definition of conforming data comprises a definition of IID data, and wherein checking whether each batch of training data conforms to the definition of conforming data comprises checking whether each batch of training data conforms to the definition of IID data to determine the conforming batches of training data and the non-conforming batches of training data (e.g. Manamohan, in paragraph 74, "trains its local model using one or more data batches"; Yagnik, in column 2 lines 49-56 and column 17 lines 26-41, "provide the selection of a training data set from a plurality of sets of stored real-world event data for training at least one computer-implemented classifier so that the selected training data set has a distribution approximating an independent identical distribution… partitioning of non-IID data creates training data sets and test data sets with non-IID data… provides a mechanism to select different training sets with approximately an independent identical distribution", i.e. checks what data conforms to IID and what does not).

Claims 2, 5, 7, 12, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Manamohan et al. (US 20210234668 A1) in view of Shishido (US 20220121989 A1), Yagnik (US 7827123 B1), and Kang et al. (US 20230090731 A1), and further in view of Shaw et al. (US 20220284064 A1).

As per claim 2, the rejection of claim 1 is incorporated and the combination further teaches batches of training data including the non-conforming batches of training data (e.g. Shishido, in paragraphs 26 and 36, "normal query data is correctly determined as normal… invalid query data created by an evasion attack is correctly determined as abnormal is referred to as an 'attack resistance'… training data 101 is processed according to a certain rule… rule corresponds to the evasion attack"), but the combination does not specifically teach listing the non-conforming batches of training data in a log. However, Shaw teaches listing batches of training data in a log (e.g. in paragraphs 49-50, "the training log includes a searchable, sortable, and filterable list of historical training data relating to one or more different component types").
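Claim 21 makes the conformance rule concrete: the smart contract's definition of conforming data is a definition of IID, and each batch is checked against it. One plausible, purely illustrative realization (the metric and the threshold are this sketch's assumptions, not anything taught by the cited references) is to compare a batch's label distribution to a reference distribution and accept the batch when the two are close:

```python
from collections import Counter

def label_distribution(labels):
    """Empirical distribution of labels within one batch."""
    counts = Counter(labels)
    total = len(labels)
    return {k: v / total for k, v in counts.items()}

def is_iid_batch(batch_labels, reference_dist, tol=0.2):
    """Treat a batch as 'conforming to the definition of IID' when the
    total-variation distance between its label distribution and the
    reference distribution is within tol (illustrative choices only)."""
    batch_dist = label_distribution(batch_labels)
    keys = set(batch_dist) | set(reference_dist)
    tv = 0.5 * sum(abs(batch_dist.get(k, 0.0) - reference_dist.get(k, 0.0))
                   for k in keys)
    return tv <= tol

reference = {"cat": 0.5, "dog": 0.5}
print(is_iid_batch(["cat", "dog", "cat", "dog"], reference))   # → True (balanced batch)
print(is_iid_batch(["cat"] * 8, reference))                    # → False (skewed batch)
```

Any rule of this shape (a distance on distributions plus a threshold) could be encoded in a smart contract as the claimed "definition of IID data"; the specific statistic used in practice would come from the application itself.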
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of the combination to include the teachings of Shaw because one of ordinary skill in the art would have recognized the benefit of allowing review of data.

As per claim 5, the rejection of claim 1 is incorporated and the combination further teaches batches of training data including the non-conforming batches of training data (e.g. Shishido, in paragraphs 26 and 36, "normal query data is correctly determined as normal… invalid query data created by an evasion attack is correctly determined as abnormal is referred to as an 'attack resistance'… training data 101 is processed according to a certain rule… rule corresponds to the evasion attack"), but the combination does not specifically teach correcting the non-conforming batches of training data and inputting corrected batches of training data into the check step at a later time. However, Shaw teaches correcting non-conforming batches of training data and inputting corrected batches of training data into a check step at a later time (e.g. in paragraphs 32-35, "an override action corresponding to one or more training configuration resources generated in response to training feedback received… the response override generator 116 can process an override action corresponding to feedback received from one or more end-user systems… use the training configuration resources generated for the respective training events (e.g., training examples) to train one or more models of the machine learning system… improved"). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of the combination to include the teachings of Shaw because one of ordinary skill in the art would have recognized the benefit of allowing feedback to be provided.
Claims 7 and 12 are the method claims corresponding to edge node claims 2 and 5, and are rejected for the same reasons set forth above. Claim 20 is the training node claim corresponding to edge node claim 5, and is rejected for the same reasons set forth above.

Claims 10-11 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Manamohan et al. (US 20210234668 A1) in view of Shishido (US 20220121989 A1), Yagnik (US 7827123 B1), and Kang et al. (US 20230090731 A1), and further in view of McMahan et al. (US 20170109322 A1).

As per claim 10, the rejection of claim 6 is incorporated and the combination further teaches evaluating the updated local version of the machine learning model to determine a local validation value, and transmitting the local validation value to the leader node (e.g. Manamohan, in paragraph 77, "each of the nodes evaluate the model with the updated parameter values using their local data to calculate various validation metrics… In the interim, the merge leader may keep checking for an update complete signal from each node. When it discovers that all merge participants have signaled completion, the merge leader merges the local validation metric numbers to calculate global metric numbers"), but does not specifically teach that the validation value is a loss value. However, McMahan teaches determining a value associated with loss (e.g. in paragraphs 14-17, 37, and 41 and claim 16, "the local update can be a gradient vector… the local gradient can be determined for a loss function… a loss function associated with a global machine learning model"). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of the combination to include the teachings of McMahan because one of ordinary skill in the art would have recognized the benefit of facilitating learning.
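Claims 10-11 describe each node computing a local validation loss and the merge leader combining those into a global validation loss. A common aggregation for this pattern is a sample-count-weighted mean; this is an assumed rule for illustration, not one specified by Manamohan or McMahan:

```python
def global_validation_loss(reports):
    """Merge per-node validation losses into one global figure, as the
    merge leader in claims 10-11 would. Weighting by each node's
    validation sample count is an assumed aggregation rule."""
    total = sum(n for _, n in reports)
    return sum(loss * n for loss, n in reports) / total

# Each tuple: (local validation loss, number of local validation samples)
reports = [(0.40, 100), (0.60, 300)]
print(round(global_validation_loss(reports), 4))   # → 0.55
```

The weighted mean keeps the global figure faithful to the pooled data: a node validating on 300 samples pulls the result three times as hard as one validating on 100.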
As per claim 11, the rejection of claim 10 is incorporated and the combination further teaches receiving from the leader node a global validation loss value determined based on the local validation loss value transmitted by the edge node (e.g. Manamohan, in paragraphs 77 and 91, "the merge leader merges the local validation metric numbers to calculate global metric numbers… reflects the global state of the parameter/merging/swarm learning status amongst nodes… each node is aware of the full state of the swarm learning network from their local copy of the distributed ledger"; McMahan, in paragraphs 14-17, 37, and 41 and claim 16, "the local update can be a gradient vector… global model update can be determined by aggregating each local update… a global objective via a loss function… the local gradient can be determined for a loss function… a loss function associated with a global machine learning model").

Claim 19 is the training node claim corresponding to edge node claims 10-11, and is rejected for the same reasons set forth above.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. For example, Lock et al. (US 20070112824 A1) teaches "When a rule is added at 40, the positive examples it covers are removed from the training data by the computer 1 at 42, and remaining or unremoved positive and negative examples form a modified training data set for a subsequent iteration" (e.g. in paragraph 163).

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to WILLIAM WONG whose telephone number is (571) 270-1399. The examiner can normally be reached Monday-Friday, 9am-5pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, TAMARA KYLE, can be reached at (571) 272-4241. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/W.W/
Examiner, Art Unit 2144
01/09/2026

/TAMARA T KYLE/
Supervisory Patent Examiner, Art Unit 2144

Prosecution Timeline

Sep 28, 2022
Application Filed
Sep 26, 2025
Non-Final Rejection — §103
Nov 26, 2025
Interview Requested
Dec 04, 2025
Applicant Interview (Telephonic)
Dec 05, 2025
Examiner Interview Summary
Dec 17, 2025
Response Filed
Jan 10, 2026
Final Rejection — §103
Mar 27, 2026
Interview Requested
Apr 02, 2026
Examiner Interview Summary
Apr 02, 2026
Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12572252
CONTROLLING A 2D SCREEN INTERFACE APPLICATION IN A MIXED REALITY APPLICATION
2y 5m to grant · Granted Mar 10, 2026
Patent 12530707
CUSTOMER EFFORT EVALUATION IN A CONTACT CENTER SYSTEM
2y 5m to grant · Granted Jan 20, 2026
Patent 12511846
XR DEVICE-BASED TOOL FOR CROSS-PLATFORM CONTENT CREATION AND DISPLAY
2y 5m to grant · Granted Dec 30, 2025
Patent 12504944
METHODS AND USER INTERFACES FOR SHARING AUDIO
2y 5m to grant · Granted Dec 23, 2025
Patent 12423561
METHOD AND APPARATUS FOR KEEPING STATISTICAL INFERENCE ACCURACY WITH 8-BIT WINOGRAD CONVOLUTION
2y 5m to grant · Granted Sep 23, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 30% (57% with interview, +26.9%)
Median Time to Grant: 4y 11m
PTA Risk: Moderate
Based on 397 resolved cases by this examiner. Grant probability derived from career allow rate.
