Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 10/28/25 has been entered.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-8, 11-12, 15-16 and 28-36, 38-43 is/are rejected under 35 U.S.C. 103 as being unpatentable over Zhang (US 20200349434) in view of Meng (WO2022116421) further in view of Hu (US 20080178294).
Regarding claims 1, 28, 40 and 41, Zhang teaches a first network entity ([0088] Network interface 1238 encompasses wire and/or wireless communication networks such as local-area networks (LAN), wide-area networks (WAN), cellular networks), comprising:
at least one processor; and at least one memory coupled with the at least one processor, the at least one processor configured to cause the first network entity to ([0008] “a system can comprise a memory that stores computer executable components and a processor that executes the computer executable components stored in the memory”):
obtain information, the information associated with training data for a machine learning model ([0042] "the model development module 104 can be configured to receive and processes the training data samples 102 to develop and/or train an outlier detection model 114 to classify the respective unseen data samples 116 as either inlier data samples or outlier data samples prior to processing by the ML model. As noted above, inlier data samples correspond to data samples that are predicted to be inside the scope of the (correctly predicted) training data samples 102, and thus are considered confident or high confidence data samples. Likewise, outlier data samples correspond to data samples that are predicted to be outside the scope of the training data samples, and thus considered data samples without confidence or otherwise low confidence data samples");
output an indication that the information corresponding to the UE is considered untrusted based at least in part on the information corresponding to the UE in accordance with a predicted output of the machine learning model associated with a training procedure for the machine learning model that uses the information as the training data ([0042] “the model development module 104 can be configured to receive and processes the training data samples 102 to develop and/or train an outlier detection model 114 to classify the respective unseen data samples 116 as either inlier data samples or outlier data samples prior to processing by the ML model. As noted above, inlier data samples correspond to data samples that are predicted to be inside the scope of the (correctly predicted) training data samples 102, and thus are considered confident or high confidence data samples. Likewise, outlier data samples correspond to data samples that are predicted to be outside the scope of the training data samples, and thus considered data samples without confidence or otherwise low confidence data samples”; [0026] a “confident data sample,” a “high confidence data sample,” or the like, refers to a new or unseen data sample for which the ML model is predicted to process/evaluate with accurate performance (e.g., relative to a defined level of accuracy). In other words, application of the ML model to a “confident” or “high confidence” data sample is expected to result in an inference output by the ML model that is predicted to be accurate with a high degree of confidence (e.g., relative a threshold degree of confidence”).
However, Zhang does not explicitly teach that the information corresponds to a UE. In an analogous art, Meng teaches obtaining information corresponding to a user equipment (UE) ([0002] "mobile devices to collaboratively train a global model in a distributed manner .. The mobile device only needs to send local model parameter updates for local raw data to the task publisher without uploading the raw data .. mobile devices to collaboratively train a global model in a distributed manner"; [0011] "Based on the current reputation value, it is determined whether the candidate node is a credible node. If it is a credible node, the candidate node is selected as the working node of the current task publisher"). Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the invention to modify Zhang's teaching of information being used for model training to be that of a UE, as taught by Meng, because the abundance of UEs can provide more training samples and thus a more accurate model.
As shown above, the combination of Zhang and Meng teaches the UE being considered untrusted in accordance with the predicted output of the machine learning model (Zhang [0042] "the model development module 104 can be configured to receive and processes the training data samples 102 to develop and/or train an outlier detection model 114 to classify the respective unseen data samples 116 as either inlier data samples or outlier data samples prior to processing by the ML model .. Likewise, outlier data samples correspond to data samples that are predicted to be outside the scope of the training data samples, and thus considered data samples without confidence or otherwise low confidence data samples"; Meng [0002] mobile devices to collaboratively train a global model). However, Zhang and Meng do not explicitly teach terminating a connection that corresponds to the UE or restricting wireless service for the UE based at least in part on the UE being considered untrusted.
In an analogous art, Hu teaches terminating a connection that corresponds to the UE or restricting wireless service for the UE based at least in part on the UE being considered untrusted (Fig. 2, step 214, disable or restrict communications interfaces of sender/UE; [0054] "If a mobile device 110 sending malicious communications is inside the service provider's network 102, intelligent agents 106 disable 216 outbound communications on that mobile device 110, or restrict 216 communications to stop the malicious activity without completely disabling the communications interfaces. For example, communications could be limited to allowing the mobile device 110 to reach network addresses associated with a service center 134 in order to download antivirus software."; [0056] "If the sender of the malicious communications is within the service provider's network 102, intelligent agents 106 disable 216 outbound communications on that mobile device, or restrict 216 communications to stop the malicious activity."). Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the invention to modify Zhang and Meng's teaching of identifying a UE as untrusted to further include Hu's teaching of restricting wireless service for the untrusted UE in order to prevent the UE from causing further damage to the system.
Regarding claims 2 and 29, Zhang, Meng and Hu teach the first network entity of claim 1, wherein the processor is further configured to cause the first network entity to: perform outlier detection on the information corresponding to the UE, wherein the information corresponding to the UE is considered one of untrusted or trusted ("high confidence") based at least in part on the outlier detection (Zhang, [0026] a "confident data sample," a "high confidence data sample," or the like, refers to a new or unseen data sample for which the ML model is predicted to process/evaluate with accurate performance (e.g., relative to a defined level of accuracy). In other words, application of the ML model to a "confident" or "high confidence" data sample is expected to result in an inference output by the ML model that is predicted to be accurate with a high degree of confidence (e.g., relative a threshold degree of confidence). In various embodiments, confident data samples are defined as inliers identified by an outlier detector .. a ML model will perform better on the inlier data samples and more poorly on the outliers").
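For illustration only (not part of any cited reference), the inlier/outlier classification quoted from Zhang can be sketched in a few lines; the z-score detector, the 3-sigma threshold, and all names below are the editor's assumptions rather than Zhang's actual model:

```python
# Illustrative sketch of classifying unseen samples as inliers (high
# confidence / trusted) or outliers (low confidence / untrusted) relative
# to a training set. The z-score detector and the 3-sigma threshold are
# assumptions for illustration, not Zhang's implementation.
from statistics import mean, stdev

def classify_samples(training_data, unseen_data, z_threshold=3.0):
    """Label each unseen sample 'inlier' or 'outlier' against training stats."""
    mu, sigma = mean(training_data), stdev(training_data)
    return ["inlier" if abs(x - mu) / sigma <= z_threshold else "outlier"
            for x in unseen_data]

labels = classify_samples([10, 11, 9, 10, 12, 10], [10.5, 30.0])
```

Here the second unseen sample falls far outside the training distribution and is flagged as an outlier, mirroring Zhang's low-confidence classification.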
Regarding claim 3, Zhang, Meng and Hu teach the first network entity of claim 1, wherein the processor is further configured to cause the first network entity to: determine a change in performance of the machine learning model based at least in part on the training procedure for the machine learning model that uses the information corresponding to the UE as the training data, wherein the predicted output of the machine learning model satisfies a threshold for data corruption based at least in part on the change in performance ([0027] "While projecting the unseen data to a standard feature space, if data points are detected as inliers, then the ML model is expected to demonstrate consistent performance on those inliers, as those patterns have already been "seen" from the training dataset. In this regard, the ML model is expected to have consistent performance on data samples that are similar to the training data in the standard feature space." [0030] "It is noted that a data-driven ML model learns common patterns from most samples within a group or class and that outlier samples are more often associated with wrong predictions." [0042] "the model development module 104 can be configured to receive and processes the training data samples 102 to develop and/or train an outlier detection model 114 to classify the respective unseen data samples 116 as either inlier data samples or outlier data samples prior to processing by the ML model. .. inlier data samples correspond to data samples that are predicted to be inside the scope of the (correctly predicted) training data samples 102, and thus are considered confident or high confidence data samples. Likewise, outlier data samples correspond to data samples that are predicted to be outside the scope of the training data samples, and thus considered data samples without confidence or otherwise low confidence data samples.").
Regarding claims 4 and 30, Zhang, Meng and Hu teach the first network entity of claim 1, wherein the processor is further configured to cause the first network entity to: assign a trust score to the information corresponding to the UE in accordance with the predicted output of the machine learning model, wherein the indication that the information corresponding to the UE is considered one of untrusted or trusted comprises the trust score (Zhang, [0048] "the outlier detection model 114 can be configured to generate a confidence score for a data sample that represents a degree of confidence in the ML model to generate an accurate inference output/result based on the data sample. Defined confidence score criteria can further be used to classify data samples as either outliers or inliers. For example, the confidence score criteria can consider data samples with confidence scores greater than a defined threshold as inliers, while those with confidence scores less than or equal to the defined threshold are outliers").
Regarding claims 5 and 31, Zhang, Meng and Hu teach the first network entity of claim 4, wherein the trust score comprises a percentage value, or a quantized value, or both (Zhang, [0048] "The outlier detection model development component 108 further employs the extracted training feature vectors to train and develop the outlier detection model 114. For example, in some embodiments, the outlier detection model development component 108 can employ the training feature vectors to train and develop an outlier detection model 114 to classify a new data sample as either an outlier data sample or an inlier data sample using a defined outlier ratio (e.g., 0.1, 0.2, 0.3, 0.4, or 0.5)").
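As a hedged illustration of the claimed trust score comprising a percentage value, a quantized value, or both, the following sketch maps a model confidence score into both forms; the bucket names and boundaries are hypothetical and do not come from Zhang:

```python
# Hypothetical mapping of a confidence score in [0, 1] to a trust score
# expressed as both a percentage value and a quantized value. The level
# names and bucket boundaries are illustrative assumptions.
def trust_score(confidence, levels=("untrusted", "low", "medium", "high")):
    pct = round(confidence * 100)                        # percentage form
    idx = min(int(confidence * len(levels)), len(levels) - 1)
    return pct, levels[idx]                              # quantized form

pct, level = trust_score(0.82)
```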
Regarding claims 6 and 32, Zhang, Meng and Hu teach the first network entity of claim 4, wherein the trust score is associated with a time period for data collection from the UE (Zhang, [0022] "are the time periods in a collaboration, t_i is the probability of successful data transmission in time period i, α_i and β_i represent the number of positive interactions and negative interactions in time period i, respectively, k and η represent the weight of positive interaction and the weight of negative interaction in the calculation, respectively, and z^(I−i) is a time decay function; k≤η, k+η=1, z∈(0,1)").
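One plausible reading of the time-decayed reputation calculation quoted above can be sketched as follows; only the ingredients (α_i, β_i, k, η, and the decay z^(I−i)) come from the quotation, and the exact way they are combined here is the editor's assumption:

```python
# Sketch of a time-decayed reputation value built from positive (alpha)
# and negative (beta) interaction counts per time period. The specific
# combination of terms is an illustrative assumption, not the cited formula.
def reputation(alpha, beta, k=0.4, eta=0.6, z=0.9):
    """alpha[i], beta[i]: positive/negative interactions in period i."""
    assert k <= eta and abs(k + eta - 1.0) < 1e-9 and 0.0 < z < 1.0
    I = len(alpha)
    total = 0.0
    for i in range(I):
        decay = z ** (I - 1 - i)          # z^(I-i) with 0-based periods
        pos, neg = k * alpha[i], eta * beta[i]
        total += decay * (pos / (pos + neg) if (pos + neg) else 0.0)
    return total / I

r = reputation([8, 9, 10], [1, 1, 0])
```

More negative interactions lower the reputation, and older periods contribute less because of the decay term.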
Regarding claims 7 and 34, Zhang, Meng and Hu teach the first network entity of claim 1, wherein the processor is further configured to cause the first network entity to: obtain additional information corresponding to the UE, the additional information associated with the machine learning model; and classify the additional information corresponding to the UE as untrusted based at least in part on the information corresponding to the UE being considered untrusted (Zhang, [0061] "the reprocessing component 404 can facilitate sending the outlier/low confidence data samples for additional review and/or annotation using scrutinized annotation techniques (e.g., manual annotation by one or more expert entities). The newly annotated data samples can further be added to the training data set and used to further train and update the ML model 406"; Meng [0069] "calculate the recommended reputation value according to formula (1); based on the recommended reputation value, determine whether the worker node is a trusted node. If it is a trusted node, according to the interaction situation, .. and upload the initial direct reputation value to the reputation blockchain for safekeeping").
Regarding claims 8 and 35, Zhang, Meng and Hu teach the first network entity of claim 1, wherein the processor configured to output the indication that the information corresponding to the UE is considered one of untrusted or trusted is configured to: output, for a database configured to store UE information for the data collection process, the indication that the information corresponding to the UE is considered one of untrusted or trusted (Zhang, [0038] "the outlier detection model 114, and/or the confidence evaluation module 118 can respectively be or include machine-executable components stored in memory"; [0061] "the reprocessing component 404 can facilitate sending the outlier/low confidence data samples for additional review and/or annotation using scrutinized annotation techniques (e.g., manual annotation by one or more expert entities). The newly annotated data samples can further be added to the training data set and used to further train and update the ML model 406"; Meng [0069] "calculate the recommended reputation value according to formula (1); based on the recommended reputation value, determine whether the worker node is a trusted node. If it is a trusted node, according to the interaction situation, .. and upload the initial direct reputation value to the reputation blockchain for safekeeping").
Regarding claims 11 and 38, Zhang, Meng and Hu teach the first network entity of claim 1, wherein the processor is further configured to cause the first network entity to: obtain a request for the information corresponding to the UE, wherein the indication that the information corresponding to the UE is considered one of untrusted or trusted is output in response to the request (Zhang, [0029] "after the outlier detection model is developed and trained based on the training feature vectors, an unseen or new data sample can be individually projected onto a standard feature space to generate a feature vector for the data sample. This feature vector can further be passed through the trained outlier detector model to classify the data sample as either an outlier or an inlier. The ML model can be considered to be confident on predictions on inlier samples as those samples are expected to be similar to the training dataset. Likewise, the ML model can be expected to be non-confident on outlier data samples, and thus a prediction generated by the ML model based on an outlier data sample can be considered unreliable").
Regarding claims 12 and 39, Zhang, Meng and Hu teach the first network entity of claim 1, wherein the processor is further configured to cause the first network entity to: output a configuration for the UE to refrain from the data collection process associated with the machine learning model based at least in part on the information corresponding to the UE being considered untrusted (Since the claim does not define what constitutes "data collection", the examiner interprets "data collection" as "storing/keeping data". Consequently, "refrain from data collection" is interpreted as "not keeping/not storing data". Thus, Zhang's teaching of "discarding data" meets the limitation of "refraining from data collection/storing". Zhang teaches [0061] "the filtering component 402 can discard new or unseen data samples that are classified by the outlier detection component 122 as outliers. In this regard, the filtering component 402 can facilitate preventing application of the ML model 406 to outlier data samples that will likely result in low confidence inference predictions").
Regarding claims 15 and 42, Zhang, Meng and Hu teach the first network entity of claim 1, wherein the processor is further configured to cause the first network entity to: output a parameter associated with the machine learning model to one or more UEs, wherein the UE is excluded from the one or more UEs based at least in part on the information corresponding to the UE being considered untrusted (Zhang [0061] "the filtering component 402 can discard new or unseen data samples that are classified by the outlier detection component 122 as outliers. In this regard, the filtering component 402 can facilitate preventing application of the ML model 406 to outlier data samples that will likely result in low confidence inference predictions"). Thus, applying Zhang's teaching of discarding the untrusted data to Meng's teaching of information from a UE would result in discarding untrusted data from the UE, which is equivalent to excluding the UE.
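The exclusion argued for claims 15 and 42 can be illustrated with a short sketch; the UE identifiers, the trust map, and the parameter payload are hypothetical:

```python
# Hypothetical sketch: distribute a model parameter only to UEs whose
# information is considered trusted, excluding untrusted UEs from the
# recipient set.
def distribute_parameter(parameter, ue_trust):
    """Return {ue_id: parameter} for trusted UEs only."""
    return {ue: parameter for ue, trusted in ue_trust.items() if trusted}

sent = distribute_parameter({"weights_version": 7},
                            {"ue1": True, "ue2": False, "ue3": True})
```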
Regarding claims 16 and 43, Zhang, Meng and Hu teach the first network entity of claim 1, wherein the information corresponding to the UE comprises one or more measurement values for the UE, or an update to the machine learning model (Zhang [0043] "After the outlier detection model 114 has been developed and trained, the confidence evaluation module 118 can employ the outlier detection model 114 to evaluate unseen data samples 116 prior to processing by the ML model to classify the unseen data samples 116 as either inliers or outliers. As a result, the confidence evaluation module 118 can facilitate identifying new data samples that the ML model is expected to generate accurate and inaccurate inferences on prior to input into the ML model. The confidence evaluation module 118 can further facilitate improving the model performance by feeding the ML model the inlier/high confidence data samples and filtering out the outlier/low confidence data samples. Likewise, the confidence evaluation module 118 can facilitate identifying low confidence data samples included in the unseen data samples 116 and extracting these low confidence data samples for further model training and updating to expand the scope of the ML model.").
Claim(s) 10 and 37 is/are rejected under 35 U.S.C. 103 as being unpatentable over Zhang (US 20200349434) in view of Meng (WO2022116421) further in view of Hu (US 20080178294) further in view of Yang (CN113158183).
Regarding claims 10 and 37, Zhang, Meng and Hu teach the first network entity of claim 1, wherein the processor is further configured to cause the first network entity to: predict whether the information corresponding to the UE is corrupted; and handle the information corresponding to the UE based at least in part on the prediction (Meng [0011] "Based on the current reputation value, it is determined whether the candidate node is a credible node. If it is a credible node, the candidate node is selected as the working node of the current task publisher"). Meng further teaches that there is a distinction between intentionally and unintentionally sending corrupted data ([0003] Unreliable workers may intentionally or unintentionally behave in ways that mislead the global model training of a federated learning task. Malicious worker nodes may launch a "poison" attack, sending malicious parameter updates to influence the global model, causing the failure of the current collaborative learning mechanism. In addition, the highly dynamic mobile network environment indirectly leads to unexpected behaviors of some mobile devices. Due to high mobility or power constraints, mobile worker nodes may unintentionally update some low-quality parameters). However, Meng does not explicitly teach predicting intentional malicious behavior/data corruption. In an analogous art, Yang teaches predicting intentional malicious behavior/data corruption (p. 8 [0026] "application for detecting malicious behavior in a mobile terminal, aiming to solve the bottleneck problems of unsatisfactory correct detection rate of commonly used machine learning methods and inappropriate malicious behavior feature extraction"; [0029] "the mobile terminal malicious behavior feature extraction method integrating independence and continuity analysis, and train the samples for robustness"). Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the invention to modify said references to also include the prediction of intentional malicious corruption of data in order to improve the accuracy of the machine learning model.
Claim(s) 13-14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Zhang (US 20200349434) in view of Meng (WO2022116421) further in view of Hu (US 20080178294) further in view of Official Notice.
Regarding claim 13, Zhang, Meng and Hu teach the first network entity of claim 1, except for wherein the processor is further configured to cause the first network entity to: terminate a connection that corresponds to the UE based at least in part on the information corresponding to the UE being considered untrusted. However, the examiner submits that the concept of "terminating a connection that corresponds to the UE based at least in part on the information corresponding to the UE being considered untrusted" is well known in the art of communication. Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the invention to modify Zhang's machine learning model and Meng's teaching of machine learning information of a UE to include terminating a connection that corresponds to the UE based at least in part on the information corresponding to the UE being considered untrusted, in order to prevent the UE from doing harm to the network.
Regarding claim 14, Zhang, Meng and Hu teach the first network entity of claim 1, except for wherein the processor is further configured to cause the first network entity to: restrict wireless service for the UE based at least in part on the information corresponding to the UE being considered untrusted. However, the examiner submits that the concept of "restricting wireless service for the UE based at least in part on the information corresponding to the UE being considered untrusted" is well known in the art of communication. Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the invention to modify Zhang's machine learning model and Meng's teaching of machine learning information of a UE to include the well-known concept of restricting wireless service for the UE based at least in part on the information corresponding to the UE being considered untrusted, in order to prevent the UE from doing harm to the network.
Regarding claim 42, Zhang, Meng and Hu teach the method of claim 28, further comprising: outputting a parameter associated with the machine learning model to one or more UEs, wherein the UE is excluded from the one or more UEs based at least in part on the information corresponding to the UE being considered untrusted.
Regarding claim 43, Zhang, Meng and Hu teach the method of claim 28, wherein the information corresponding to the UE further comprises one or more measurement values for the UE, or an update to the machine learning model, or a combination thereof.
Claim(s) 9 and 36 is/are rejected under 35 U.S.C. 103 as being unpatentable over Zhang (US 20200349434) in view of Meng (WO2022116421) further in view of Hu (US 20080178294) further in view of Shatz (US 20170195554).
Regarding claims 9 and 36, Zhang, Meng and Hu teach the first network entity of claim 1, wherein the processor is further configured to cause the first network entity to: store a list of trusted UEs, or a list of untrusted UEs, or both based at least in part on the information corresponding to the UE being considered one of untrusted or trusted (Zhang [0060] "the outlier detection notification can include information that classifies the data sample as an outlier. In other implementations, the outlier detection notification can also include a particular outlier ratio and/or confidence score determined for the data sample that resulted in classification of the data sample as an outlier"; Meng [0069] "calculate the recommended reputation value according to formula (1); based on the recommended reputation value, determine whether the worker node is a trusted node. If it is a trusted node, according to the interaction situation, .. and upload the initial direct reputation value to the reputation blockchain for safekeeping"). However, Zhang does not explicitly teach "additional information corresponding to an additional UE being considered trusted".
In an analogous art, Shatz teaches "additional information corresponding to an additional UE being considered trusted" ([0473] "The example graphical user interface 3600 also displays a list of stored trusted devices 3620 that includes devices with which the mobile device has previously established an optical narrowcasting ad-hoc network... The trusted device list may also display an indication of which trusted devices are currently connected to the mobile device and other information associated with trusted (or untrusted) devices"). Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the invention to modify Zhang's teaching of an untrusted list to also include Shatz's teaching of storing a trusted list, in order to allow the system not only to avoid the bad UE but also to know which trusted/good UEs to interact with.
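Maintaining both a trusted list and an untrusted list, as discussed for claims 9 and 36, can be sketched minimally as follows; the data structure is an illustrative assumption of the editor, not Shatz's implementation:

```python
# Illustrative trusted/untrusted UE lists: a UE moves between the two
# sets as its trust determination changes.
trusted, untrusted = set(), set()

def record(ue_id, is_trusted):
    """File ue_id under one list and remove it from the other."""
    (trusted if is_trusted else untrusted).add(ue_id)
    (untrusted if is_trusted else trusted).discard(ue_id)

record("ue1", True)
record("ue2", False)
record("ue1", False)   # ue1 re-classified as untrusted
```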
Response to Arguments
Applicant’s arguments with respect to claim(s) 1-12, 15-16 and 28-43 have been considered but are moot in view of the new ground of rejection.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DUNG L LAM whose telephone number is (571) 272-6497. The examiner can normally be reached Monday-Thursday, 9:00 am-5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matthew Anderson can be reached on 571-272-4177. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DUNG L LAM/Examiner, Art Unit 2646
/MATTHEW D. ANDERSON/Supervisory Patent Examiner, Art Unit 2646