DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Election/Restrictions
Applicant’s election without traverse of claims 1-8 in the reply filed on 2025-12-29 is acknowledged. Claims 9-30 are withdrawn from further consideration pursuant to 37 CFR 1.142(b) as being drawn to a nonelected invention. New claims 31-52 fall within the elected invention and are examined herein.
Claim Objections
Claim 42 is objected to because of the following informalities: in line 1, “wherein means for receiving” should read “wherein the means for receiving”. Appropriate correction is required.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. The following limitations recite “means” plus functional language, are being treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, and are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof:
“means for receiving” in claims 39, 40, 41, 42, 43, and 46 corresponds to the receiver 610 of Fig. 6 and ¶¶0228-0240, the receivers 710 and 725 of Fig. 7 and ¶¶0241-0247, the receiver 825 of Fig. 8 and ¶¶0248-0259, and/or the transceiver 910 and/or antenna 915 of Fig. 9 and ¶¶0290-0301, and functional equivalents;
“means for transmitting” in claims 39, 45, and 46 corresponds to the transmitter 615 of Fig. 6 and ¶¶0228-0240, the transmitters 715 and 730 of Fig. 7 and ¶¶0241-0247, the transmitter 830 of Fig. 8 and ¶¶0248-0259, and/or the transceiver 910 and/or antenna 915 of Fig. 9 and ¶¶0290-0301, and functional equivalents; and
“means for updating” in claims 39 and 46 corresponds to the communications manager 620 (which may include a processor) of Fig. 6 and ¶¶0228-0240, the communications manager 720 of Fig. 7 and ¶¶0241-0247, the communications manager 820 of Fig. 8 and ¶¶0248-0259, and/or the communications manager 920, memory 925, and/or processor 935 of Fig. 9 and ¶¶0290-0301, and functional equivalents.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-7, 31-37, 39-45, and 47-52 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Publication No. 2022/0343109 to Haile in view of U.S. Patent Publication No. 2024/0048988 to Pravinchandra Bhatt et al. (“Pravinchandra Bhatt”) and U.S. Patent Publication No. 2020/0082270 to Gu et al. (“Gu”).
As to claim 1 (and similarly applied to claims 31, 39, and 47), Haile discloses an apparatus for wireless communications at a first network entity, comprising: one or more processors; memory coupled with the one or more processors; and instructions stored in the memory and executable by the one or more processors (Figs. 3 and 4, Data Pipeline System 312; ¶0044) to cause the apparatus to: receive a first dataset (Figs. 3 and 4; ¶¶0048-0058. Specifically, ¶0052, "data sources 301 transfer additional unprocessed data sets to data pipeline system 312"); transmit, to a second network entity, the first dataset for a legitimacy test of the first dataset, the legitimacy test to determine a validity of the first dataset based at least in part on at least one second dataset (Figs. 3 and 4; ¶¶0048-0058. Specifically, ¶¶0053-0054); receive, from the second network entity, a message comprising information associated with a result of the legitimacy test of the first dataset (Figs. 3 and 4; ¶¶0048-0058. Specifically, ¶0054, "Application 333 transfers error notifications to database 321 and data pipeline system 312").
Haile does not disclose: that the dataset is for a predictive model, the first dataset corresponding to one or more measurements associated with a user equipment (UE); update the predictive model using one or more datasets based at least in part on the information, wherein the one or more datasets comprise the first dataset or exclude the first dataset based at least in part on the result of the legitimacy test.
However, Pravinchandra Bhatt discloses: that the dataset is for a predictive model (Fig. 11, ¶¶0062-0063, and ¶¶0139-0140), the first dataset corresponding to one or more measurements associated with a user equipment (UE) (Fig. 11, ¶0063, and ¶¶0139-0140).
Additionally, Gu discloses: update the predictive model using one or more datasets based at least in part on the information, wherein the one or more datasets comprise the first dataset or exclude the first dataset based at least in part on the result of the legitimacy test (Fig. 5, steps 530-555 and ¶¶0101-0102).
Haile, Pravinchandra Bhatt, and Gu are considered analogous art to the claimed invention because they are in one or more of the same fields: computing arrangements based on machine learning models, including network architecture such as interconnection topology or combinations of networks; computing arrangements based on specific computational models, i.e., machine learning; security arrangements, authentication, and protecting privacy or anonymity in wireless communications networks, including the detection or prevention of fraud; and/or supervised, distributed, or federated learning. As such, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Haile to incorporate the teachings of Pravinchandra Bhatt to include: that the dataset is for a predictive model, the first dataset corresponding to one or more measurements associated with a user equipment (UE). Doing so would allow for "realizing improved robustness of artificial intelligence or machine learning capabilities against compromised input" (Pravinchandra Bhatt, ¶0001), which would improve cellular network optimizations such as "network energy saving, load balancing, and mobility optimization" (Pravinchandra Bhatt, ¶0004).
Additionally, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Haile to incorporate the teachings of Gu to include: update the predictive model using one or more datasets based at least in part on the information, wherein the one or more datasets comprise the first dataset or exclude the first dataset based at least in part on the result of the legitimacy test. Doing so would "provide a secure-trusted execution environment-based deep learning training system that achieves the goals of preserving training data privacy, denying poisoned data from illegitimate data sources, and generating accountable models" (Gu, ¶0025).
As to claim 2 (and similarly applied to claims 32, 40, and 48), Haile in view of Pravinchandra Bhatt and Gu discloses the apparatus of claim 1, wherein the instructions to receive the message are executable by the one or more processors to cause the apparatus to: receive, as part of the message (Haile, Figs. 3 and 4; ¶¶0048-0058, a message is sent back to the pipeline system 312 as a result of the validity test), an indication that the first dataset is to be included in the one or more datasets based at least in part on a success result of the legitimacy test of the first dataset, wherein the success result of the legitimacy test corresponds to the validity of the first dataset being valid (Gu, Fig. 5, steps 530-555 and ¶¶0101-0102; the datasets that pass the checks are used for training the ML model).
As to claim 3 (and similarly applied to claims 33, 41, and 49), Haile in view of Pravinchandra Bhatt and Gu discloses the apparatus of claim 1, wherein the instructions to receive the message are executable by the one or more processors to cause the apparatus to: receive, as part of the message (Haile, Figs. 3 and 4; ¶¶0048-0058, a message is sent back to the pipeline system 312 as a result of the validity test), an indication that the first dataset is to be excluded from the one or more datasets based at least in part on a failure result of the legitimacy test of the first dataset, wherein the failure result of the legitimacy test corresponds to the validity of the first dataset being corrupt (Gu, Fig. 5, steps 530-555 and ¶¶0101-0102; the datasets that do not pass the checks are excluded from training the ML model).
As to claim 4 (and similarly applied to claims 34, 42, and 50), Haile in view of Pravinchandra Bhatt and Gu discloses the apparatus of claim 1, wherein the instructions to receive the message are executable by the one or more processors to cause the apparatus to: receive, as part of the message, one or more performance metrics associated with the first dataset based at least in part on the legitimacy test, wherein the one or more performance metrics comprise a performance relation metric associated with the first dataset and the at least one second dataset, a performance difference associated with the first dataset and the at least one second dataset, or a combination thereof (Haile, Figs. 3 and 4; ¶¶0054-0058).
As to claim 5 (and similarly applied to claims 35, 43, and 51), Haile in view of Pravinchandra Bhatt and Gu discloses the apparatus of claim 1, wherein the legitimacy test comprises a second predictive model, and wherein the instructions to receive the message are executable by the one or more processors to cause the apparatus to: receive, as part of the message, an indication that the result of the legitimacy test is based at least in part on a performance metric associated with the second predictive model using the first dataset and the at least one second dataset (Haile, Figs. 3 and 4; ¶¶0048-0058).
As to claim 6 (and similarly applied to claims 36, 44, and 52), Haile in view of Pravinchandra Bhatt and Gu discloses the apparatus of claim 5, wherein the message further indicates that the second predictive model is trained on the at least one second dataset and indicates that the first dataset is used as test data for the second predictive model (Haile, Figs. 3 and 4; ¶0054; the alert indicates that the data set is too dissimilar from the gold standards 335, i.e., the alert indicates that the second predictive model was trained on the second data set (gold standards 335), and the output data set (first data set) was used as test data), or indicates that the first dataset is used as an input dataset to train the second predictive model and indicates that the at least one second dataset is used as test data for the second predictive model.
As to claim 7 (and similarly applied to claims 37 and 45), Haile in view of Pravinchandra Bhatt and Gu discloses the apparatus of claim 1, wherein the instructions are further executable by the one or more processors to cause the apparatus to: transmit, to the second network entity, a second message indicating a request for the second network entity to perform the legitimacy test of the first dataset, wherein receiving the message is based at least in part on the request (Haile, Figs. 3 and 4; ¶0053).
Claims 8, 38, and 46 are rejected under 35 U.S.C. 103 as being unpatentable over Haile in view of Pravinchandra Bhatt and Gu, and further in view of U.S. Patent Publication No. 2024/0121161 to Chen et al. (“Chen”).
As to claim 8 (and similarly applied to claims 38 and 46), Haile in view of Pravinchandra Bhatt and Gu discloses the apparatus of claim 1.
Haile in view of Pravinchandra Bhatt and Gu does not disclose: wherein the instructions are further executable by the one or more processors to cause the apparatus to: transmit a second message indicating a request for datasets associated with one or more performance metrics that satisfy a performance threshold; receive a third message comprising a dataset based at least in part on the request; and update the predictive model using the dataset.
However, Chen discloses: wherein the instructions are further executable by the one or more processors to cause the apparatus to: transmit a second message indicating a request for datasets associated with one or more performance metrics that satisfy a performance threshold; receive a third message comprising a dataset based at least in part on the request; and update the predictive model using the dataset (Fig. 3 and ¶¶0066-0075).
Haile, Pravinchandra Bhatt, Gu, and Chen are considered analogous art to the claimed invention because they are in one or more of the same fields: computing arrangements based on machine learning models, including network architecture such as interconnection topology or combinations of networks; computing arrangements based on specific computational models, i.e., machine learning; security arrangements, authentication, and protecting privacy or anonymity in wireless communications networks, including the detection or prevention of fraud; and/or supervised, distributed, or federated learning. As such, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Haile in view of Pravinchandra Bhatt and Gu to incorporate the teachings of Chen to include: wherein the instructions are further executable by the one or more processors to cause the apparatus to: transmit a second message indicating a request for datasets associated with one or more performance metrics that satisfy a performance threshold; receive a third message comprising a dataset based at least in part on the request; and update the predictive model using the dataset. Doing so would "enable a base station and a core network to exchange data analytics information and make statistics and prediction according to the data analytics information, thereby realizing network optimization on the core network side and the base station side and improving the quality of network communication" (Chen, ¶0004).
References Cited
Chen, Jiajun et al. (2024). Network optimization methods and apparatus, electronic device and storage medium (US 2024/0121161 A1). Filed 2022-06-14.
Gu, Zhongshu et al. (2020). Verifiable deep learning training service (US 2020/0082270 A1). Filed 2018-09-07.
Haile, J. Mitchell (2022). System and method for automatic data consistency checking using automatically defined rules (US 2022/0343109 A1). Filed 2022-04-18.
Pravinchandra Bhatt, R. et al. (2024). Robustness of artificial intelligence or machine learning capabilities against compromised input (US 2024/0048988 A1). Filed 2023-08-01.
Other Pertinent References
The following prior art made of record is considered pertinent to applicant’s disclosure:
Albert, Samuel et al. (2022). Generating and calibrating signal strength prediction in a wireless network (US 20220053345 A1). Filed 2021-08-03.
Anderson, Blake Harrell et al. (2019). Detecting dataset poisoning attacks independent of a learning algorithm (US 20190251479 A1). Filed 2018-02-09.
Baracaldo-Angel, Nathalie et al. (2023). Detecting and mitigating poison attacks using data provenance (US 11689566 B2). Filed 2018-07-10.
Elisha, Oren et al. (2026). System and method for improving machine learning models by detecting and removing inaccurate training data (US 12524707 B2). Filed 2024-01-30.
Ezrielev, Ofir et al. (2025). System and method for proactively identifying poisoned training data used to train artificial intelligence models (US 12481793 B2). Filed 2022-12-29.
Farhady Ghalaty, Nahid et al. (2020). System and method for facilitating prediction model training (US 10867245 B1). Filed 2019-10-17.
Haile, J. Mitchell (2025). System and method for automatic data consistency checking using automatically defined rules (US 12306907 B2). Filed 2022-04-18.
Han, Yufei et al. (2023). Systems and methods for utilizing federated machine-learning to protect against potentially malicious data (US 11783031 B1). Filed 2020-03-31.
Jung, Namsoon (2021). Generating training and validation data for machine learning (US 20210056412 A1). Filed 2020-02-28.
Khan, Tufail Ahmed et al. (2024). Database management systems and methods for datasets (US 12164503 B1). Filed 2023-11-30.
Lee, Jong Hwa et al. (2021). Cleaning dataset for neural network training (US 20210303923 A1). Filed 2021-02-08.
Liu, Changwei et al. (2023). Identifying and correcting vulnerabilities in machine learning models (US 20230274003 A1). Filed 2022-02-28.
Patel, Hima et al. (2022). Quality assessment of machine-learning model dataset (US 20220101182 A1). Filed 2020-09-28.
Reddy, Vishruth et al. (2022). Methods and systems for preventing corruption of stateful data (US 20220335021 A1). Filed 2021-04-16.
Saxena, Sharoon et al. (2023). Machine learning pipeline with data quality framework (US 20230419130 A1). Filed 2022-06-28.
Svennebring, Jonas et al. (2019). Link performance prediction technologies (US 20190319868 A1). Filed 2019-06-25.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SAMUEL H LEONARD whose telephone number is (571)272-5720. The examiner can normally be reached Monday – Friday, 7am – 4pm (PT).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant may use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Yuwen (Kevin) Pan can be reached at (571)272-7855. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SAMUEL H. LEONARD/
Examiner, Art Unit 2649

/YUWEN PAN/
Supervisory Patent Examiner, Art Unit 2649