Prosecution Insights
Last updated: April 19, 2026
Application No. 17/653,435

METHOD AND APPARATUS FOR SUPPORT OF MACHINE LEARNING OR ARTIFICIAL INTELLIGENCE TECHNIQUES IN COMMUNICATION SYSTEMS

Non-Final OA: §102, §103, §112
Filed: Mar 03, 2022
Examiner: WONG, WILLIAM
Art Unit: 2144
Tech Center: 2100 — Computer Architecture & Software
Assignee: Samsung Electronics Co., Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 30% (At Risk)
Expected OA Rounds: 1-2
Time to Grant: 4y 11m
With Interview: 57%

Examiner Intelligence

Grants only 30% of cases.
Career Allow Rate: 30% (120 granted / 397 resolved; -24.8% vs TC avg)
Interview Lift: +26.9% (strong)
Avg Prosecution: 4y 11m
Currently Pending: 33
Career History: 430 total applications across all art units
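As a sanity check on the headline figures, assuming (hypothetically) that the tool derives the "with interview" number by simply adding the interview lift to the career allow rate:

```python
# Hypothetical reconstruction of the dashboard's headline numbers.
granted, resolved = 120, 397          # from the examiner's career data
allow_rate = granted / resolved       # career allow rate
interview_lift = 26.9                 # percentage points, per the dashboard

print(f"{allow_rate:.1%}")                          # 30.2%, shown as 30%
print(f"{allow_rate * 100 + interview_lift:.0f}%")  # 57%
```

Under that assumption the 30%, +26.9%, and 57% figures are mutually consistent.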

Statute-Specific Performance

§101: 11.4% (-28.6% vs TC avg)
§103: 45.8% (+5.8% vs TC avg)
§102: 14.3% (-25.7% vs TC avg)
§112: 23.5% (-16.5% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 397 resolved cases
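The per-statute deltas above are mutually consistent with a single flat Tech Center baseline, which matches the note about the black line; a quick check (values transcribed from the chart, the subtraction convention is an assumption):

```python
# Back out the implied Tech Center averages from the per-statute rates
# and their "vs TC avg" deltas (all values in percent).
examiner = {"101": 11.4, "103": 45.8, "102": 14.3, "112": 23.5}
delta    = {"101": -28.6, "103": +5.8, "102": -25.7, "112": -16.5}

tc_avg = {s: round(examiner[s] - delta[s], 1) for s in examiner}
print(tc_avg)  # {'101': 40.0, '103': 40.0, '102': 40.0, '112': 40.0}
```

All four statutes imply the same ~40% Tech Center average estimate.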

Office Action

Rejections under §102, §103, and §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is in response to communications filed on 03/03/2022. Claims 1-20 are pending and have been examined.

Priority
Applicant’s claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged.

Information Disclosure Statement
The information disclosure statement (IDS) was filed on 09/16/2022. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Objections
Claims 7, 14, and 20 are objected to because of the following informalities: as per claim 7, it appears that the word “or” should be inserted at the end of line 8 (after “…outputs,”). This similarly applies to claims 14 and 20. Appropriate correction is required.

Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 6-7, 13-14, and 20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
As per claim 6, there is lack of antecedent basis for “the ML approach for the corresponding operation” in line 3. There is lack of antecedent basis for “the corresponding operation(s)” in line 5. This similarly applies to claims 13 and 20. Claims 7 and 14 depend on claims 6 and 13 and thus are also indefinite.

Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-2, 4, 8-9, 11, 15-16 and 18 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Taherzadeh Boroujeni et al. (US 20220103221 A1).

As per independent claim 1, Taherzadeh Boroujeni teaches a user equipment (UE), comprising: a transceiver (e.g. in paragraph 47, “UE 120 includes a transceiver”) configured to receive, from a base station, machine learning/artificial intelligence (ML/AI) configuration information including one or more of enabling/disabling an ML approach for one or more operations, one or more ML models to be used for the one or more operations, trained model parameters for the one or more ML models, or whether ML model parameters received from the UE at the base station will be used (e.g. in paragraphs 57, 67, and 77, “base station 110 may transmit a machine learning component to the UEs… machine learning component for any number of different types of operations, transmissions, and/or user experience enhancements, among other examples… the base station 110 may transmit, and the UE 120 may receive, a configuration associated with a machine learning component”); and a processor operatively coupled to the transceiver (e.g. in paragraph 47, “controller/processor”), the processor configured to generate assistance information for updating the one or more ML models based on at least a portion of the configuration information (e.g. in paragraph 64, “the update may include an updated set of model parameters w^(n), a difference between the updated set of model parameters w^(n) and a prior set of model parameters w^(n-1), the set of gradients g_k^(n), and/or an updated machine learning component (e.g., an updated neural network model), among other examples”), wherein the transceiver is further configured to transmit the assistance information to the base station (e.g. in paragraph 65, “UEs 120 may each transmit their respective local updates… base station 110…may aggregate the updates received from the UEs” and figure 3).
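For orientation, the per-UE update the cited paragraphs describe (updated weights w^(n), the difference from w^(n-1), and gradients g_k^(n)) can be sketched as follows; the function name, learning rate, and toy values are hypothetical illustrations, not taken from the reference:

```python
# Hypothetical sketch of the UE-side federated update: one local
# gradient step, then a report carrying the three update forms the
# reference enumerates (new weights, delta, and gradients).
def local_update(w_prev, gradients, lr=0.5):
    """One local training step; returns the report fields."""
    w_new = [w - lr * g for w, g in zip(w_prev, gradients)]   # w^(n)
    delta = [a - b for a, b in zip(w_new, w_prev)]            # w^(n) - w^(n-1)
    return {"weights": w_new, "delta": delta, "gradients": gradients}

report = local_update([1.0, 2.0], [0.5, 0.25])
print(report["weights"])  # [0.75, 1.875]
```

The UE would transmit `report` (or any one of its fields) as the assistance information; the base station aggregates reports from many UEs.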
As per claim 2, the rejection of claim 1 is incorporated and Taherzadeh Boroujeni further teaches wherein one of the processor is further configured to perform an inference regarding the one or more operations based on the configuration information and local data, or the transceiver is configured to receive, from the base station, control signaling based on an inference result, the control signaling including one of a command based on the inference result and updated configuration information (e.g. in paragraphs 59 and 61, “UEs 120 may each locally train the machine learning component using training data collected by the UEs… each be configured to provide updates to the base station 110 multiple times (e.g., periodically, on demand, and/or upon updating a local machine learning component… update may include any updated information, determined based at least in part on a training procedure associated with the machine learning component. An update may include, for example, an updated machine learning component (e.g., an updated neural network model), a set of updated parameters (e.g., a set of updated weights of a neural network), a set of gradients associated with a loss function of the machine learning component”).

As per claim 4, the rejection of claim 1 is incorporated and Taherzadeh Boroujeni further teaches wherein the configuration information specifies a federated learning ML model to be used for the one or more operations (e.g. in paragraphs 54, 57, 67, and 77, “federated learning for machine learning components… base station 110 may transmit a machine learning component to the UEs… machine learning component for any number of different types of operations, transmissions, and/or user experience enhancements, among other examples… the base station 110 may transmit, and the UE 120 may receive, a configuration associated with a machine learning component”), the federated learning ML model involving model training at the UE based on local data available at UE and reporting of updated model parameters according to the configuration information (e.g. in paragraphs 59 and 61, “UEs 120 may each locally train the machine learning component using training data collected by the UEs… update may include any updated information, determined based at least in part on a training procedure associated with the machine learning component. An update may include, for example, an updated machine learning component (e.g., an updated neural network model), a set of updated parameters (e.g., a set of updated weights of a neural network), a set of gradients associated with a loss function of the machine learning component”).

Claims 8-9 and 11 are the method claims corresponding to UE claims 1-2 and 4, and are rejected under the same reasons set forth.

Claims 15 and 18 are the base station claims corresponding to UE claims 1 and 4, and are rejected under the same reasons set forth, and Taherzadeh Boroujeni further teaches a base station (BS), comprising: a processor configured to generate and a transceiver operatively coupled to the processor and configured to transmit and receive (e.g. in paragraphs 11 and 48, “one or more processors of a server device, may cause the server device to transmit, to a client device, a configuration associated with a machine learning component… to receive…feedback from the client device… base station 110 includes a transceiver… transceiver may be used by a processor”).
As per claim 16, the rejection of claim 15 is incorporated and Taherzadeh Boroujeni further teaches wherein one of the transceiver is further configured to receive, from the UE, an inference regarding the one or more operations based on the configuration information and local data at the UE, the processor is further configured to perform an inference regarding the one or more operations based on assistance information received from the one or more UEs including the UE, or the transceiver is further configured to receive an inference regarding the one or more operations based on the assistance information received from the one or more UEs from another network entity (e.g. in paragraphs 59, 61, and 66, “UEs 120 may each locally train the machine learning component using training data collected by the UEs… each be configured to provide updates to the base station 110 multiple times (e.g., periodically, on demand, and/or upon updating a local machine learning component… update may include any updated information, determined based at least in part on a training procedure associated with the machine learning component. An update may include, for example, an updated machine learning component (e.g., an updated neural network model), a set of updated parameters (e.g., a set of updated weights of a neural network), a set of gradients associated with a loss function of the machine learning component… update the global machine learning component by normalizing the local datasets by treating each dataset size (e.g., represented by D_k) as being equal”).

Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 3, 10, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Taherzadeh Boroujeni et al. (US 20220103221 A1) in view of Wang et al. (US 20210326726 A1).
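The aggregation quoted for claim 16 (treating each dataset size D_k as equal) reduces to a plain unweighted average of the per-UE updates; a minimal sketch, with all names and values hypothetical:

```python
# Hypothetical sketch of base-station aggregation with equal D_k:
# the global weights are the element-wise mean of the UE weight vectors.
def aggregate(updates):
    """FedAvg with equal dataset sizes: element-wise mean over UEs."""
    n = len(updates)
    return [sum(ws) / n for ws in zip(*updates)]

global_w = aggregate([[0.4, 1.0], [0.6, 3.0]])  # two UEs' local weights
print(global_w)  # [0.5, 2.0]
```

With unequal D_k the mean would instead be weighted by each UE's dataset size.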
As per claim 3, the rejection of claim 1 is incorporated and Taherzadeh Boroujeni further teaches wherein the assistance information comprises at least one of local data regarding the UE, including one or more of UE location, UE trajectory, or estimated downlink (DL) channel status, inference results regarding the one or more operations, or updated model parameters based on local training of the one or more ML models, for updating the one or more ML models (e.g. in paragraphs 59 and 61, “UEs 120 may each locally train the machine learning component using training data collected by the UEs… update may include any updated information, determined based at least in part on a training procedure associated with the machine learning component. An update may include, for example, an updated machine learning component (e.g., an updated neural network model), a set of updated parameters (e.g., a set of updated weights of a neural network), a set of gradients associated with a loss function of the machine learning component”), and reporting of the assistance information is triggered periodically, aperiodically, or semi-persistently (e.g. in paragraphs 59 and 61, “UEs…each be configured to provide updates to the base station 110 multiple times (e.g., periodically, on demand, and/or upon updating a local machine learning component”), but does not specifically teach the assistance information is reported using L1/L2 including one of an uplink control information (UCI), a medium access control (MAC) control element (MAC-CE), a physical uplink control channel (PUCCH), a physical uplink shared channel (PUSCH), or a physical random access channel (PRACH).

However, Wang teaches assistance information being reported using L1/L2 including one of an uplink control information (UCI), a medium access control (MAC) control element (MAC-CE), a physical uplink control channel (PUCCH), a physical uplink shared channel (PUSCH), or a physical random access channel (PRACH) (e.g. in paragraph 36, “the transceiver configured to communicate the report may be further configured to transmit, to the BS in a first subband of a plurality of subbands, sampled data for updating the machine learning-based network. In some aspects, the first subband includes a plurality of physical uplink shared channels (e.g., PUSCHs)”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Taherzadeh Boroujeni to include the teachings of Wang because one of ordinary skill in the art would have recognized the benefit of facilitating communications (this also amounts to a simple substitution that yields predictable results [e.g. see KSR Int'l Co. v. Teleflex Inc., 550 U.S. 398, 82 USPQ2d 1385, 1396 (2007) and MPEP 2143(B)]).

Claims 10 and 17 correspond to claim 3, and are rejected under the same reasons set forth.

Claims 5, 12, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Taherzadeh Boroujeni et al. (US 20220103221 A1) in view of Pezeshki et al. (US 20220116764 A1).

As per claim 5, the rejection of claim 1 is incorporated, but Taherzadeh Boroujeni does not specifically teach wherein the transceiver is configured to transmit, to the base station, UE capability information for use by the base station in generating the configuration information, the UE capability information including one or more of support by the UE for the ML approach for the one or more operations, and support by the UE for model training at the UE based on local data available at UE. However, Pezeshki teaches transmitting UE capability information for use by a base station in generating configuration information, the UE capability information including one or more of support by the UE for an ML approach for one or more operations, and support by the UE for model training at the UE based on local data available at UE (e.g. in paragraphs 31 and 97-105, “each UE trains the model locally, and sends back either updated neural network model weights or gradient updates… The base station receives the updates… sends the updated model to the UEs, and the process repeats, round after round, until a desired performance level from the global model is obtained… machine learning hardware capabilities of the UE, such as capabilities of the GPU, NPU,... a number of operations per second or a number of multiply-accumulate (MAC) operations per second, etc… base station may decide whether a reporting UE is a fast UE or a slow UE based on the reported machine learning hardware capability. The base station may schedule the UEs according to speed ranges… parameters affecting the turnaround time include a learning rate for local training… if the UE is in power savings mode, the UE may decide not to participate in federated learning…setting the turnaround time to infinity”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Taherzadeh Boroujeni to include the teachings of Pezeshki because one of ordinary skill in the art would have recognized the benefit of obtaining improved performance.

Claims 12 and 19 correspond to claim 5, and are rejected under the same reasons set forth.

Claims 6 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Taherzadeh Boroujeni et al. (US 20220103221 A1) in view of Ryu et al. (US 20210328630 A1).
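The capability-based scheduling Pezeshki is quoted for can be illustrated with a toy classifier; the threshold, names, and numbers below are assumptions for illustration, not values from the reference:

```python
# Hypothetical sketch: UEs report ML hardware capability (MAC ops/sec)
# and a turnaround time; the base station buckets them as fast/slow,
# and a UE in power-savings mode opts out by reporting an infinite
# turnaround time, as the quoted passage describes.
import math

def classify(ue_reports, fast_threshold=1e12):
    fast, slow, opted_out = [], [], []
    for ue, (macs_per_sec, turnaround) in ue_reports.items():
        if math.isinf(turnaround):
            opted_out.append(ue)   # power-savings: skip federated learning
        elif macs_per_sec >= fast_threshold:
            fast.append(ue)
        else:
            slow.append(ue)
    return fast, slow, opted_out

print(classify({"ue1": (2e12, 0.1), "ue2": (5e11, 0.5), "ue3": (2e12, math.inf)}))
```

The base station could then schedule update rounds per speed bucket rather than waiting on the slowest UE.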
As per claim 6, the rejection of claim 1 is incorporated, but Taherzadeh Boroujeni does not specifically teach wherein the configuration information includes one or more of N indices each corresponding to a different one of the one or more operations and indicating enabling or disabling of the ML approach for the corresponding operation, M indices each corresponding to a different one of M predefined ML algorithms and indicating an ML algorithm to be employed for the corresponding operation(s), or K indices each corresponding to a different one of K predefined ML operation modes and indicating an ML operation mode to be employed, each of the ML operation modes including one or more operations, an ML algorithm to be employed for a corresponding one of the one or more operations, and ML model parameters for the ML algorithm to be employed for the corresponding one of the one or more operations. However, Ryu teaches configuration information including one or more of N indices each corresponding to a different one of one or more operations and indicating enabling or disabling of a ML approach for a corresponding operation, M indices each corresponding to a different one of M predefined ML algorithms and indicating an ML algorithm to be employed for corresponding operation(s), or K indices each corresponding to a different one of K predefined ML operation modes and indicating an ML operation mode to be employed, each of the ML operation modes including one or more operations, an ML algorithm to be employed for a corresponding one of the one or more operations, and ML model parameters for the ML algorithm to be employed for the corresponding one of the one or more operations (e.g. in paragraphs 53-57, “base station may develop a number of different predictive models for each of a number of different functions that may be used to determine various beamforming parameters. For example, multiple neural network (NN), artificial intelligence (AI), or machine learning (ML) models may be generated for each of multiple different functions… provide the models to a UE, and the UE may then use such models… providing predictive models and indications of a model to use at a UE may provide enhanced efficiency and reliability”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Taherzadeh Boroujeni to include the teachings of Ryu because one of ordinary skill in the art would have recognized the benefit of enhancing efficiency and/or reliability.

Claim 13 corresponds to claim 6, and is rejected under the same reasons set forth.

Claims 7, 14, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Taherzadeh Boroujeni et al. (US 20220103221 A1) in view of Ryu et al. (US 20210328630 A1), and further in view of Pezeshki et al. (US 20220116764 A1).

As per claim 7, the rejection of claim 6 is incorporated, but the combination does not specifically teach wherein one of the ML algorithm comprises supervised learning and the ML model parameters comprise features, weights, and regularization, the ML algorithm comprises reinforcement learning and the ML model parameters comprise a set of states, a set of actions, a state transition probability, or a reward function, the ML algorithm comprises a deep neural network and the ML model parameters comprise a number of layers, a number of neurons in each layer, weights and bias for each neuron, an activation function, inputs, or outputs, the ML algorithm comprises federated learning and the ML model parameters comprise whether the UE is configured for local training and/or reporting, a number of iterations for local training before polling, and local batch size.
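The index-based configuration recited in claims 6-7 (N per-operation enable indices, M algorithm indices, K predefined operation modes bundling operations, an algorithm, and its parameters) might be modeled as a nested structure like the following; all field names and values are hypothetical illustrations, not from the application:

```python
# Hypothetical model of the claimed configuration information.
ALGORITHMS = ["supervised", "reinforcement", "dnn", "federated"]  # M = 4

config = {
    "enable": {"csi_feedback": True, "beam_mgmt": False},  # N enable indices
    "algorithm_index": 3,                                  # one of M -> "federated"
    "mode": {                                              # one of K operation modes
        "operations": ["csi_feedback"],
        "algorithm": "federated",
        "params": {                                        # claim 7's federated set
            "local_training": True,
            "iterations_before_poll": 10,
            "local_batch_size": 32,
        },
    },
}

print(ALGORITHMS[config["algorithm_index"]])  # federated
```

A supervised, reinforcement, or DNN mode would carry its own parameter set (features/weights/regularization; states/actions/transitions/reward; layers/neurons/activations) in the same `params` slot.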
However, Pezeshki teaches one of an ML algorithm comprising supervised learning and ML model parameters comprising features, weights, and regularization, the ML algorithm comprising reinforcement learning and the ML model parameters comprising a set of states, a set of actions, a state transition probability, or a reward function, the ML algorithm comprising a deep neural network and the ML model parameters comprising a number of layers, a number of neurons in each layer, weights and bias for each neuron, an activation function, inputs, or outputs, the ML algorithm comprising federated learning and the ML model parameters comprising whether the UE is configured for local training and/or reporting, a number of iterations for local training before polling, and local batch size (e.g. in paragraphs 31, 97-105, and 140-142, “federated learning… parameters affecting the turnaround time include a learning rate for local training… a number of iterations (e.g., stochastic gradient descent iterations) needed before deriving and sending an update… batch size for local training… if the UE is in power savings mode, the UE may decide not to participate in federated learning…setting the turnaround time to infinity”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of the combination to include the teachings of Pezeshki because one of ordinary skill in the art would have recognized the benefit of obtaining improved performance.

Claims 14 and 20 correspond to claim 7, and are rejected under the same reasons set forth.

Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. For example, Bai et al. (US 20210091838 A1) teaches “UE 404 may then receive the downlink data and/or control information 446 from the base station 402 based on the at least one CSI 436 and the at least one parameter 438. For example, the base station 402 may determine the predicted CSI based on the at least one CSI 436 and the at least one parameter 438 (e.g., based on evaluation 442 of the predictive model), and the base station 402 may transmit information indicating a transmission configuration that is based on the predicted CSI to the UE” (e.g. in paragraph 89).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to WILLIAM WONG, whose telephone number is (571) 270-1399. The examiner can normally be reached Monday-Friday, 9am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, TAMARA KYLE, can be reached at (571) 272-4241. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/W.W/Examiner, Art Unit 2144
05/17/2025
/TAMARA T KYLE/Supervisory Patent Examiner, Art Unit 2144

Prosecution Timeline

Mar 03, 2022
Application Filed
May 17, 2025
Non-Final Rejection — §102, §103, §112
Aug 27, 2025
Response after Non-Final Action Filed

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12572252
CONTROLLING A 2D SCREEN INTERFACE APPLICATION IN A MIXED REALITY APPLICATION
2y 5m to grant Granted Mar 10, 2026
Patent 12530707
CUSTOMER EFFORT EVALUATION IN A CONTACT CENTER SYSTEM
2y 5m to grant Granted Jan 20, 2026
Patent 12511846
XR DEVICE-BASED TOOL FOR CROSS-PLATFORM CONTENT CREATION AND DISPLAY
2y 5m to grant Granted Dec 30, 2025
Patent 12504944
METHODS AND USER INTERFACES FOR SHARING AUDIO
2y 5m to grant Granted Dec 23, 2025
Patent 12423561
METHOD AND APPARATUS FOR KEEPING STATISTICAL INFERENCE ACCURACY WITH 8-BIT WINOGRAD CONVOLUTION
2y 5m to grant Granted Sep 23, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 30%
With Interview: 57% (+26.9%)
Median Time to Grant: 4y 11m
PTA Risk: Low
Based on 397 resolved cases by this examiner. Grant probability derived from career allow rate.
