Prosecution Insights
Last updated: April 19, 2026
Application No. 18/845,626

MODEL TRAINING AND DEPLOYING METHOD AND RELATED COMMUNICATION APPARATUS

Non-Final OA: §102, §103
Filed: Sep 10, 2024
Examiner: DAILEY, THOMAS J
Art Unit: 2458
Tech Center: 2400 (Computer Networks)
Assignee: BEIJING XIAOMI MOBILE SOFTWARE CO., LTD.
OA Round: 1 (Non-Final)
Grant Probability: 81% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 4m
Grant Probability with Interview: 95%

Examiner Intelligence

Career Allow Rate: 81% (694 granted / 859 resolved; +22.8% vs TC average, above average)
Interview Lift: +14.6% among resolved cases with an interview (moderate, roughly +15%)
Typical Timeline: 3y 4m average prosecution; 27 applications currently pending
Career History: 886 total applications across all art units
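The headline examiner figures above are simple ratios of the career counts shown on the card; a minimal sketch of the arithmetic (variable names are illustrative, and the rounding to whole percentages is an assumption):

```python
# Counts from the examiner card above (assumed to be the inputs
# behind the displayed percentages).
granted = 694    # applications granted by this examiner
resolved = 859   # total resolved cases (granted + abandoned)
pending = 27     # applications currently pending

allow_rate = granted / resolved          # career allow rate
total_applications = resolved + pending  # matches the 886 "Total Applications"

print(f"Career allow rate: {allow_rate:.1%}")       # 80.8%, displayed as 81%
print(f"Total applications: {total_applications}")  # 886
```

Note that the 886 total applications figure is exactly the 859 resolved cases plus the 27 currently pending, which suggests the card is internally consistent.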

Statute-Specific Performance

§101: 11.8% (-28.2% vs TC avg)
§102: 18.8% (-21.2% vs TC avg)
§103: 50.3% (+10.3% vs TC avg)
§112: 11.5% (-28.5% vs TC avg)

Tech Center averages are estimates. Based on career data from 859 resolved cases.

Office Action

Rejections under 35 U.S.C. §102 and §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Claims 1, 2, 4-6, 8-11, 13, 17, 18, 20-22, 24, 25, 27, 28, and 32 are pending. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Information Disclosure Statement

The information disclosure statements (IDS) submitted prior to this Office Action are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 2, 4-6, 8-10, 13, 17, 18, 20-22, 24, 25, 27, 28, and 32 are rejected under 35 U.S.C. 102(a)(1)/(2) as being anticipated by Yoo et al. (US Pub. No. 2021/0266787; cited on IDS), hereafter, “Yoo.”

As to claim 1, Yoo discloses a model training and deploying method, performed by a user equipment (UE) (Fig.
4, label 415 and [0108]), the method comprising: reporting capability information to a network device, the capability information being used for indicating support capabilities of the UE for at least one of an Artificial Intelligence (AI) or a Machine Learning (ML) (Fig. 4, labels 405 (“network device”), 420 and [0109], particularly, “At 420, UE 415 may transmit a capability message to base station 405. For example, UE 415 may transmit a message indicating that UE 415 has the capability of compressing a number of measurements using an encoder NN.”); acquiring at least one of model information of an encoder model to be trained or model information of a decoder model to be trained, sent by the network device (Fig. 4, label 425 and [0110]-[0111], particularly, “Additionally or alternatively, base station 405 may transmit an indication of the measurements on which to perform compression. In some examples, the configuration message may indicate a periodicity for NN decoder coefficient feedback. Additionally or alternatively, the configuration message may indicate a priority rule for the UE transmitting an encoder output.”); generating the encoder model and the decoder model based on the at least one of the model information of the encoder model to be trained or the model information of the decoder model to be trained (Fig. 4 and [0110]-[0111], particularly, “In some examples, the configuration message may indicate a periodicity for NN decoder coefficient feedback…In some examples, UE 415 may include an auto-encoder with an encoder NN and a decoder NN. In some cases, UE 415 may train the encoder NN based on historical measurements at UE 415, a performance metric from UE 415, or both. Additionally or alternatively, UE 415 may train the decoder NN at the UE 415 based on the trained encoder NN.”) ; and sending the model information of the decoder model to the network device, the model information of the decoder model being used for deploying the decoder model (Fig. 
4 and [0110]-[0111], particularly, “In some cases, UE 415 may transmit an indication of the trained decoder NN to base station 405. For example, at 430, UE 415 may transmit a set of NN decoder coefficients to base station 405 (e.g., one or more decoder coefficients based on the trained NN). For example, UE 415 may periodically transmit NN decoder coefficients to base station 405 based on the periodicity for NN decoder coefficient feedback configured at 425.”).

As to claim 17, Yoo discloses a model deploying method, performed by a network device (Fig. 4, label 405 and [0108]), the method comprising: acquiring capability information reported by a user equipment (UE), the capability information being used for indicating support capabilities of UE for at least one of an Artificial Intelligence (AI) or a Machine Learning (ML) (Fig. 4, labels 415 (“UE”), 420 and [0109], particularly, “At 420, UE 415 may transmit a capability message to base station 405. For example, UE 415 may transmit a message indicating that UE 415 has the capability of compressing a number of measurements using an encoder NN.”); sending at least one of model information of an encoder model to be trained or model information of a decoder model to be trained to the UE based on the capability information (Fig. 4, label 425 and [0110]-[0111], particularly, “Additionally or alternatively, base station 405 may transmit an indication of the measurements on which to perform compression. In some examples, the configuration message may indicate a periodicity for NN decoder coefficient feedback. Additionally or alternatively, the configuration message may indicate a priority rule for the UE transmitting an encoder output.”); acquiring model information of the decoder model sent by the UE, the model information of the decoder model being used for deploying the decoder model (Fig.
4 and [0110]-[0111], particularly, “In some cases, UE 415 may transmit an indication of the trained decoder NN to base station 405. For example, at 430, UE 415 may transmit a set of NN decoder coefficients to base station 405 (e.g., one or more decoder coefficients based on the trained NN). For example, UE 415 may periodically transmit NN decoder coefficients to base station 405 based on the periodicity for NN decoder coefficient feedback configured at 425.”); and generating the decoder model based on the model information of the decoder model (Fig. 4, [0110]-[0111], and [0117], particularly, “In some cases, UE 415 may transmit an indication of the trained decoder NN to base station 405. For example, at 430, UE 415 may transmit a set of NN decoder coefficients to base station 405 (e.g., one or more decoder coefficients based on the trained NN). For example, UE 415 may periodically transmit NN decoder coefficients to base station 405 based on the periodicity for NN decoder coefficient feedback configured at 425…At 460, base station 405 may decompress the encoder output from UE 415 using a decoder NN to obtain a number of measurements. The decoder NN may be based on the NN decoder coefficients received at 430. The measurements may correspond to a number of bits for reporting the measurements that is greater than the number of bits included in the encoder output.”).

As to claim 32, it is rejected by similar rationale to that set forth in the rejection of claim 1.

As to claims 2 and 18, Yoo discloses a model comprises at least one of: an AI model; or an ML model; or wherein the capability information comprises at least one of: whether the UE supports the AI; whether the UE supports the ML; at least one of a type of an AI model or a type of an ML model supported by the UE (Fig. 4, labels 415 (“UE”), 420 and [0109], particularly, “At 420, UE 415 may transmit a capability message to base station 405.
For example, UE 415 may transmit a message indicating that UE 415 has the capability of compressing a number of measurements using an encoder NN.”); or maximum support capability information of the UE for a model, the maximum support capability information comprising structural information of a most complex model supported by the UE.

As to claim 4, Yoo discloses generating the encoder model and the decoder model based on the at least one of the model information of the encoder model to be trained or the model information of the decoder model to be trained comprises: at least one of: deploying the encoder model to be trained based on the model information of the encoder model to be trained, or deploying the decoder model to be trained based on the model information of the decoder model to be trained; determining sample data based on at least one of measurement information or historical measurement information of the UE; and training the at least one of the encoder model to be trained or the decoder model to be trained based on the sample data to generate the encoder model and the decoder model (Fig. 4 and [0110]-[0111], [0117]).

As to claims 5 and 21, Yoo discloses the model information of the decoder model comprises at least one of: a model type of a model; or model parameters of a model (Fig. 4 and [0110]-[0111], [0117]).

As to claims 6 and 22, Yoo discloses acquiring indication information sent by the network device, the indication information being used for indicating an information type when the UE reports to the network device; the information type comprising at least one of original reported information that is not encoded by the encoder model or information obtained by encoding, with the encoder model, the original reported information (Fig.
4 and [0110]-[0111]); and reporting to the network device based on the indication information; wherein the reported information is information reported by the UE to the network device; the reported information comprises Channel State Information (CSI) information ([0107]-[0111]); and wherein the CSI information comprises at least one of: channel information; feature matrix information of a channel; feature vector information of the channel; Precoding Matrix Indicator (PMI); Channel Quality Indicator (CQI); Rank Indicator (RI); Reference Signal Received Power (RSRP); Reference Signal Received Quality (RSRQ); Signal-to-Interference plus Noise Ratio (SINR); or reference signal resource indicator ([0107]-[0111]).

As to claim 8, Yoo discloses the information type indicated by the indication information comprises the information obtained by encoding, with the encoder model, the original reported information; and wherein reporting to the network device based on the indication information comprises: encoding the reported information with the encoder model; and reporting the encoded information to the network device (Fig. 4 and [0115]-[0117]).

As to claim 9, Yoo discloses updating the encoder model and the decoder model to generate an updated encoder model and an updated decoder model (Fig. 4 and [0110]-[0111] and [0117]).

As to claim 10, Yoo discloses updating the encoder model and the decoder model comprises: acquiring update indication information sent by the network device, the update indication information being used for instructing the UE to adjust model parameters, or the update indication information comprising at least one of model information of a new encoder model or model information of a new decoder model, a type of the new encoder model and a type of the new decoder model being different from a type of an original encoder model and a type of an original decoder model (Fig.
4 and [0100]-[0103]); determining the new encoder model and the new decoder model based on the update indication information (Fig. 4 and [0100]-[0103]); and retraining the new encoder model and the new decoder model to obtain an updated encoder model and an updated decoder model (Fig. 4 and [0100]-[0103]); wherein determining the new encoder model and the new decoder model based on the update indication information comprises: obtaining, by the UE, the new encoder model and the new decoder model by adjusting model parameters of the original encoder model and the original decoder model in response to the update indication information being used for instructing the UE to adjust the model parameters; or generating, by the UE, the new encoder model and the new decoder model based on at least one of the model information of the new encoder model or the model information of the new decoder model in response to the update indication information comprising at least one of the model information of the new encoder model or the model information of the new decoder model, the type of the new encoder model and the type of the new decoder model being different from the type of the original encoder model and the type of the original decoder model (Fig. 4 and [0100]-[0103]). 
As to claim 13, Yoo discloses replacing the original encoder model directly with the updated encoder model; or determining differential model information between the model information of the updated encoder model and model information of an original encoder model; and optimizing the original encoder model based on the differential model information; or sending the model information of the updated decoder model to the network device; wherein the model information of the updated decoder model comprises: all model information of the updated decoder model; or differential model information between the model information of the updated decoder model and the model information of the original decoder model ([0100]-[0103]).

As to claim 20, Yoo discloses sending the at least one of the model information of the encoder model to be trained or the model information of the decoder model to be trained to the UE based on the capability information comprises: selecting at least one of the encoder model to be trained or the decoder model to be trained based on the capability information, the encoder model to be trained being a model supported by the UE, and the decoder model to be trained being a model supported by the network device (Fig. 4 and [0110]-[0111]); and sending at least one of the model information of the encoder model to be trained or the model information of the decoder model to be trained to the UE (Fig. 4 and [0110]-[0111]).

As to claim 24, Yoo discloses the information type indicated by the indication information comprises the information obtained by encoding, with the encoder model, the original reported information; and wherein the method further comprises: decoding information reported by the UE using the decoder model in response to receiving the information reported by the UE (Fig. 4 and [0110]-[0111], [0117]).
As to claim 25, Yoo discloses receiving model information of the updated decoder model sent by the UE; and updating the model based on the model information of the updated decoder model; wherein the method further comprises: sending update indication information to the UE; wherein the update indication information is used for instructing the UE to adjust model parameters; or the update indication information comprises at least one of model information of a new encoder model or model information of a new decoder model, and a type of the new encoder model and a type of the new decoder model are different from a type of an original encoder model and a type of an original decoder model (Fig. 4 and [0110]-[0111], [0117]).

As to claim 27, Yoo discloses the model information of the updated decoder model comprises: all model information of the updated decoder model; or differential model information between the model information of the updated decoder model and the model information of the original decoder model ([0100]-[0103]).

As to claim 28, Yoo discloses updating the model based on the model information of the updated decoder model comprises at least one of: generating an updated decoder model based on the model information of the updated decoder model; and replacing the original decoder model with the updated decoder model for model updating; or optimizing an original decoder model based on the model information of the updated decoder model for model updating ([0100]-[0103]).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Yoo in view of Lainema et al. (US Pub. No. 2023/0062752), hereafter, “Lainema.”

As to claim 11, Yoo discloses the parent claim but does not disclose monitoring a distortion of an original encoder model and a distortion of an original decoder model; and retraining the original encoder model and the original decoder model to obtain an updated encoder model and an updated decoder model in response to the distortions exceeding a first threshold, wherein the distortion of the updated encoder model and the distortion of the updated decoder model are both lower than a second threshold, and the second threshold is less than or equal to the first threshold.

However, Lainema discloses monitoring a distortion of an original encoder model and a distortion of an original decoder model ([0196]-[0199]); and retraining the original encoder model and the original decoder model to obtain an updated encoder model and an updated decoder model in response to the distortions exceeding a first threshold, wherein the distortion of the updated encoder model and the distortion of the updated decoder model are both lower than a second threshold, and the second threshold is less than or equal to the first threshold ([0196]-[0199]).
Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the application to combine the teachings of Yoo and Lainema in order to provide a known and reliable means of reducing distortions in encoders/decoders.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to THOMAS J DAILEY whose telephone number is (571)270-1246. The examiner can normally be reached 9:30am-6:00pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Umar Cheema, can be reached on 571-270-3037. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/THOMAS J DAILEY/
Primary Examiner, Art Unit 2458

Prosecution Timeline

Sep 10, 2024
Application Filed
Mar 12, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597054: METHOD AND SYSTEM OF FORWARDING CONTACT DATA
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12580953: METHOD AND SYSTEM FOR DETECTING ENCRYPTED FLOOD ATTACKS
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12556589: MEDIA RESOURCE OPTIMIZATION
Granted Feb 17, 2026 (2y 5m to grant)

Patent 12556605: Live Migration Of Clusters In Containerized Environments
Granted Feb 17, 2026 (2y 5m to grant)

Patent 12549399: PROGRESS STATUS AFTER INTERRUPTION OF ONLINE SERVICE
Granted Feb 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 81%
With Interview (+14.6%): 95%
Median Time to Grant: 3y 4m
PTA Risk: Low
Based on 859 resolved cases by this examiner. Grant probability derived from career allow rate.
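The "with interview" projection is consistent with simply adding the interview lift, in percentage points, to the unrounded career allow rate; a sketch under that assumption (the additive model is an inference from the displayed numbers, not a documented formula):

```python
# Inputs taken from the examiner statistics above.
granted, resolved = 694, 859
interview_lift = 14.6  # percentage-point lift from conducting an interview

base = granted / resolved * 100        # ~80.8%, displayed as 81%
with_interview = base + interview_lift # ~95.4%, displayed as 95%

print(f"Baseline grant probability: {base:.0f}%")       # 81%
print(f"With interview: {with_interview:.0f}%")         # 95%
```

Note that adding the lift to the rounded 81% would give 95.6% (which rounds to 96%), so the displayed 95% suggests the unrounded allow rate is used.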
