Prosecution Insights
Last updated: April 19, 2026
Application No. 18/926,947

INFORMATION TRANSMISSION METHOD AND APPARATUS

Non-Final Office Action: §102, §103, §112

Filed: Oct 25, 2024
Examiner: JOO, JOSHUA
Art Unit: 2445
Tech Center: 2400 (Computer Networks)
Assignee: Fujitsu Limited
OA Round: 1 (Non-Final)

Grant Probability: 78% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 1m
Allow Rate With Interview: 99%

Examiner Intelligence

Career Allow Rate: 78% (above average; 763 granted / 976 resolved; +20.2% vs TC avg)
Interview Lift: +23.4% (strong) among resolved cases with an interview
Typical Timeline: 3y 1m average prosecution; 25 applications currently pending
Career History: 1,001 total applications across all art units
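The headline figures above are simple ratios of the dashboard's raw counts. A minimal sketch of the arithmetic, assuming the "+20.2% vs TC avg" delta is a percentage-point difference (an assumption, since the dashboard does not define it):

```python
# Career allow rate from the dashboard's raw counts (763 granted / 976 resolved).
granted = 763
resolved = 976

allow_rate = granted / resolved  # fraction of resolved cases that were allowed
print(f"Career allow rate: {allow_rate:.1%}")  # ≈ 78.2%, displayed as 78%

# Assumption: "+20.2% vs TC avg" is a percentage-point gap, which would put
# the Tech Center 2400 average near 58%.
tc_avg = allow_rate - 0.202
print(f"Implied TC average: {tc_avg:.1%}")
```

This is only a consistency check on the displayed numbers, not the analytics vendor's actual methodology.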

Statute-Specific Performance

§101: 10.5% (-29.5% vs TC avg)
§103: 39.3% (-0.7% vs TC avg)
§102: 13.5% (-26.5% vs TC avg)
§112: 28.5% (-11.5% vs TC avg)
Tech Center averages are estimates. Based on career data from 976 resolved cases.

Office Action

Rejections: §102, §103, §112
Detailed Action

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 are pending in the application.

Specification

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on October 25, 2024 and September 4, 2025 are in compliance with the provisions of 37 CFR 1.97, and accordingly, the IDSs have been considered by the examiner.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are:

“receiver configured to receive a capability query request” and “a transmitter configured to feed back capability query response” in claim 1;
“transmitter configured to transmit capability query request” and “receiver configured to receive capability query response” in claim 19; and
“terminal equipment configured to receive a capability query request” and “network device configured to transmit the capability query request” in claim 20.

The specification describes the receiver as a “receiving unit” and the transmitter as a “transmitting unit” (see p. 3). See MPEP 2181. When the claim limitation does not use the term “means,” examiners should determine whether the presumption that 35 U.S.C. 112(f) does not apply is overcome. The presumption may be overcome if the claim limitation uses a generic placeholder (a term that is simply a substitute for the term “means”). The following is a list of non-structural generic placeholders that may invoke 35 U.S.C. 112(f): “mechanism for,” “module for,” “device for,” “unit for,” “component for,” “element for,” “member for,” “apparatus for,” “machine for,” or “system for.”

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 2-14 and 16-18 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Regarding claim 2, there is insufficient antecedent basis for “the AI/ML model.”

Regarding claim 3, the claim uses the term “consistent.” Applicant’s intended meaning of the term in view of the claim language is not clear, as there is more than one meaning of the term “consistent.” The claim recites in part, “whether there exist consistent AI/ML models.” It is not clear whether the language is intended as matching AI/ML models, compatible AI/ML models, or another intended meaning of the term. Claims 6, 12, and 14 also use the term “consistent” and are rejected under a similar rationale as claim 3.

Regarding claim 4, there is insufficient antecedent basis for “the AI/ML supported by the terminal equipment,” “the terminal equipment,” and “the AI/ML model transmitted by the network device.”

Regarding claim 5, there is insufficient antecedent basis for “the terminal equipment.”

Regarding claim 6, there is insufficient antecedent basis for “the terminal equipment.”

Regarding claim 7, there is insufficient antecedent basis for “the AI/ML model group,” “the model,” and “the AI/ML model.”

Regarding claim 9, there is insufficient antecedent basis for “the identification related to the AI/ML model” and “the AI/ML model.”

Regarding claim 10, there is insufficient antecedent basis for “the terminal equipment” and “the AI/ML model.”

Regarding claim 11, there is insufficient antecedent basis for “the terminal equipment,” “the AI/ML model,” “the AI/ML model transmitted by the network device,” and “the AI/ML model according to the indication information.”

Regarding claim 12, there is insufficient antecedent basis for “the terminal equipment” and “the AI encoder in the network device.”

Regarding claim 12, it is not clear which AI encoder and AI decoder “the AI encoder” and “the AI decoder” are referring to at the end of the claim, because the claim comprises multiple recitations of “AI encoder” and “AI decoder.” The claim recites “an AI encoder… in the terminal equipment,” “an AI decoder,” “the AI encoder in the network device,” and “the terminal equipment further has an AI decoder.”

Regarding claim 13, it is not clear which AI encoder and AI decoder “the AI encoder” and “the AI decoder” are referring to in the claim. See the rejection of claim 12 above.

Regarding claim 13, there is insufficient antecedent basis for “the terminal equipment.”

Regarding claim 13, there is insufficient antecedent basis for “the performance monitoring configured by the network device.” While claim 12 recites “performance monitoring and/or training,” the claim does not comprise any prior step of performance monitoring configured by the network device.

Regarding claim 13, there is insufficient antecedent basis for “the training configured by the network device.” The claim does not comprise any prior step of training configured by the network device.

Regarding claim 14, there is insufficient antecedent basis for “the terminal equipment,” “the AI encoder in the network device,” and “the AI decoder.”

Regarding claim 14, it is not clear which AI encoder “the AI encoder” is referring to, because the claim recites “an AI encoder for channel state information” and “the network device further has an AI encoder consistent with the AI encoder in the terminal equipment.”

Regarding claim 16, the claim uses the term “consistence.” Applicant’s intended meaning of the term in view of the claim language is not clear.
In this case, the claim recites in part, “a frequency domain density of the sounding reference signal is in consistence with a frequency domain density of a channel state information reference signal,” “number of resource blocks of the sounding reference signal is in consistence with the number of resource blocks of the channel state information reference signal,” and “the number of resource blocks of the sounding reference signal is in consistence with the number of resource blocks of the channel state information reference signal.” It is not clear what is meant by being “in consistence” with another element. It is not clear whether the language is intended as “matching” with another element and/or what degree of consistence is required for “consistence.”

Regarding claim 18, there is insufficient antecedent basis for “the channel state information reference signal.”

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-2, 4-6, 8-10, and 19-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Shen et al., US Patent Publication No. 2022/0342713 (“Shen”).

Regarding claim 1, Shen teaches an information transmission apparatus, comprising: a receiver configured to receive a capability query request of AI/ML transmitted by a network device (para. [0080] when the network device requests the terminal to report the AI/ML capability information, the terminal reports the AI/ML capability information); and a transmitter configured to feed back a capability query response or report to the network device according to the capability query request (para. [0085] terminal sends artificial intelligence (AI)/machine learning (ML) capability information to a network device).

Regarding claim 19, Shen teaches an information transmission apparatus, comprising: a transmitter configured to transmit a capability query request of AI/ML to a terminal equipment (para. [0080] when the network device requests the terminal to report the AI/ML capability information, the terminal reports the AI/ML capability information); and a receiver configured to receive a capability query response or report fed back by the terminal equipment according to the capability query request (para. [0085] terminal sends artificial intelligence (AI)/machine learning (ML) capability information to a network device).

Regarding claim 20, Shen teaches a communication system, comprising: a terminal equipment configured to receive a capability query request of AI/ML, and feed back a capability query response or report according to the capability query request (para. [0080] when the network device requests the terminal to report the AI/ML capability information, the terminal reports the AI/ML capability information. para. [0085] terminal sends artificial intelligence (AI)/machine learning (ML) capability information to a network device); and a network device configured to transmit the capability query request of AI/ML, and receive the capability query response or report (para. [0080] when the network device requests the terminal to report the AI/ML capability information. para. [0085] terminal sends artificial intelligence (AI)/machine learning (ML) capability information to a network device).
Regarding claim 2, Shen teaches the apparatus according to claim 1, wherein the capability query request comprises querying at least one of the following: an AI/ML capability; a certain signal processing function; an AI/ML model group identification; an AI/ML model identification; a version identification of the AI/ML model; an update capability of the AI/ML model; a performance monitoring capability or performance evaluation capability of the AI/ML model; a training capability of the AI/ML model; or a storage capability related to update of the AI/ML model (para. [0080] when the network device requests the terminal to report the AI/ML capability information); wherein the capability query response or report includes at least one of the following: whether an AI/ML capability is supported; whether a certain signal processing function is supported; whether a queried AI/ML model group identification is supported, or a supported AI/ML model group identification; whether a queried AI/ML model identification is supported, or a supported AI/ML model identification; whether a version identification of a queried AI/ML model is supported, or a version identification of a supported AI/ML model; whether an update capability of the AI/ML model is supported; whether a performance monitoring capability or performance evaluation capability of the AI/ML model is supported; whether a training capability of the AI/ML model is supported; or a storage capability related to update of the AI/ML model (para. [0079] AI/ML capability information indicates the resource information, performance index requirement of wireless transmission, a type of stored training data. AI/ML capability information may further include a serial number of a currently stored AI/ML model. para. [0085] terminal sends artificial intelligence (AI)/machine learning (ML) capability information to a network device).
Regarding claim 4, Shen teaches the apparatus according to claim 1, wherein the capability query response or report includes updated capability information and/or an identification of the AI/ML model supported by the terminal equipment, and the receiver further receives update information of the AI/ML model transmitted by the network device (para. [0079] AI/ML capability information may further include a serial number of a currently stored AI/ML model. para. [0080] terminal can periodically report the AI/ML capability information to the network device. para. [0083] network device can flexibly switch an AI/ML model run by the terminal. distribute a suitable AI/ML model to the terminal); wherein the update information of the AI/ML model includes a parameter or identification of the AI/ML model, and the terminal equipment selects a corresponding AI/ML model according to the parameter or identification, or downloads a corresponding AI/ML model from a core network device or the network device (para. [0089] when the terminal has a great available computing power, a larger AI/ML model can be run by the terminal. when the AI/ML model run by the terminal varies, a model run by the network device also varies. network device can select an AI/ML model suitable for the terminal according to the AI/ML task of the terminal).

Regarding claim 5, Shen teaches the apparatus according to claim 1, wherein the capability query response or report includes capability information on whether the terminal equipment supports training, and/or capability information on whether the terminal equipment supports performance evaluation indicated by a network (para. [0079] AI/ML capability information includes… a performance index requirement on wireless transmission of a network side by an AI/ML operation… a type of stored training data).
Regarding claim 6, Shen teaches the apparatus according to claim 1, wherein in a case where AI/ML model groups and/or signal processing functions supported by the terminal equipment and the network device are consistent, the receiver further receives an intra-group identification and/or a model identification of the AI/ML model group transmitted by the network device (para. [0088] AI/ML model distributed by the network device to the terminal may include one or more AI/ML models that match the AI/ML capability of the terminal. para. [0125] identity of the AI/ML model needed by the terminal to process the AI/ML service. para. [0147] network device distributes an AI/ML model to the terminal according to a list of existing AI/ML models. network device can select an AI/ML model suitable for the terminal).

Regarding claim 8, Shen teaches the apparatus according to claim 1, wherein the receiver receives a message for configuring or activating or enabling an AI/ML model transmitted by the network device, and uses a corresponding AI/ML model according to the message; and/or the receiver receives a message for de-configuring or deactivating or disabling an AI/ML model transmitted by the network device, and stops a corresponding AI/ML model according to the message (para. [0083] network device can flexibly switch an AI/ML model run by the terminal, distribute a suitable AI/ML model to the terminal, adjust the AI/ML training parameters. para. [0127] network device can indicate the terminal to delete an AI/ML model which is not optimal for the AI/ML capability of the terminal).
Regarding claim 9, Shen teaches the apparatus according to claim 1, wherein the identification related to the AI/ML model includes at least one of the following: a signal processing function identification, a model group identification, a model identification, a model category identification, a model layer number identification, a model version identification, or a model size or storage size identification (para. [0079] AI/ML capability information may further include a serial number of a currently stored AI/ML model. para. [0181]-[0182] information of the AI/ML model stored in the terminal for the AI/ML service includes any of the following information: a list of AI/ML models stored in the terminal, list of AI/ML models newly added to the terminal).

Regarding claim 10, Shen teaches the apparatus according to claim 1, wherein the network device and the terminal equipment have AI/ML models with identical identifications, and the AI/ML model of the network device and the AI/ML model of the terminal equipment have been jointly trained (para. [0089] distributes the AI/ML model. when the AI/ML model run by the terminal varies, a model run by the network device also varies. para. [0090] carries the AI/ML model distributed by the network device to the terminal. “federated learning”, the AI/ML task indication information carries the AI/ML training parameter arranged by the network device for the terminal. para. [0157] obtain a trained AI/ML model).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Shen in view of Kumar et al., US Patent Publication No. 2022/0150125 (“Kumar”), and Fan, US Patent Publication No. 2024/0236826 (“Fan”).

Regarding claim 3, Shen does not teach the apparatus according to claim 1, wherein the capability query request includes an AI/ML model group identification and/or a model identification and/or a version identification for a certain signal processing function supported by the network device, and the apparatus further comprising: processor circuitry configured to query whether there exist consistent AI/ML models in a terminal equipment according to the AI/ML model group identification and/or the model identification and/or the version identification supported by the network device, and include positive information of the model group identification and/or the model identification and/or the version identification in the capability query response or report in a case where there exist consistent AI/ML models.
Kumar discloses a capability query request that includes an AI/ML model group identification and/or a model identification and/or a version identification, and the apparatus further comprising: processor circuitry configured to query whether there exist consistent AI/ML models in a terminal equipment according to the AI/ML model group identification and/or the model identification and/or the version identification supported by the network device, and include positive information of the model group identification and/or the model identification and/or the version identification in the capability query response or report in a case where there exist consistent AI/ML models (para. [0064] request for the AI model from the example service 405 can include information such as an AI-NF inference with a named model. para. [0116] initiates a query of the model table 350 via the interface 510. query identifies one or more models from the table 350, cache 460, etc., for comparison to one or more requirements/criterion/constraints provided as part of the query. para. [0118] selected AI model (or an instance of the selected AI model) is then made available to the requestor. deploys an instance of the selected model to the requestor and/or makes the selected model available for execution).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Shen with Kumar’s disclosure. One of ordinary skill in the art would have been motivated to do so in order to have similarly enabled selection of a model that satisfies a requested model.

Fan discloses an AI model for a certain signal processing function (para. [0026] AI model is configured to implement signal modulation and demodulation. AI model is configured to implement encoding and decoding of a signal. para. [0063] first message may carry a service identifier corresponding to at least one type of model data).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Shen and Kumar with Fan’s disclosure of implementing an AI model for a signal processing function. One of ordinary skill in the art would have been motivated to do so because Shen discloses using the models to perform tasks, and it would have been beneficial to provide a model capable of performing additional tasks including encoding and decoding.

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Shen in view of Fan, US Patent Publication No. 2024/0236826 (“Fan”).

Regarding claim 7, Shen teaches the apparatus according to claim 1, wherein the receiver receives configuration information of the network device for processing, the configuration information including an identification of the AI/ML model group and/or the model, and performs processing by using the AI/ML model corresponding to the identification of the AI/ML model group and/or the model (para. [0092] terminal receives the AI/ML task configuration information sent by the network device. para. [0115]-[0123] AI/ML task configuration information includes: identity of an AI/ML task, identity of an AI/ML model, an AI/ML model).

Shen discloses receiving configuration information of the network device, but not configuration information for a certain signal processing function or performing the signal processing by using the AI/ML model. Fan discloses performing signal processing by using an AI/ML model (para. [0026] AI model is configured to implement signal modulation and demodulation. AI model is configured to implement encoding and decoding of a signal. para. [0063] first message may carry a service identifier corresponding to at least one type of model data).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Shen with Fan’s disclosure of performing signal processing by using an AI/ML model, such that the task of Shen further includes task(s) of encoding and decoding of a signal. One of ordinary skill in the art would have been motivated to do so because Shen discloses using the models to perform tasks, and it would have been beneficial to provide a model capable of performing additional tasks including the encoding and decoding.

Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Shen in view of Zhang, US Patent Publication No. 2023/0247416 (“Zhang”).

Regarding claim 11, Shen does not teach the apparatus according to claim 1, wherein in a case where the terminal equipment supports update of the AI/ML model and has an available memory, the receiver receives indication information for transmitting the AI/ML model transmitted by the network device, and receives the AI/ML model according to the indication information; wherein the received AI/ML model includes identification information related to the AI/ML model, an AI/ML model structure and parameter information, wherein the identification information related to the AI/ML model is transmitted via radio resource control signaling or an MAC CE, or is transmitted via a data channel, and the AI/ML model structure and the parameter information are transmitted via a data channel; wherein the indication information includes an AI/ML model identification and/or a version identification, and after receiving the AI/ML model, the transmitter transmits feedback information to the network device, the feedback information including the AI/ML model identification and/or the version identification.
Zhang teaches where a terminal equipment supports update of an AI/ML model and has an available memory, a receiver receives indication information for transmitting the AI/ML model transmitted by a network device, and receives the AI/ML model according to the indication information; wherein the received AI/ML model includes identification information related to the AI/ML model, an AI/ML model structure and parameter information, wherein the identification information related to the AI/ML model is transmitted via radio resource control signaling or an MAC CE, or is transmitted via a data channel, and the AI/ML model structure and the parameter information are transmitted via a data channel (para. [0015] transmit information on a machine learning algorithm and model to a UE. para. [0019] information for indicating that the information on the machine learning algorithm and model is transferred through at least one of a control channel and a data channel. para. [0021] information on at least one of a model structure and a model parameter); wherein the indication information includes an AI/ML model identification and/or a version identification (para. [0140] base station transmits signaling and data information to the UE, transmitting to the UE that a specific model structure and model parameter of the ML algorithm and model used by the UE. para. [0162] information to the base station. para. [0162]-[0168] ML algorithm and model. para. [0240] request to update the ML algorithm and model), and after receiving the AI/ML model, the transmitter transmits feedback information to the network device, the feedback information including the AI/ML model identification and/or the version identification (para. [0149] UE reports state information (e.g., instant state information) on the UE. para. [0154]-[0156] performance indicators of the ML algorithm and model being executed by the UE).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Shen with Zhang’s disclosure. One of ordinary skill in the art would have been motivated to do so for the similar benefits of providing and updating an ML algorithm and model for a certain task or function based on evaluation of key performance indicators.

Claims 12 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Shen in view of Wu et al., US Patent Publication No. 2024/0313838 (“Wu”).

Regarding claim 12, Shen does not teach the apparatus according to claim 1, wherein there exists an AI encoder for channel state information in the terminal equipment, and there exists an AI decoder with an identification and/or a version consistent with that/those of the AI encoder in the network device, and the terminal equipment further has an AI decoder consistent with the AI decoder in the network device, and the terminal equipment performs performance monitoring and/or training via the AI encoder and the AI decoder.

Wu discloses an AI encoder for channel state information in a terminal equipment, and an AI decoder with an identification and/or a version consistent with that/those of the AI encoder in the network device, and the terminal equipment further has an AI decoder consistent with the AI decoder in the network device, and the terminal equipment performs performance monitoring and/or training via the AI encoder and the AI decoder (fig. 13, para. [0096] machine learning techniques may be used by the wireless communications system 200 to support CSI compression schemes, which may include training an encoder (e.g., training an auto encoder, evaluating…, training a decoder). para. [0109] base station, UE. training an encoder, decoder. para. [0116] base station 105 may configure a UE 115 with an auto encoder… and an auto decoder (e.g., using an indication 255, to configure a CSI compression evaluation component 260 to perform evaluations…)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Shen with Wu’s disclosure. One of ordinary skill in the art would have been motivated to do so in order to have provided compression schemes and utilized machine learning to support the compression schemes.

Regarding claim 14, Shen does not teach the apparatus according to claim 1, wherein there exists an AI encoder for channel state information in the terminal equipment, and there exists an AI decoder with an identification and/or a version consistent with that/those of the AI encoder in the network device, and the network device further has an AI encoder consistent with the AI encoder in the terminal equipment, and the network device performs performance monitoring and/or training via the AI encoder and the AI decoder. Wu discloses an AI encoder for channel state information in a terminal equipment, and an AI decoder with an identification and/or a version consistent with that/those of the AI encoder in the network device, and the network device further has an AI encoder consistent with the AI encoder in the terminal equipment, and the network device performs performance monitoring and/or training via the AI encoder and the AI decoder (fig. 13, para. [0096] machine learning techniques may be used by the wireless communications system 200 to support CSI compression schemes, which may include training an encoder (e.g., training an auto encoder, evaluating…, training a decoder; para. [0109] base station, UE, training an encoder, decoder; para. [0116] base station 105 may configure a UE 115 with an auto encoder… and an auto decoder (e.g., using an indication 255, to configure a CSI compression evaluation component 260 to perform evaluations…). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Shen with Wu’s disclosure.
One of ordinary skill in the art would have been motivated to do so in order to have provided compression schemes and utilized machine learning to support the compression schemes.

Claims 15 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Shen in view of Huang et al. US Patent Publication No. 2024/0121048 (“Huang”).

Regarding claim 15, Shen does not teach the apparatus according to claim 1, wherein the receiver receives sounding reference signal configuration transmitted by the network device; and the transmitter transmits a sounding reference signal according to the sounding reference signal configuration. Huang teaches a receiver that receives sounding reference signal configuration transmitted by a network device, and a transmitter that transmits a sounding reference signal according to the sounding reference signal configuration (para. [0341] the UE sends the corresponding SRS according to the configuration information of the SRS resource sent by the base station; para. [0347] the UE receives the configuration information of M uplink SRS resources indicated by the base station, and uses N groups of SRS resources among the M uplink SRS resources to send the SRSs based on the configuration information). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Shen with Huang’s disclosure. One of ordinary skill in the art would have been motivated to do so in order to have configured the apparatus for resource allocation and estimating channel quality.

Regarding claim 18, Shen does not teach the apparatus according to claim 15, wherein the sounding reference signal is used to obtain downlink channel estimation based on the channel state information reference signal by using channel reciprocity and via uplink channel estimation based on the sounding reference signal.
Huang teaches the sounding reference signal used to obtain downlink channel estimation based on the channel state information reference signal by using channel reciprocity and via uplink channel estimation based on the sounding reference signal (para. [0340] the uplink SRS can also be used to obtain the Channel State Information (CSI); the SRS can be used to estimate the uplink channel information of each UE; by using the reciprocity of the channel, the base station can also obtain the downlink channel state through the SRS; para. [0341] the UE sends the corresponding SRS according to the configuration information of the SRS resource sent by the base station). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Shen with Huang’s disclosure. One of ordinary skill in the art would have been motivated to do so in order to have configured the apparatus for resource allocation and estimating channel quality.

Examiner’s Note

Claims 13, 16, and 17 have not been rejected on prior art. However, the claims are not allowable in view of the rejection(s) under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, set forth in this Office action.

Additional Prior Art

The following prior art made of record and not relied upon is considered pertinent to applicant’s disclosure.

Hong US Patent Publication No. 2023/0232211 (para. [0069] reporting an AI capability of a terminal through a request between a network device and the terminal; the terminal may report the AI capability of the terminal based on the UECapabilityInformation signaling)

Muhammad et al. US Patent Publication No. 2023/0351248 (para. [0041] an initial request for AI/ML capabilities of the user device may be requested or transmitted; para. [0045] a classification of the machine-learning capabilities of the user device may indicate an artificial intelligence model training capacity of the user device)

Wang et al. US Patent Publication No.
2025/0016547 (para. [0041] a master node may transmit a capability request message, e.g., request_ai_capability, to a slave node of the master node, which inquires whether AI capability for communication is supported in the slave node; the slave node may transmit a capability report message, e.g., report_ai_capability, to the master node in step 203; the capability report message at least reports whether the AI capability for communication is supported; para. [0043] the capability request message may also inquire one or more parameters associated with the AI capability)

Elshafi et al. US Patent Publication No. 2024/0171249 (para. [0054] enable the base station to estimate downlink CSI in cases where there is reciprocity between an uplink channel and a downlink channel; use an antenna switching SRS (e.g., an SRS transmitted using a resource of an antenna switching SRS resource set) to perform uplink channel estimation that can then be used as downlink CSI; para. [0075] the UE may receive an SRS configuration that indicates the antenna group structure to be configured at the UE; for example, in some aspects, the SRS configuration may generally indicate one or more SRS resource sets that include one or more time and/or frequency resources in which the UE is to transmit an SRS)

Shim et al. US Patent Publication No. 2023/0254196 (para. [0131] estimate the downlink channel state based on the estimated uplink channel state; uplink channel estimation may use the SRS received from the terminal; downlink channel estimation may use the CSI feedback information received from the terminal)

Sakhnini et al. US Patent Publication No. 2025/0096959 (para. [0044] receive an indication to update a configuration of a sounding reference signal (SRS) resource; transmit one or more SRSs based at least in part on the updated configuration of the SRS resource)

Conclusion

A shortened statutory period for reply to this Office action is set to expire THREE MONTHS from the mailing date of this action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Joshua Joo, whose telephone number is 571-272-3966. The examiner can normally be reached Monday-Friday, 7am-3pm EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Oscar Louie, can be reached at 571-270-1684. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JOSHUA JOO/
Primary Examiner, Art Unit 2445

Prosecution Timeline

Oct 25, 2024
Application Filed
Feb 04, 2026
Non-Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603875
CONNECTION ESTABLISHMENT USING SHARED CERTIFICATE IN GLOBAL SERVER LOAD BALANCING (GSLB) ENVIRONMENT
2y 5m to grant Granted Apr 14, 2026
Patent 12587590
SERVER APPARATUS, MANAGEMENT PROGRAM AND MANAGEMENT SYSTEM
2y 5m to grant Granted Mar 24, 2026
Patent 12580871
RESOURCE DEPLETION DETECTION AND NOTIFICATION IN AN ENTERPRISE FABRIC NETWORK
2y 5m to grant Granted Mar 17, 2026
Patent 12572647
CONNECTING ADVERSARIAL ATTACKS TO NEURAL NETWORK TOPOGRAPHY
2y 5m to grant Granted Mar 10, 2026
Patent 12572475
COMPACT REPRESENTATION OF TRANSITION SEQUENCES FOR SINGLE-STATE STORAGE
2y 5m to grant Granted Mar 10, 2026
Based on the examiner's 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
78%
Grant Probability
99%
With Interview (+23.4%)
3y 1m
Median Time to Grant
Low
PTA Risk
Based on 976 resolved cases by this examiner. Grant probability derived from career allow rate.
