DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This Office Action is in response to the initial filing on 02/08/2024.
Claims 1-30 are currently pending and have been considered below.
Drawings
The drawings were received on 02/08/2024. These drawings have been reviewed and are accepted by the Examiner.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 02/08/2024 and 06/18/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-30 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Seo et al. (US 2021/0119681) ("Seo"), submitted as prior art by applicant.
Regarding claim 1, Seo discloses an apparatus for wireless communication at a first wireless device (Figure 1, 30), comprising:
a processor (Figure 2, 150); memory (Figure 12, 200) coupled with the processor; and instructions stored in the memory and executable by the processor to cause the apparatus to:
receive, at the first wireless device from a network node, a machine learning model for use by the first wireless device to detect remote interference from a base station ([0023] signal interference estimation is performed using machine learning; [0027]… and determines interference parameters corresponding to the interference signal based on an output vector of the at least one machine learning model; [0116] The trained machine learning model executed by the AI accelerator 250 … may be updated based on data provided from the outside of the apparatus 200 while the apparatus 200 is used), wherein the first wireless device is associated with a first cell and the base station is associated with a second cell different from the first cell ([0024] communications at a terminal may be interrupted by an interference signal (e.g., from a neighboring signal));
input, by the first wireless device, one or more parameters into the machine learning model ([0023] signal interference estimation is performed using machine learning); and detect, by the first wireless device, whether the remote interference from the base station is present based at least in part on an output of the machine learning model, the output based at least in part on the one or more parameters input into the machine learning model ([0023] signal interference estimation is performed using machine learning; [0027] … and determines interference parameters corresponding to the interference signal based on an output vector of the at least one machine learning model; [0037] the UE 30 may estimate the interference parameters based on machine learning. Therefore, an exhaustive search for estimating the interference parameters may be omitted, and, in spite of the increase in the interference parameters, interference may be efficiently estimated; [0072] the at least one first processor 150 may provide the input vector IN generated based on at least one of the received signal vector y.sup.(k) and the serving channel matrix h.sub.S.sup.(k) and the interference channel matrix H.sub.I.sup.(k) to the second processor 170 and may determine the interference rank r.sub.I, the interference TPR ρ.sub.I, and the interference precoding matrix P.sub.I based on the output vector OUT provided by the second processor 170).
Regarding claim 2, Seo discloses the apparatus of claim 1, wherein the instructions are further executable by the processor to cause the apparatus to: transmit, to the network node and after detecting whether the remote interference from the base station is present, one or more indications that indicate the one or more parameters input into the machine learning model, the output of the machine learning model ([0023] signal interference estimation is performed using machine learning; [0027] … and determines interference parameters corresponding to the interference signal based on an output vector of the at least one machine learning model; [0037] the UE 30 may estimate the interference parameters based on machine learning. Therefore, an exhaustive search for estimating the interference parameters may be omitted, and, in spite of the increase in the interference parameters, interference may be efficiently estimated), or any combination thereof.
Regarding claim 3, Seo discloses the apparatus of claim 2, wherein the instructions are further executable by the processor to cause the apparatus to: receive, after detecting whether the remote interference from the base station is present, a reference signal from the base station based at least in part on transmitting the one or more indications ([0035] In some embodiments, the UE 30 may report information on interference, for example, the estimated interference parameters to the serving BS 12 and the serving BS 12 may increase communication performance based on the estimated interference. For example, the serving BS 12 may control contention and scheduling based on the information provided by the UE 30); and determine whether the output of the machine learning model is accurate based at least in part on the reference signal ([0035] In some embodiments, the UE 30 may report information on interference, for example, the estimated interference parameters to the serving BS 12 and the serving BS 12 may increase communication performance based on the estimated interference. For example, the serving BS 12 may control contention and scheduling based on the information provided by the UE 30. In some embodiments, the serving BS 12 may reduce a rank of the MIMO for the UE 30 in a state in which the effect of interference is high like the UE 30 positioned at a cell edge illustrated in FIG. 1.).
Regarding claim 4, Seo discloses the apparatus of claim 3, wherein the instructions are further executable by the processor to cause the apparatus to: transmit, to the network node after determining whether the output of the machine learning model is accurate, an indication of whether the output of the machine learning model is accurate ([0034] the UE 30 may be used to correctly and efficiently estimate the interference parameters that fluctuate rapidly from the received signal. [0035] In some embodiments, the UE 30 may report information on interference, for example, the estimated interference parameters to the serving BS 12 and the serving BS 12 may increase communication performance based on the estimated interference. For example, the serving BS 12 may control contention and scheduling based on the information provided by the UE 30. In some embodiments, the serving BS 12 may reduce a rank of the MIMO for the UE 30 in a state in which the effect of interference is high like the UE 30 positioned at a cell edge illustrated in FIG. 1.).
Regarding claim 5, Seo discloses the apparatus of claim 1, wherein the instructions are further executable by the processor to cause the apparatus to: receive, from the network node, an updated version of the machine learning model, the updated version of the machine learning model based at least in part on the one or more parameters input into the machine learning model, the output of the machine learning model, one or more second parameters input into respective machine learning models implemented at one or more second wireless devices, one or more respective second outputs of the respective machine learning models obtained by the one or more second wireless devices ([0116] For example, the at least one core 210 and/or the hardware accelerator 270 may generate the input vector IN and may provide the generated input vector IN to the AI accelerator 250. The AI accelerator 250 may provide the output vector OUT corresponding to the input vector IN to the at least one core 210 and/or the hardware accelerator 270 by executing the at least one machine learning model trained by the plurality of sample input vectors and the plurality of sample interference parameters. The trained machine learning model executed by the AI accelerator 250 may be implemented during the manufacturing of the apparatus 200 and may be updated based on data provided from the outside of the apparatus 200 while the apparatus 200 is used), or any combination thereof.
Regarding claim 6, Seo discloses the apparatus of claim 1, wherein the instructions are further executable by the processor to cause the apparatus to: select a time resource, a frequency resource, or both for an uplink transmission based at least in part on detecting whether the remote interference from the base station is present ([0103] In the time domain and/or the frequency domain, resource elements RE close to each other may have common interference parameters. Hereinafter, as described later with reference to FIGS. 10 and 11, the interference parameters may be estimated based on a series of resource elements RE close to each other in the time domain and/or the frequency domain).
Regarding claim 7, Seo discloses the apparatus of claim 1, wherein the one or more parameters comprise an energy waveform parameter for a signal received at the first wireless device over a duration, a date, a time, an uplink reception rate in one or more uplink symbols, a location of the first wireless device, a weather condition, a frequency resource or a time resource corresponding to a failed uplink transmission ([0103] In the time domain and/or the frequency domain, resource elements RE close to each other may have common interference parameters. Hereinafter, as described later with reference to FIGS. 10 and 11, the interference parameters may be estimated based on a series of resource elements RE close to each other in the time domain and/or the frequency domain), or any combination thereof.
Regarding claim 8, Seo discloses the apparatus of claim 7, wherein: the one or more parameters comprises the energy waveform parameter for the signal ([0047] The at least one machine learning model 175 may be in a state trained by a plurality of sample input vectors and a plurality of sample interference parameters (e.g., corresponding to a rank, transmission power, and/or a precoding matrix of an interfering signal)); and the duration comprises one or more symbols between a first time period for downlink signaling within the first cell and a next time period for downlink signaling within the first cell ([0101] A minimum transmission unit in the time domain may be an OFDM symbol, N.sub.symb OFDM symbols may form one slot, and two slots may form one sub-frame).
Regarding claim 9, Seo discloses the apparatus of claim 7, wherein the one or more parameters comprises the energy waveform parameter for the signal, the energy waveform parameter comprising a slope of received power for the signal, an initial received power for the signal, or both ([0047] The second processor 170 may execute at least one machine learning model 175. The at least one machine learning model 175 may be in a state trained by a plurality of sample input vectors and a plurality of sample interference parameters (e.g., corresponding to a rank, transmission power, and/or a precoding matrix of an interfering signal)).
Regarding claim 10, Seo discloses the apparatus of claim 1, wherein the output of the machine learning model comprises an indication of whether the remote interference from the base station is present, one or more identifiers associated with one or more base stations causing remote interference, a distance between the first wireless device and the base station, a direction of the remote interference from the base station, a quantity of the one or more base stations causing remote interference ([0109] the at least one first processor 150 may identify a combination of the values of the interference parameters based on ratings included in the one output vector OUT′ and may determine the interference parameters in accordance with the identified combination), or any combination thereof.
Regarding claim 11, Seo discloses the apparatus of claim 1, wherein the first wireless device comprises a base station that provides service within the first cell or a user equipment (UE) communicating within the first cell (generating the serving channel matrix based on a reference signal provided by a base station providing the serving signal, claim 16; [0011] FIG. 1 is a view illustrating a wireless communication system including a user equipment (UE) and a base station).
Claim 12 contains subject matter similar to claim 1, and thus, is rejected under similar rationale. (Seo, Figure 12, 200, apparatus).
Claim 13 contains subject matter similar to claim 3, and thus, is rejected under similar rationale.
Claim 14 contains subject matter similar to claim 5, and thus, is rejected under similar rationale.
Claim 15 contains subject matter similar to claim 5, and thus, is rejected under similar rationale.
Claim 16 contains subject matter similar to claim 5, and thus, is rejected under similar rationale.
Regarding claim 17, Seo discloses the apparatus of claim 15, wherein a first portion of the machine learning model is for a set of wireless devices that comprises the first wireless device, the one or more second wireless devices, and one or more additional wireless devices and a second portion of the machine learning model is for a subset of the set of wireless devices, the subset comprising the first wireless device and the one or more second wireless devices ([0061] An RBM is a generative stochastic artificial neural network that can learn a probability distribution over its set of inputs. Specifically, an RBM is a Boltzmann machine with the restriction that neurons must form a bipartite graph (i.e., a pair of nodes from each of the two groups of units that have a symmetric connection between them); and there are no connections between nodes within a group. By contrast, “unrestricted” Boltzmann machines may have connections between hidden units; [0109] the at least one first processor 150 may identify a combination of the values of the interference parameters based on ratings included in the one output vector OUT′ and may determine the interference parameters in accordance with the identified combination).
Regarding claim 18, Seo discloses the apparatus of claim 17, wherein the output of the machine learning model obtained by the first wireless device and the one or more respective second outputs of the respective machine learning models obtained by the one or more second wireless devices are each associated with the first portion of the machine learning model and each comprise an identifier associated with the base station ([0109] the at least one first processor 150 may identify a combination of the values of the interference parameters based on ratings included in the one output vector OUT′ and may determine the interference parameters in accordance with the identified combination).
Regarding claim 19, Seo discloses the apparatus of claim 17, wherein the instructions are further executable by the processor to cause the apparatus to:
receive, from the one or more additional wireless devices, signaling that indicates one or more third parameters input into respective machine learning models implemented at the one or more additional wireless devices, one or more respective third outputs of the respective machine learning models obtained by the one or more additional wireless devices ([0116] For example, the at least one core 210 and/or the hardware accelerator 270 may generate the input vector IN and may provide the generated input vector IN to the AI accelerator 250. The AI accelerator 250 may provide the output vector OUT corresponding to the input vector IN to the at least one core 210 and/or the hardware accelerator 270 by executing the at least one machine learning model trained by the plurality of sample input vectors and the plurality of sample interference parameters. The trained machine learning model executed by the AI accelerator 250 may be implemented during the manufacturing of the apparatus 200 and may be updated based on data provided from the outside of the apparatus 200 while the apparatus 200 is used), or any combination thereof, wherein, to determine the updated version of the machine learning model, the instructions are executable by the processor to cause the apparatus to:
determine an updated version of the first portion of the machine learning model based at least in part on the one or more third parameters input into the respective machine learning models implemented at the one or more additional wireless devices, the one or more respective third outputs of the respective machine learning models obtained by the one or more additional wireless devices ([0116] The AI accelerator 250 may provide the output vector OUT corresponding to the input vector IN to the at least one core 210 and/or the hardware accelerator 270 by executing the at least one machine learning model trained by the plurality of sample input vectors and the plurality of sample interference parameters. The trained machine learning model executed by the AI accelerator 250 may be implemented during the manufacturing of the apparatus 200 and may be updated based on data provided from the outside of the apparatus 200 while the apparatus 200 is used), or any combination thereof; and determine an updated version of the second portion of the machine learning model independent of the one or more third parameters input into the respective machine learning models implemented at the one or more additional wireless devices, the one or more respective third outputs of the respective machine learning models obtained by the one or more additional wireless devices ([0116] The apparatus 200 may perform the method of estimating interference according to an exemplary embodiment of the inventive concept and may be referred to as an apparatus for estimating interference. For example, the at least one core 210 and/or the hardware accelerator 270 may perform operations performed by the at least one first processor 150 of FIG. 2 and the AI accelerator 250 may perform operations of the second processor 170 of executing the at least one machine learning model 175.
For example, the at least one core 210 and/or the hardware accelerator 270 may generate the input vector IN and may provide the generated input vector IN to the AI accelerator 250. The AI accelerator 250 may provide the output vector OUT corresponding to the input vector IN to the at least one core 210 and/or the hardware accelerator 270 by executing the at least one machine learning model trained by the plurality of sample input vectors and the plurality of sample interference parameters), or any combination thereof; transmit the updated version of the first portion of the machine learning model to each wireless device of the set of wireless devices; and transmit the updated version of the second portion of the machine learning model to each wireless device of the subset of the set of wireless devices ([0116] the at least one core 210 and/or the hardware accelerator 270 may generate the input vector IN and may provide the generated input vector IN to the AI accelerator 250. The AI accelerator 250 may provide the output vector OUT corresponding to the input vector IN to the at least one core 210 and/or the hardware accelerator 270 by executing the at least one machine learning model trained by the plurality of sample input vectors and the plurality of sample interference parameters. The trained machine learning model executed by the AI accelerator 250 may be implemented during the manufacturing of the apparatus 200 and may be updated based on data provided from the outside of the apparatus 200 while the apparatus 200 is used).
Claim 20 contains subject matter similar to claim 7, and thus, is rejected under similar rationale.
Claim 21 contains subject matter similar to claim 10, and thus, is rejected under similar rationale.
Claim 22 contains subject matter similar to claim 1, and thus, is rejected under similar rationale. (Seo, Abstract: “disclosure relates to an interference estimation method and apparatus”).
Claim 23 contains subject matter similar to claim 2, and thus, is rejected under similar rationale.
Claim 24 contains subject matter similar to claim 13, and thus, is rejected under similar rationale.
Claim 25 contains subject matter similar to claim 5, and thus, is rejected under similar rationale.
Claim 26 contains subject matter similar to claim 6, and thus, is rejected under similar rationale.
Claim 27 contains subject matter similar to claim 1, and thus, is rejected under similar rationale. (Seo, Abstract: “disclosure relates to an interference estimation method and apparatus”).
Claim 28 contains subject matter similar to claim 13, and thus, is rejected under similar rationale.
Claim 29 contains subject matter similar to claim 14, and thus, is rejected under similar rationale.
Claim 30 contains subject matter similar to claim 15, and thus, is rejected under similar rationale.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JULIO R PEREZ whose telephone number is (571)272-7846. The examiner can normally be reached 10 AM - 6 PM EST, M-F.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kathy Wang-Hurst, can be reached at 571-270-5371. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JULIO R PEREZ/Primary Examiner, Art Unit 2644