DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claim 13 is objected to because of the following informalities:
As to claim 13, line 2, “a receiver” should be replaced with “the receiver”.
Appropriate correction is required.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-3, 5-8, and 12-15 are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. (hereinafter referred to as “Wang”, US 2010/0246720) in view of Eger et al. (hereinafter referred to as “Eger”, US 12,470,953).
As to claims 1 and 14, Wang teaches an apparatus comprising at least one processor, and at least one memory storing instructions that, when executed by the at least one processor (Fig. 2, processor 18, memory 19, paragraphs [0021]-[0022]), are configured to cause the apparatus at least to: provide, as an input, a channel estimation or a plurality of raw pilots (Fig. 3, estimate channel variation 140); obtain, as an output, a label indicating a channel model, wherein the channel model is representative of channel conditions (Fig. 3, classify channel variation based on estimated channel variation 150, paragraph [0036]); based on the label, determine a waveform associated with the channel conditions (Fig. 3, invoke adaptive data rate and transmitting scheduling schemes 160, paragraphs [0039]-[0041]); and perform the transmission to the receiver using the waveform (Fig. 2, transmitter 16, paragraphs [0040]-[0041]).
Wang does not expressly teach providing to a machine learning model, as an input, a channel estimation or a plurality of raw pilots; obtaining, from the machine learning model, as an output, a label indicating a channel model; and indicating the waveform to the receiver.
Eger further teaches determining a waveform to be transmitted based on channel conditions or channel estimation (i.e., modified waveform) by using a selected machine learning model, wherein the machine learning model uses channel estimation to determine the waveform to be transmitted (abstract, Fig. 4, steps 425 and 430, column 1, lines 38-67).
It is officially noted that inputting a channel estimation or a plurality of raw pilots to a machine learning model/algorithm to obtain, as an output, a label indicating the channel model that is representative of channel conditions would have been obvious in the apparatus and/or method of Wang, because a machine learning model/algorithm (such as a CNN, RNN, or ANN) can be employed, in place of the (general purpose) computer taught by Wang, to analyze information such as a channel estimation and to generate an output/result such as a label indicating a channel model.
It would have been obvious to one of ordinary skill in the art to provide to a machine learning model, as an input, a channel estimation or a plurality of raw pilots; obtain, from the machine learning model, as an output, a label indicating a channel model in order to increase efficiency and accuracy by automation of complex tasks, and by adapting and improving as the machine learning model is exposed to more data and real-world feedback. The iterative process ensures that model(s) remain relevant and become more accurate over time in dynamic environments such as the one described by Wang.
Eger further teaches indicating the waveform to the receiver (column 1, lines 38-67).
It would have been obvious to one of ordinary skill in the art to indicate the waveform to the receiver to aid in reconstructing (e.g., restoring or recovering) the downlink signal.
As to claim 2, Wang further teaches the channel estimation is based on one or more signals that comprise one or more of the following: at least one reference signal, at least one data signal, or at least one reference signal and at least one data signal (paragraph [0024]).
As to claim 3, Wang further teaches the channel conditions comprise at least one of a channel type and a key performance indicator (i.e., channel type, paragraph [0036]).
As to claim 5, Wang does not expressly teach the input to the machine learning model further comprises at least one parameter defining a requirement for performance of transmission, in particular, one or more of the following: peak to average power ratio, excess bandwidth, or adjacent channel leakage ratio.
Eger further teaches the input to the machine learning model further comprises at least one parameter defining a requirement for performance of transmission, in particular, one or more of the following: peak to average power ratio, excess bandwidth, or adjacent channel leakage ratio (abstract, column 18, lines 11-25).
It would have been obvious to one of ordinary skill in the art that the input to the machine learning model further comprises at least one parameter defining a requirement for performance of transmission, in particular, one or more of the following: peak to average power ratio, excess bandwidth, or adjacent channel leakage ratio in order to improve signal transmissions by reducing PAPR.
As to claim 6, Wang further teaches a set of channel models, wherein the channel model indicated by the label is one of the channel models (paragraph [0036]).
Wang does not expressly teach the machine learning model is trained using a set of channel models and the channel model indicated by the label is one of the channel models.
Eger further teaches determining a waveform to be transmitted based on channel conditions or channel estimation (i.e., modified waveform) by using a selected machine learning model, wherein the machine learning model uses channel estimation to determine the waveform to be transmitted (abstract, Fig. 4, steps 425 and 430, column 1, lines 38-67).
It is officially noted that training the machine learning model using a set of channel models, wherein the channel model indicated by the label is one of the channel models, would have been obvious in the apparatus and/or method of Wang, because a machine learning model/algorithm (such as a CNN, RNN, or ANN) can be employed, in place of the (general purpose) computer taught by Wang, to analyze information such as a channel estimation and to generate an output/result such as a label indicating a channel model.
It would have been obvious to one of ordinary skill in the art that the machine learning model is trained using a set of channel models and that the channel model indicated by the label is one of the channel models, in order to increase efficiency and accuracy by automation of complex tasks, and by adapting and improving as the machine learning model is exposed to more data and real-world feedback, such as the set of channel models. The iterative process ensures that the model(s) remain relevant and become more accurate over time in dynamic environments such as the one described by Wang.
As to claim 7, Wang further teaches the set of channel models is comprised in a dataset and the dataset defines a waveform associated with an individual channel model comprised in the set of channel models, and wherein the individual channel model represents its associated channel conditions. (Wang at least teaches a dataset, i.e., the stationary, varied, and highly varied channel models (flowchart shown in Fig. 3, classify channel variation 150, paragraph [0036]). Wang also teaches a correspondence between channel models and selection of optimum transmission schemes (flowchart shown in Fig. 3, step 160, paragraph [0039]), which would mean that there exists a table/dataset.)
As to claim 8, Wang further teaches that the dataset (i.e., the channel variation type and/or its corresponding optimum transmission schemes) is generated onsite (i.e., at the base station; Figs. 2-3, paragraphs [0036] and [0039]).
As to claim 12, Wang further teaches that the apparatus is comprised in an access node (Figs. 1-2, base station 10).
As to claim 13, Wang further teaches a receiver for the transmission is comprised in a user equipment (Figs. 1-2, second device/mobile station 20).
As to claim 15, Wang teaches a non-transitory computer-readable medium storing instructions that, when executed by the at least one processor (Fig. 2, processor 18, memory 19, paragraphs [0021]-[0022]), cause an apparatus including the processor to: provide, as an input, a channel estimation or a plurality of raw pilots (Fig. 3, estimate channel variation 140); obtain, as an output, a label indicating a channel model, wherein the channel model is representative of channel conditions (Fig. 3, classify channel variation based on estimated channel variation 150, paragraph [0036]); based on the label, determine a waveform associated with the channel conditions (Fig. 3, invoke adaptive data rate and transmitting scheduling schemes 160, paragraphs [0039]-[0041]); and perform the transmission to the receiver using the waveform (Fig. 2, transmitter 16, paragraphs [0040]-[0041]).
Wang does not expressly teach providing to a machine learning model, as an input, a channel estimation or a plurality of raw pilots; obtaining, from the machine learning model, as an output, a label indicating a channel model; and indicating the waveform to the receiver.
Eger further teaches determining a waveform to be transmitted based on channel conditions or channel estimation (i.e., modified waveform) by using a selected machine learning model, wherein the machine learning model uses channel estimation to determine the waveform to be transmitted (abstract, Fig. 4, steps 425 and 430, column 1, lines 38-67).
It is officially noted that inputting a channel estimation or a plurality of raw pilots to a machine learning model/algorithm to obtain, as an output, a label indicating the channel model that is representative of channel conditions would have been obvious in the apparatus and/or method of Wang, because a machine learning model/algorithm (such as a CNN, RNN, or ANN) can be employed, in place of the (general purpose) computer taught by Wang, to analyze information such as a channel estimation and to generate an output/result such as a label indicating a channel model.
It would have been obvious to one of ordinary skill in the art to provide to a machine learning model, as an input, a channel estimation or a plurality of raw pilots; obtain, from the machine learning model, as an output, a label indicating a channel model in order to increase efficiency and accuracy by automation of complex tasks, and by adapting and improving as the machine learning model is exposed to more data and real-world feedback. The iterative process ensures that model(s) remain relevant and become more accurate over time in dynamic environments such as the one described by Wang.
Eger further teaches indicating the waveform to the receiver (column 1, lines 38-67).
It would have been obvious to one of ordinary skill in the art to indicate the waveform to the receiver to aid in reconstructing (e.g., restoring or recovering) the downlink signal.
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Wang in view of Eger, and further in view of Ibrahim et al. (hereinafter referred to as “Ibrahim”, US 2024/0305506).
As to claim 4, Wang and Eger do not expressly teach the waveform comprises modulation constellation shape and pulse shape, wherein the pulse shape is for transmit and receive filters.
Ibrahim further teaches an OFDM communication system comprising: transmitting a waveform, wherein the waveform comprises modulation constellation shape and pulse shape, wherein the pulse shape is for transmit and receive filters (Fig. 1, bit-to-symbol mapping, Tx filter, matched filter, paragraphs [0066] and [0069]).
It would have been obvious to one of ordinary skill in the art that the waveform comprises modulation constellation shape and pulse shape, wherein the pulse shape is for transmit and receive filters, in order to reduce interference and achieve spectral efficiency in OFDM-based communication systems.
Claims 9 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Wang in view of Eger, and further in view of Trevo et al. (hereinafter referred to as “Trevo”, US 12,407,550).
As to claim 9, Wang further teaches the set of channel models is comprised in a dataset and the dataset defines a waveform associated with an individual channel model comprised in the set of channel models, and wherein the individual channel model represents its associated channel conditions. (Wang at least teaches a dataset, i.e., the stationary, varied, and highly varied channel models (flowchart shown in Fig. 3, classify channel variation 150, paragraph [0036]). Wang also teaches a correspondence between channel models and selection of optimum transmission schemes (flowchart shown in Fig. 3, step 160, paragraph [0039]), which would mean that there exists a table/dataset.)
Wang and Eger do not expressly teach the waveform associated with the channel conditions is identified by an index, and the index is associated with an extended modulation and coding scheme index table, or with a separate modulation and coding scheme index table dedicated for associating the waveform and the channel conditions obtained as the output from the machine learning model, and wherein indicating the waveform to the user equipment comprises indicating the index to the receiver.
Trevo further teaches a waveform associated with channel conditions is identified by an index, and the index is associated with an extended modulation and coding scheme index table, or with a separate modulation and coding scheme index table dedicated for associating the waveform and the channel conditions obtained as the output from the machine learning model, and wherein indicating the waveform to the user equipment comprises indicating the index to the receiver (Table 1, column 6, lines 32-67, and column 7, lines 1-8; Table 2, column 9, lines 27-67, and column 10; Table 3, column 11; Fig. 2, element 210).
It would have been obvious to one of ordinary skill in the art that the waveform associated with the channel conditions is identified by an index, and the index is associated with an extended modulation and coding scheme index table, or with a separate modulation and coding scheme index table dedicated for associating the waveform and the channel conditions obtained as the output from the machine learning model, and wherein indicating the waveform to the user equipment comprises indicating the index to the receiver in order to improve signal transmissions by determining the optimum transmission scheme and notifying mobile station(s) about said optimum transmission scheme(s).
As to claim 10, Wang and Eger do not expressly teach that the index is indicated to the receiver using one or more additional entries in a modulation and coding scheme set used for link adaptation.
Trevo further teaches that the index is indicated to the receiver using one or more additional entries in a modulation and coding scheme set used for link adaptation (Fig. 2, element 210; Table 1, column 6, lines 32-67, and column 7, lines 1-8; Table 2, column 9, lines 27-67, and column 10; Table 3, column 11).
It would have been obvious to one of ordinary skill in the art that the index is indicated to the receiver using one or more additional entries in a modulation and coding scheme set used for link adaptation in order to improve signal transmissions by determining the optimum transmission scheme and notifying mobile station(s) about said optimum transmission scheme(s).
Allowable Subject Matter
Claim 11 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
O’Shea et al., US 11,334,807, abstract, Figs. 1-3
Sakhnini et al., US 2022/0416916, abstract, Figs. 1-9
Hu et al., US 2023/0388158, Figs. 1-10
Any inquiry concerning this communication or earlier communications from the examiner should be directed to FRESHTEH N AGHDAM whose telephone number is (571)272-6037. The examiner can normally be reached Monday-Friday 10:30-7:00 ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chieh M Fan can be reached at 571-272-3042. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/FRESHTEH N AGHDAM/Primary Examiner, Art Unit 2632
2/16/2026