DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 2, 6-8, 10-12, 16-18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over CHOI (Pub. No.: US 2022/0237890 A1) in view of OREKONDY (Pub. No.: US 2023/0155704 A1).
With respect to claim 1:
CHOI discloses a method for processing radio frequency (RF) signals, the method comprising obtaining, from the one or more RF signals, a plurality of unlabeled data samples (para. 0071 and fig. 2); generating an input tensor representation of the plurality of unlabeled data samples (para. 0138); pretraining a first machine learning network using the input tensor representation to obtain one or more embeddings (fig. 2, item 221 is the first machine learning network); and training a second machine learning network using the one or more embeddings, wherein the second machine learning network is configured to perform one or more signal processing tasks (fig. 2, item 231 and para. 0073).
CHOI does not explicitly disclose receiving one or more RF signals from one or more antenna channels.
OREKONDY discloses receiving one or more RF signals from one or more antenna channels (fig. 2, item 210; para. 0008-0009).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of OREKONDY into the teaching of CHOI in order to improve wireless channel modeling.
With respect to claims 2, 12:
CHOI discloses the method of claim 1, wherein pretraining the first machine learning network using the input tensor representation comprises causing the first machine learning network to perform at least one of: tensor reconstruction, channel in-painting, time-channel ordering, de-noising, Simple Framework for Contrastive Learning of Visual Representations (SimCLR), contrastive predictive coding, Barlow twins, or array covariance matrix estimation (para. 0136 discloses SimCLR).
With respect to claims 6, 16:
CHOI discloses the method of claim 1, wherein the latent representation has less dimensionality than the input tensor representation (para. 0073-0074).
With respect to claims 7, 17:
CHOI discloses the method of claim 1, wherein the first machine learning network is pretrained using self-supervised learning (para. 0061).
With respect to claims 8, 18:
OREKONDY discloses the method of claim 1, wherein the one or more signal processing tasks comprise at least one of: beamforming weight detection, bandwidth regression, blind channel detection, signal detection from noise, joint signal detection, interference detection, signal classification, direction-of-arrival estimation, or channel estimation (para. 0036 discloses interference detection and signal detection from noise).
With respect to claims 10, 20:
CHOI discloses the method of claim 1, wherein the input tensor representation comprises at least one of a first dimension representing grouping of the plurality of unlabeled data samples, a second dimension representing the one or more antenna channels, a third dimension representing sampling times, or a fourth dimension representing one or more quadrature channels (para. 0073, 0074).
With respect to claim 11:
CHOI discloses one or more processors (para. 0025 discloses a processor) configured to perform operations comprising: obtaining, from the one or more RF signals, a plurality of unlabeled data samples (para. 0071 and fig. 2); generating an input tensor representation of the plurality of data samples (para. 0138); pretraining a first machine learning network using the input tensor representation to obtain one or more embeddings (fig. 2, item 221 is the first machine learning network); and training a second machine learning network using the one or more embeddings, wherein the second machine learning network is configured to perform one or more signal processing tasks (fig. 2, item 231 and para. 0073).
CHOI does not explicitly disclose an antenna array comprising a plurality of antenna elements, the antenna array configured to receive one or more RF signals from one or more communication channels corresponding to the plurality of antenna elements.
OREKONDY discloses an antenna array comprising a plurality of antenna elements, the antenna array configured to receive one or more RF signals from one or more communication channels corresponding to the plurality of antenna elements (para. 0034 discloses that the receiver and transmitter may have multiple antennas, thereby enabling transmission and reception).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of OREKONDY into the teaching of CHOI in order to improve wireless channel modeling.
Allowable Subject Matter
Claims 3, 4, 5, 9, 13, 14, 15, and 19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AJIBOLA A AKINYEMI whose telephone number is (571)270-1846. The examiner can normally be reached Monday-Friday 8:00am-5:00pm, EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, YUWEN PAN, can be reached at (571) 272-7855. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AJIBOLA A AKINYEMI/Primary Examiner, Art Unit 2649