DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claim(s) 1-3, 10-11, 20-21, 32-34, 40-41, and 49-50 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Zhu et al. (WO 2023/206380 A1, hereinafter “Zhu”).
Regarding claims 1, 32, 40, and 49, Zhu discloses a user equipment (UE) (see Figures 5, 6, and 8, UE 104) comprising: one or more memories storing processor-executable code; and one or more processors coupled with the one or more memories and individually or collectively operable to execute the code to cause the UE to (see Figure 23): receive monitoring input data from an artificial intelligence (AI) or machine learning (ML) service (see Figure 17, step 1704, para. 0151, training data request) that is configured to communicate with the UE and a network entity of a radio access network (RAN) (see Figure 5, para. 0107-0109, both vendors interact to provide an AI service to the network); transmit a monitoring report to the AI or ML service based at least in part on the monitoring input data provided by the AI or ML service (see Figure 17, step 1722, para. 0154, UE 104 sends a training data report to UE vendor 502) and a reporting configuration of the UE, the monitoring report comprising feedback information associated with a first inference or model of the AI or ML service (see Figure 17, steps 1714 and 1720, para. 0152-0153); and communicate one or more messages with the network entity of the RAN using a second inference or model of the AI or ML service in accordance with the monitoring report provided by the UE (see Figures 19 and 17, which provide an example in which different models are used in communication between the RAN and UEs for different problems and radio/load constraints).
Regarding claims 2, 11, 33, and 41, Zhu discloses wherein the one or more processors are individually or collectively further operable to execute the code to cause the UE to: receive second monitoring input data from the network entity of the RAN that is configured to communicate with the UE and the AI or ML service, wherein the monitoring report is based at least in part on the second monitoring input data (see Figure 17, step 1720, data collection by the UE from the gNB or RAN, and step 1722, sending a training data report to the UE vendor or AI/ML service).
Regarding claims 3, 21, 34, and 50, Zhu discloses wherein, to transmit the monitoring report, the one or more processors are individually or collectively operable to execute the code to cause the UE to: transmit the monitoring report that indicates at least one of a minimum mean square error (MMSE) threshold, latency data, network loading information, uplink or downlink throughput information, packet loss data, or radio link failure (RLF) rate information associated with the first inference or model of the AI or ML service (see para. 0115: for example, metadata from gNB 102 may include gNB antenna configuration, CSI-RS beam configuration, etc. Further, gNB 102 metadata may be provided to the UE 104 in terms of a “gNB meta-ID”, but without revealing any gNB implementation. Metadata from UE 104 may include UE antenna configuration, SNR, RSRP, delay spread, average delay, time stamp, etc. UE 104 may decompose the UE metadata into a “UE meta-ID” that the UE vendor does not desire to disclose to gNB vendor 504, and the rest of the UE metadata).
Regarding claim 10, Zhu discloses a network entity (see Figure 8, gNB 102), comprising: one or more memories storing processor-executable code; and one or more processors coupled with the one or more memories and individually or collectively operable to execute the code to cause the network entity to (see Figure 22): receive monitoring input data from an artificial intelligence (AI) or machine learning (ML) service that is configured to communicate with the network entity and a user equipment (UE) (see Figure 17, step 1704, para. 0151); transmit a monitoring report to the AI or ML service based at least in part on the monitoring input data provided by the AI or ML service (see Figure 17, step 1722, para. 0154, UE 104 sends a training data report to UE vendor 502) and a reporting configuration of the network entity, the monitoring report comprising feedback information associated with a first inference or model of the AI or ML service (see Figure 17, steps 1714 and 1720, para. 0152-0153); and communicate one or more messages with the UE using a second inference or model of the AI or ML service in accordance with the monitoring report provided by the network entity (see Figures 19 and 17, which provide an example in which different models are used in communication between the RAN and UEs for different problems and radio/load constraints).
Regarding claim 20, Zhu discloses an artificial intelligence (AI) or machine learning (ML) service, comprising: one or more memories storing processor-executable code; and one or more processors coupled with the one or more memories and individually or collectively operable to execute the code to cause the artificial intelligence (AI) or machine learning (ML) service to: transmit monitoring input data to at least one of a user equipment (UE) or a network entity of a radio access network (RAN) (see Figure 17, step 1704, para. 0151); receive, from the at least one of the UE or the network entity, at least one monitoring report based at least in part on the monitoring input data provided by the AI or ML service (see Figure 17, step 1722, para. 0154, UE 104 sends a training data report to UE vendor 502) and a reporting configuration of the UE or the network entity, the at least one monitoring report comprising feedback information associated with a first inference or model of the AI or ML service (see Figure 17, steps 1714 and 1720, para. 0152-0153); and perform one or more lifecycle management (LCM) operations associated with the first inference or model of the AI or ML service in accordance with the at least one monitoring report provided by one or both of the UE or the network entity (see Figures 19 and 17, which provide an example in which different models are used in communication between the RAN and UEs for different problems and radio/load constraints).
Regarding claims 5, 36, and 55, Zhu discloses wherein the one or more processors are individually or collectively further operable to execute the code to cause the UE to: transmit first AI or ML input data to the AI or ML service (see para. 0021, training data report); and receive, from the AI or ML service, AI or ML output data generated based at least in part on the first AI or ML input data and comprising positioning data or feedback information associated with the first inference or model of the AI or ML service (see para. 0021, CSI feedback models).
Regarding claims 17 and 47, Zhu discloses wherein the second inference or model of the AI or ML service comprises the first inference or model of the AI or ML service (see para. 0154, at step 1728, model/data repository 504 performs a model update upon receiving the model training report).
Regarding claims 24 and 53, Zhu discloses wherein, to receive the at least one monitoring report, the one or more processors are individually or collectively operable to execute the code to cause the artificial intelligence (AI) or machine learning (ML) service to: receive the at least one monitoring report from one or both of the UE or the network entity in accordance with one or more periodic reporting criteria associated with the reporting configuration of the UE or the network entity (see para. 0093, scheduling reporting).
Regarding claim 26, Zhu discloses wherein the one or more processors are individually or collectively further operable to execute the code to cause the artificial intelligence (AI) or machine learning (ML) service to: receive first AI or ML input data from the UE; and transmit, to the UE, AI or ML output data generated based at least in part on the first AI or ML input data and comprising channel state information (CSI), positioning data, or feedback information generated using a second inference or model of the AI or ML service (see para. 0021, CSI feedback models).
Regarding claims 29, 45, 46, and 58, Zhu discloses wherein, to perform the one or more LCM operations, the one or more processors are individually or collectively operable to execute the code to cause the artificial intelligence (AI) or machine learning (ML) service to: transmit one or more control messages associated with updating or reconfiguring the first inference or model of the AI or ML service based at least in part on the monitoring input data (see para. 0154, step 1728, model/data repository 504 performs a model update upon receiving the model training report).
Regarding claims 30 and 59, Zhu discloses wherein the AI or ML service runs on one or more processors of a network node that is separate from the RAN (see Figures 5, 7, 16-17, and 19-20).
Regarding claims 31 and 60, Zhu discloses wherein the AI or ML service runs on one or more processors of the UE that is capable of performing one or more AI or ML functions (see Figure 5, para. 0107-0109, both vendors interact to provide an AI service to the network).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claim(s) 4, 6-9, 12-16, 18-19, 22-23, 25, 27-28, 35, 37-39, 42-44, 48, 51-52, 54, and 56-57 is/are rejected under 35 U.S.C. 103 as being unpatentable over Zhu in view of Qualcomm, Inc., “General Aspects of AI/ML Framework Discussion/Decision,” 3GPP TSG RAN WG1 #109-e, R1-2205023, e-meeting, May 9-20, 2022 (hereinafter “D1”).
Regarding claims 4, 22, 23, 35, 51, and 52, Zhu discloses all the subject matter but fails to mention wherein, to transmit the monitoring report, the one or more processors are individually or collectively operable to execute the code to cause the UE to: transmit the monitoring report to the AI or ML service based at least in part on the monitoring input data satisfying one or more event-based trigger conditions associated with the reporting configuration of the UE. However, D1 from a similar field of endeavor discloses wherein, to transmit the monitoring report, the one or more processors are individually or collectively operable to execute the code to cause the UE to: transmit the monitoring report to the AI or ML service based at least in part on the monitoring input data satisfying one or more event-based trigger conditions associated with the reporting configuration of the UE (see page 9, section 6.2, trigger to address performance issues). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate D1’s trigger conditions into Zhu’s monitoring report. This modification can be implemented in a monitoring report. The motivation for doing so would be to ensure robust on-device model performance.
Regarding claims 6, 27, 28, 37, 38, 56, and 57, Zhu discloses all the subject matter but fails to mention wherein the one or more processors are individually or collectively further operable to execute the code to cause the UE to: transmit, to the AI or ML service, a lifecycle management (LCM) control indication comprising a request to deactivate, switch, revert, or reconfigure a current model of the AI or ML service based at least in part on the monitoring input data. However, D1 from a similar field of endeavor discloses wherein the one or more processors are individually or collectively further operable to execute the code to cause the UE to: transmit, to the AI or ML service, a lifecycle management (LCM) control indication comprising a request to deactivate, switch, revert, or reconfigure a current model of the AI or ML service based at least in part on the monitoring input data (see page 9, section 6.2, performance monitoring, activation, deactivation, switching). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate D1’s LCM control indication into Zhu’s monitoring report. This modification can be implemented in a monitoring report. The motivation for doing so would be to ensure robust on-device model performance.
Regarding claim 7, Zhu discloses all the subject matter but fails to mention wherein the one or more processors are individually or collectively further operable to execute the code to cause the UE to: receive, from the AI or ML service, LCM control signaling associated with deactivating, switching, reverting, or reconfiguring the current model of the AI or ML service in accordance with the LCM control signaling. However, D1 from a similar field of endeavor discloses wherein the one or more processors are individually or collectively further operable to execute the code to cause the UE to: receive, from the AI or ML service, LCM control signaling associated with deactivating, switching, reverting, or reconfiguring the current model of the AI or ML service in accordance with the LCM control signaling (see page 9, section 6.2, performance monitoring, activation, deactivation, switching). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate D1’s LCM control signaling into Zhu’s monitoring report. This modification can be implemented in a monitoring report. The motivation for doing so would be to ensure robust on-device model performance.
Regarding claims 8, 39, and 48, Zhu discloses all the subject matter but fails to mention wherein the one or more processors are individually or collectively further operable to execute the code to cause the UE to: receive control signaling that indicates the reporting configuration of the UE, wherein transmitting the monitoring report is based at least in part on the control signaling. However, D1 from a similar field of endeavor discloses wherein the one or more processors are individually or collectively further operable to execute the code to cause the UE to: receive control signaling that indicates the reporting configuration of the UE, wherein transmitting the monitoring report is based at least in part on the control signaling (see section 7.3, pages 11 and 12, signaling for performance monitoring; see page 9, section 6.2, performance monitoring, activation, deactivation, switching). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate D1’s signaling into Zhu’s performance monitoring report. This modification can be implemented in a monitoring report. The motivation for doing so would be to ensure robust on-device model performance.
Regarding claim 9, Zhu discloses all the subject matter but fails to mention wherein the one or more messages are communicated with the network entity of the RAN based at least in part on one or more communication parameters, the one or more communication parameters based at least in part on the second inference or model of the AI or ML service. However, D1 from a similar field of endeavor discloses wherein the one or more messages are communicated with the network entity of the RAN based at least in part on one or more communication parameters, the one or more communication parameters based at least in part on the second inference or model of the AI or ML service (see section 7.3, pages 11-12, indication of performance monitoring results to UE or UE vendor). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate D1’s signaling into Zhu’s performance monitoring report. This modification can be implemented in a monitoring report. The motivation for doing so would be to ensure robust on-device model performance.
Regarding claims 12, 42, 43, and 44, Zhu discloses all the subject matter but fails to mention wherein, to transmit the monitoring report, the one or more processors are individually or collectively operable to execute the code to cause the network entity to: transmit the monitoring report that triggers deactivation of the first inference or model of the AI or ML service when the feedback information indicates that a performance of the first inference or model is below a threshold. However, D1 from a similar field of endeavor discloses wherein, to transmit the monitoring report, the one or more processors are individually or collectively operable to execute the code to cause the network entity to: transmit the monitoring report that triggers deactivation of the first inference or model of the AI or ML service when the feedback information indicates that a performance of the first inference or model is below a threshold (see section 4, page 5). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate D1’s signaling into Zhu’s performance monitoring report. This modification can be implemented in a monitoring report. The motivation for doing so would be to ensure robust on-device model performance.
Regarding claim 13, Zhu discloses all the subject matter but fails to mention wherein, to transmit the monitoring report, the one or more processors are individually or collectively operable to execute the code to cause the network entity to: transmit the monitoring report that triggers a switch from the first inference or model of the AI or ML service to the second inference or model of the AI or ML service when the feedback information indicates that a performance of the first inference or model is below a threshold. However, D1 from a similar field of endeavor discloses wherein, to transmit the monitoring report, the one or more processors are individually or collectively operable to execute the code to cause the network entity to: transmit the monitoring report that triggers a switch from the first inference or model of the AI or ML service to the second inference or model of the AI or ML service when the feedback information indicates that a performance of the first inference or model is below a threshold (see section 4, page 5). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate D1’s signaling into Zhu’s performance monitoring report. This modification can be implemented in a monitoring report. The motivation for doing so would be to ensure robust on-device model performance.
Regarding claim 14, Zhu discloses all the subject matter but fails to mention wherein the one or more processors are individually or collectively further operable to execute the code to cause the network entity to: receive, from the AI or ML service, AI or ML output data generated using the second inference or model of the AI or ML service. However, D1 from a similar field of endeavor discloses wherein the one or more processors are individually or collectively further operable to execute the code to cause the network entity to: receive, from the AI or ML service, AI or ML output data generated using the second inference or model of the AI or ML service (see section 4, page 5). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate D1’s signaling into Zhu’s performance monitoring report. This modification can be implemented in a monitoring report. The motivation for doing so would be to ensure robust on-device model performance.
Regarding claim 15, Zhu discloses all the subject matter but fails to mention wherein the one or more processors are individually or collectively further operable to execute the code to cause the network entity to: receive, from the AI or ML service, lifecycle management (LCM) control signaling that indicates a reconfiguration from an AI or ML-based model to a non-AI or ML-based model, wherein the reconfiguration is based at least in part on the monitoring report. However, D1 from a similar field of endeavor discloses wherein the one or more processors are individually or collectively further operable to execute the code to cause the network entity to: receive, from the AI or ML service, lifecycle management (LCM) control signaling that indicates a reconfiguration from an AI or ML-based model to a non-AI or ML-based model, wherein the reconfiguration is based at least in part on the monitoring report (see section 6.2, page 9). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate D1’s signaling into Zhu’s performance monitoring report. This modification can be implemented in a monitoring report. The motivation for doing so would be to ensure robust on-device model performance.
Regarding claim 16, Zhu discloses all the subject matter but fails to mention wherein the one or more processors are individually or collectively further operable to execute the code to cause the network entity to: receive, from the AI or ML service, lifecycle management (LCM) control signaling that indicates a reconfiguration from a non-AI or ML-based model to an AI or ML-based model, wherein the reconfiguration is based at least in part on the monitoring report. However, D1 from a similar field of endeavor discloses wherein the one or more processors are individually or collectively further operable to execute the code to cause the network entity to: receive, from the AI or ML service, lifecycle management (LCM) control signaling that indicates a reconfiguration from a non-AI or ML-based model to an AI or ML-based model, wherein the reconfiguration is based at least in part on the monitoring report (see section 6.2, page 9). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate D1’s signaling into Zhu’s performance monitoring report. This modification can be implemented in a monitoring report. The motivation for doing so would be to ensure robust on-device model performance.
Regarding claim 18, Zhu discloses all the subject matter but fails to mention wherein the one or more processors are individually or collectively further operable to execute the code to cause the network entity to: receive, from the AI or ML service, control signaling that indicates the reporting configuration of the network entity, wherein transmitting the monitoring report is based at least in part on the control signaling. However, D1 from a similar field of endeavor discloses wherein the one or more processors are individually or collectively further operable to execute the code to cause the network entity to: receive, from the AI or ML service, control signaling that indicates the reporting configuration of the network entity, wherein transmitting the monitoring report is based at least in part on the control signaling (see section 7.3, pages 11 and 12). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate D1’s signaling into Zhu’s performance monitoring report. This modification can be implemented in a monitoring report. The motivation for doing so would be to ensure robust on-device model performance.
Regarding claim 19, Zhu discloses all the subject matter but fails to mention wherein the one or more messages are communicated with the UE based at least in part on one or more communication parameters, the one or more communication parameters based at least in part on the second inference or model of the AI or ML service. However, D1 from a similar field of endeavor discloses wherein the one or more messages are communicated with the UE based at least in part on one or more communication parameters, the one or more communication parameters based at least in part on the second inference or model of the AI or ML service (see section 7.3, pages 11 and 12). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate D1’s signaling into Zhu’s performance monitoring report. This modification can be implemented in a monitoring report. The motivation for doing so would be to ensure robust on-device model performance.
Regarding claims 25 and 54, Zhu discloses all the subject matter but fails to mention wherein, to receive the at least one monitoring report, the one or more processors are individually or collectively operable to execute the code to cause the artificial intelligence (AI) or machine learning (ML) service to: receive respective monitoring reports from the UE and the network entity; and perform one or more LCM operations associated with the first inference or model of the AI or ML service based at least in part on the respective monitoring reports provided by the UE and the network entity. However, D1 from a similar field of endeavor discloses wherein, to receive the at least one monitoring report, the one or more processors are individually or collectively operable to execute the code to cause the artificial intelligence (AI) or machine learning (ML) service to: receive respective monitoring reports from the UE and the network entity; and perform one or more LCM operations associated with the first inference or model of the AI or ML service based at least in part on the respective monitoring reports provided by the UE and the network entity (see section 5, pages 5 and 6). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate D1’s signaling into Zhu’s performance monitoring report. This modification can be implemented in a monitoring report. The motivation for doing so would be to ensure robust on-device model performance.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMMAD S ANWAR whose telephone number is (571)270-5641. The examiner can normally be reached M-F 6-5 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Asad Nawaz can be reached at 571-272-3988. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
MOHAMMAD S. ANWAR
Primary Examiner
Art Unit 2463