Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This action is in response to Applicant’s preliminary amendment filed October 25, 2024. Claims 16-27 and 33-52 have been cancelled. Claims 1, 3, 4, 7-12, 14, 15, 28 and 30-32 have been amended. Claims 1-15 and 28-32 are pending.
Information Disclosure Statement
The IDS filed 10/25/2024 has been considered.
Claim Objections
Claims 1 and 28 are objected to because of a minor informality. Each claim recites in part "the one or more indications of ML model support information indicating at least one of; one or more model indicators." The semicolon is a punctuation error. It is assumed the claim is to recite a colon in place of the semicolon.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-3, 7-8, 11-15, and 28-30 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Ren et al. (WO 2022077202, hereinafter referred to as "Ren").
Regarding claim 1, Ren teaches a method performed by a user equipment (UE) for obtaining a configuration for a machine learning (ML) model ([0083] "network may configure the ML processing model for the UE and indicate to the UE which ML processing model should be used for the application"), the method comprising:
providing one or more indications of ML model support information to a network node that describes one or more ML models available at the UE ([0082] "If the UE supports one or more versions of the model other than the common model, then the UE may indicate the supported versions"), the one or more indications of ML model support information indicating at least one of;
one or more model indicators ([0082] "If the UE supports one or more versions of the model other than the common model, then the UE may indicate the supported versions"); and
one or more model version indicators ([0082] "If the UE supports one or more versions of the model other than the common model, then the UE may indicate the supported versions"); and
obtaining a ML model identifier from the network node that identifies one of the one or more ML models available at the UE ([0083] "In response to the indication from the UE, the network may configure the UE to apply one of the versions of the ML processing model supported by the UE. That is, the network may configure the ML processing model for the UE and indicate to the UE which ML processing model should be used for the application").
Regarding claim 2, Ren teaches the method of claim 1, further comprising performing the identified ML model ([0078] "The network may configure the UE to apply the operator specific model").
Regarding claim 3, Ren teaches the method of claim 1, wherein the ML model identifier comprises a short model identifier, the short model identifier configured to be at least one of: unique for the UE for an identified ML model for at least a current session; a number of bits determined by the maximum number of concurrent configured models for the UE; comprised of additional fields including at least one of a model type code and a version number ([0087] "operator specific version of the ML processing model that is supported by the UE, a version identifier for the ML processing model configured for the UE specific machine learning model that is configured for the UE.").
Regarding claim 7, Ren teaches the method of claim 1, wherein obtaining the ML model identifier comprises receiving a version configuration based on the one or more model version indicators ([0087] "operator specific version of the ML processing model that is supported by the UE, a version identifier for the ML processing model configured for the UE specific machine learning model that is configured for the UE.").
Regarding claim 8, Ren teaches the method of claim 1, further comprising performing ML model handling-related signaling to and from the network using the ML model identifier to refer to the ML model available in the UE ([0087] "operator specific version of the ML processing model that is supported by the UE, a version identifier for the ML processing model configured for the UE specific machine learning model that is configured for the UE.").
Regarding claim 11, Ren teaches the method of claim 1, wherein obtaining the ML model identifier from the network comprises receiving a version configuration based on the one or more version indicators ([0034] The UE may indicate the supported model(s), a version of the model(s), and/or a public land mobile network (PLMN) identity (ID) to the network.).
Regarding claim 12, Ren teaches the method of claim 1, wherein obtaining the ML model identifier from the network comprises at least one of: obtaining while the UE is in Radio Resource Control Connected state, RRC_CONNECTED; obtaining when the UE enters RRC Idle state, RRC_IDLE; storing a short ML model identifier and its associated mapping when the UE enters RRC Inactive state, RRC_INACTIVE; restoring a short ML model identifier and its associated mapping when the UE initiates a RRC Resume procedure; receiving a mapping in an RRC message ([0079] The UE may provide the capability signaling to the network. As an example, the UE may provide the capability signaling in RRC signaling to the network. In other examples, the UE may indicate the capability to the network in other signaling. The UE may report its capability to the network, and the network may configure the model for the UE. The RRC signaling may include multiple information elements (IEs). The IEs may be combined in a single RRC message or provided as separate IEs. In some examples, the information may be provided in a single IE, e.g., that combines the information about support for different models.).
Regarding claim 13, Ren teaches the method of claim 12, wherein the RRC message comprises one of: RRC Reconfiguration; RRC Resume; RRC Release ([0056] FIG. 3 is a block diagram of a base station 310 in communication with a UE 350 in an access network. In the DL, IP packets from the EPC 160 may be provided to a controller/processor 375. The controller/processor 375 implements layer 3 and layer 2 functionality. Layer 3 includes a radio resource control (RRC) layer, and layer 2 includes a service data adaptation protocol (SDAP) layer, a packet data convergence protocol (PDCP) layer, a radio link control (RLC) layer, and a medium access control (MAC) layer. The controller/processor 375 provides RRC layer functionality associated with broadcasting of system information (e.g., MIB, SIBs), RRC connection control (e.g., RRC connection paging, RRC connection establishment, RRC connection modification, and RRC connection release)).
Regarding claim 14, Ren teaches the method of claim 1, wherein obtaining the ML model identifier from the network comprises; receiving a first Radio Resource Control (RRC) message indicating the addition of an ML model which has a first model identifier, wherein the message also indicates the assignment of a second model identifier comprising a mapping of the first model identifier; and receiving a second RRC message indicating the modification of the mapping or re-configuration of the ML model of the first model identifier, upon which the UE identifies the model of the first model identifier, by reception of the second model identifier ([0071] FIG. 6A illustrates a first application 600 of ML processing models for intermediate extracted features, such as for handover prediction and/or determination, including two ML processing models. The two ML processing models may include a first model (model-1) 602 on the UE side and a second model (model-2) 604 on the network side. The output of model-1 is the extracted feature for model-2 to make the handover decision. That is, the first model 602 may receive sensor data as input and feed back the extracted features based on the received sensor data to the second model 604. Based on the extracted feature, the second model can make the final determination, such as the handover decision, based on the intermediately extracted features. The first model 602 may be based on the network configuration. That is, the first model 602 may be configured by the network side.).
Regarding claim 15, Ren teaches the method of claim 1, wherein obtaining the ML model identifier from the network comprises; receiving a first Radio Resource Control, RRC, message indicating the addition of an ML model which has a first model identifier, wherein the message also indicates the assignment of a second model identifier comprising a mapping of the first model identifier; and receiving a second RRC message indicating the release of the mapping, upon which the UE releases the association between the first model identifier and the second model identifier ([0056] figure 3: RRC connection setup and release).
Claim 28 is similar to claim 1 and is therefore rejected under the same rationale.
Regarding claim 29, Ren teaches the method of claim 28, wherein signaling the ML model identifier comprises selecting a preferred ML model version based on the one or more version indicators and configuring the UE to use the preferred version ([0034] If the UE supports one or more versions of a network specific machine learning model, the network may select from the supported versions in order to configure the UE to apply a network specific machine learning model.).
Claim 30 is similar to claim 8 and is therefore rejected under the same rationale.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 9, 10, 31, and 32 are rejected under 35 U.S.C. 103 as being unpatentable over Ren et al. (WO 2022077202, hereinafter referred to as "Ren") in view of Patil et al. (US 12541705, hereinafter referred to as "Patil").
Regarding claim 9, Ren does not explicitly teach the method of claim 1, wherein the ML model identifier is shorter than at least one of: the one or more model indicators; and the one or more model version indicators. Patil teaches wherein the ML model identifier is shorter than at least one of: the one or more model indicators; and the one or more model version indicators (col. 8, line 59, to col. 9, line 5 - Model identifier 312 may represent a unique identifier associated with a particular instance of trained machine learning model 210. Model identifier 312 may enable a system and/or a user of a system to retrieve trained machine learning model 210 from model database 134. In some embodiments, model identifier 312 is represented as a string of characters (e.g., ABCD 1234). In some embodiments, model identifier 312 may indicate a type of machine learning model that trained machine learning model 210 corresponds. For example, model identifier 312 may indicate that trained machine learning model 210 corresponds to a CNN, a RNN, a GBM, or other. In some embodiments, model identifier 312 may enable system 100 to identify a data feed with which to obtain production data from.). Before the effective filing date of the invention, one of ordinary skill in the art would have been motivated to make the ML model identifier shorter than the model or version indicator because a shorter identifier allows information to be retrieved more quickly.
Regarding claim 10, Ren does not teach the method of claim 1, wherein the ML model identifier comprises a numerical value or a character string. Patil teaches wherein the ML model identifier comprises a numerical value or a character string (col. 8, line 59, to col. 9, line 5 - Model identifier 312 may represent a unique identifier associated with a particular instance of trained machine learning model 210. Model identifier 312 may enable a system and/or a user of a system to retrieve trained machine learning model 210 from model database 134. In some embodiments, model identifier 312 is represented as a string of characters (e.g., ABCD 1234). In some embodiments, model identifier 312 may indicate a type of machine learning model that trained machine learning model 210 corresponds. For example, model identifier 312 may indicate that trained machine learning model 210 corresponds to a CNN, a RNN, a GBM, or other. In some embodiments, model identifier 312 may enable system 100 to identify a data feed with which to obtain production data from.). Before the effective filing date of the invention, one of ordinary skill in the art would have been motivated to employ a numerical value or character string as the model identifier because this is a simple and straightforward way of identifying the models.
Regarding claim 31, Ren does not explicitly teach the method of claim 28, wherein the ML model identifier is unique among a first set of ML model identifiers signaled to the UE at least for a current connection to the UE. Patil teaches wherein the ML model identifier is unique among a first set of ML model identifiers signaled to the UE at least for a current connection to the UE (col. 8, line 59, to col. 9, line 5 - Model identifier 312 may represent a unique identifier associated with a particular instance of trained machine learning model 210). Before the effective filing date of the invention, one of ordinary skill in the art would have been motivated to use unique ML model identifiers so that the models are not confused with one another, thus reducing the risk of errors.
Claim 32 is similar to claim 9 and is therefore rejected under the same rationale.
Allowable Subject Matter
Claims 4-6 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Regarding claim 4, the prior art of record does not teach the method of claim 1, wherein the ML model identifier comprises a long model identifier, the long model identifier configured to be unique for the same model type or model version over multiple sessions and for multiple UEs, allowing consistent model identifier allocation over time and across the network.
Claims 5 and 6 depend from objected-to claim 4 and are therefore also objected to by virtue of their dependency.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Soldati et al., US 20230276264 - receiving, from a second RAN node in the communication network, information indicating whether a wireless device is capable of executing a Machine Learning (ML) model that is operable to provide an output on the basis of which at least one RAN operation performed by the wireless device may be configured.
TOMALA et al., US 20220279341 - radio resource control (RRC) procedures for machine learning (ML).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALINA N BOUTAH whose telephone number is (571)272-3908. The examiner can normally be reached M-F 7:00 AM - 3:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Umar Cheema can be reached at (571) 270-3037. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
ALINA BOUTAH
Primary Examiner
Art Unit 2458
/ALINA A BOUTAH/Primary Examiner, Art Unit 2458