Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1, 2, 4-10, 13, 14, 19, 20, 28-30, 37, 38, 45 and 46 are pending per amendment.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 2, 4-10, 13, 14, 19, 20, 28-30, 37, 38, 45 and 46 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claims do not fall within at least one of the four categories of patent eligible subject matter because the claimed “first” through “seventh” functions are software per se.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 2, 4-10, 13, 14, 19, 20, 37, 38, 45 and 46 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Applicant-disclosed Wu (US 20210184989).
For claim 1, Wu discloses:
A system for supporting artificial intelligence/machine learning (AI/ML) model functions using a service-based architecture in a radio access network (RAN) intelligent controller (RIC) (par. 0077: RIC designed to support AI/ML applications), the system comprising:
a first function for managing AI/ML functions, and for exposing management and exposure services for the AI/ML functions (par. 0061: RAN service-based architecture with management plane functions; par. 0096: RIC Or22 accesses feedback on model performance; the RIC may scale ML model instances as needed by observing resource utilization);
a second function for providing services for deploying the AI/ML models in the at least one RIC (par. 0095: “The non-RT RIC may operate one or more ML engines, which are packaged software executable libraries that provide methods, routines, data types, etc., used to run ML models.”); and
a repository for storing the AI/ML models (par. 0095: Multiple ML catalogs made discoverable by the non-RT RIC: a design-time catalog, a training/deployment-time catalog, a run-time catalog.),
wherein the first function, the second function, and the repository are connected with the service-based architecture (par. 0061 & Fig. 3: Service-based architecture for managing network functions: control plane, user plane, management plane, data plane, compute plane, network exposure functions, application functions).
For claim 2, Wu discloses:
The system according to claim 1, further comprising at least one of: a third function for providing services for training the AI/ML models; a fourth function for providing services for certifying the AI/ML models; a fifth function for providing services for registering the AI/ML models; a sixth function for providing services for performing AI/ML model inference using an AI/ML model; and/or a seventh function for providing data management services for training the AI/ML models (par. 0093: AI/ML workflows include model training, inferences, updates).
For claim 4, Wu discloses:
A method of providing AI/ML services to a service consumer using the system of claim 1, the method comprising: receiving, from a service consumer, a message for requesting the first function to perform at least one of: model training; certification; registration; and/or deployment for an AI/ML model; and initiating, by the first function, a procedure to perform the at least one of: model training; certification; registration; and/or deployment for the AI/ML model (par. 0111: “The SMO internal interface termination may receive ML model deployment from SMO and route the trained ML model to the responding rApps”).
For claim 5, Wu discloses:
The method according to claim 4, wherein the message is for requesting to perform the model training and includes: a parameter indicating an AI/ML identity (ID) for identifying the AI/ML model; a parameter indicating an application type; a parameter indicating an application identity for identifying an application; a parameter indicating a destination that hosts a target application; a parameter indicating that the AI/ML model is a new AI/ML model; a parameter indicating an existing AI/ML model identity (ID) in a case where there is an existing AI/ML model; a parameter indicating a version number for indicating a version of the AI/ML model; and a list of input parameters for model training for indicating input data for training the AI/ML model; a list of output parameters for model training for indicating output data for AI/ML model training; and/or at least one parameter indicating a performance criteria for model training for use in measuring a performance of the model training (par. 0100, 0103: ML host generates output produced from training data).
For claim 6, Wu discloses:
The method according to claim 5, wherein the parameter indicating the application type indicates the application type to be a non-real-time RIC (Non-RT RIC) application (rApp) or a near-real-time RIC (Near-RT RIC) application (xApp) (par. 0106, 0110: rApps and xApps disclosed).
For claim 7, Wu discloses:
The method according to claim 5, wherein the parameter indicating the destination indicates the destination to be a non-real-time RIC (Non-RT RIC) or a near-real-time RIC (Near-RT RIC) (par. 0092, 0093: Non-RT RIC and near-RT RIC disclosed).
For claim 8, Wu discloses:
The method according to claim 5, wherein the input data for training the AI/ML model includes at least one of the following: measurement data from an open radio access network (O-RAN) central unit (OCU), an O-RAN distributed unit (O-DU), and/or an open RAN remote unit (O-RU); analytical data from at least one non-real-time RIC (Non-RT RIC) application (rApp); analytical data from at least one near-real-time RIC (Near-RT RIC) application (xApp); and/or enrichment information (EI) data from at least one external source (par. 0094, 0111: ML training data sources disclosed).
For claim 9, Wu discloses:
The method according to claim 5, wherein the output data for AI/ML model training includes at least one of the following: analytical data from at least one non-real-time RIC (Non-RT RIC) application (rApp); analytical data from at least one near-real-time RIC (Near-RT RIC) application (xApp); and/or data indicating an accuracy of model training (par. 0103: rApps store ML output data).
For claim 10, Wu discloses:
The method according to claim 5, wherein the performance criteria for model training includes at least one of the following: an accuracy threshold for indicating whether or not a target accuracy for model training has been successfully achieved; and/or an execution time for the trained AI/ML model (par. 0096: ML model prediction accuracy feedback disclosed).
For claim 13, Wu discloses:
The method according to claim 4, wherein the message is for requesting at least AI/ML model deployment and includes: a parameter indicating an AI/ML identity (ID) for identifying the AI/ML model; a parameter indicating an application type; a parameter indicating an application identity for identifying an application; a parameter indicating a destination that hosts a target application; a parameter indicating that the AI/ML model is a new AI/ML model; a parameter indicating an existing AI/ML model identity (ID) in a case where there is an existing AI/ML model; a parameter indicating a version number for indicating a version of the AI/ML model; and/or at least one deployment parameter for use in model deployment (par. 0095: ML model catalogs disclosed; further, a discovery mechanism indicating whether a particular ML model can be executed in a target ML inference host, and what number and type of ML models can be executed in the ML inference host (i.e., deployment parameters)).
For claim 14, Wu discloses:
The method according to claim 13, wherein the at least one deployment parameter includes at least one of the following: at least one parameter indicating at least one deployment option; the parameter indicating the application identity for identifying the application; a parameter indicating the destination that hosts the target application; a parameter indicating an application type; a parameter indicating a target application identity (ID); at least one parameter indicating required resources related to each deployment option; at least one configuration parameter; at least one parameter indicating a runtime environment; and/or at least one parameter indicating a version number (par. 0095: Discovery mechanism parameters include whether a particular ML model can be executed in a target ML inference host, and what number and type of ML models can be executed in the ML inference host).
For claim 19, Wu discloses:
A method of training an AI/ML model in a radio access network (RAN) intelligent controller (RIC), using the system of claim 2 in a case where the system includes the third function and the seventh function, the method comprising: the first function instructing the third function to train the AI/ML model (par. 0061: RAN service-based architecture with management plane functions; par. 0096: RIC Or22 accesses feedback on model performance; the RIC may scale ML model instances as needed by observing resource utilization); the third function requesting the seventh function to provide data to train the AI/ML model (par. 0095: Multiple ML catalogs made discoverable by the non-RT RIC: a design-time catalog, a training/deployment-time catalog, a run-time catalog.); the third function receiving, from the seventh function, the data to train the AI/ML model (par. 0094: “…the non-RT RIC Or212 may request or trigger ML model training in the training hosts regardless of where the model is deployed and executed. ML models may be trained and not currently deployed.”); and the third function performing model training for the AI/ML model based on the data, storing the trained AI/ML model at the repository, and informing the first function (par. 0095: “…the non-RT RIC Or212 may provide a query-able catalog for an ML designer/developer to publish/install trained ML models (e.g., executable software components)”).
For claim 20, Wu discloses:
The method according to claim 19, wherein the first function instructs the third function to train the AI/ML model using a message that includes at least one of: a parameter indicating an AI/ML identity (ID) for identifying the AI/ML model; a parameter indicating an application type; a parameter indicating an application identity for identifying an application; a parameter indicating a destination that hosts a target application; a parameter indicating that the AI/ML model is a new AI/ML model; a parameter indicating an existing AI/ML model identity (ID) in a case where there is an existing AI/ML model; a parameter indicating a version number for indicating a version of the AI/ML model; and a list of input parameters for model training for indicating input data for training the AI/ML model; a list of output parameters for model training for indicating output data for AI/ML model training; and/or at least one parameter indicating a performance criteria for model training for use in measuring a performance of the model training (par. 0094: “…the non-RT RIC Or212 may request or trigger ML model training in the training hosts regardless of where the model is deployed and executed. ML models may be trained and not currently deployed.”).
For claim 37, Wu discloses:
A method of registering an AI/ML model in a radio access network (RAN) intelligent controller (RIC), using the system of claim 2 in a case where the system includes the fifth function, the method comprising: the first function instructing the fifth function to register a trained AI/ML model; and the fifth function registering the trained AI/ML model for discovery by a service consumer (par. 0095: query-able catalog for ML models disclosed).
For claim 38, Wu discloses:
The method according to claim 37, wherein the first function instructs the fifth function to register the trained AI/ML model using a message including at least one of: a parameter indicating an AI/ML identity (ID) for identifying the AI/ML model; a parameter indicating an application type; a parameter indicating an application identity for identifying an application; a parameter indicating a destination that hosts a target application; a parameter indicating that the AI/ML model is a new AI/ML model; a parameter indicating an existing AI/ML model identity (ID) in a case where there is an existing AI/ML model; a parameter indicating a version number for indicating a version of the AI/ML model; and a list of input parameters for model training for indicating input data for training the AI/ML model; a list of output parameters for model training for indicating output data for AI/ML model training; and/or at least one parameter indicating a performance criteria for model training for use in measuring a performance of the model training (par. 0095: Discovery mechanism parameters include whether a particular ML model can be executed in a target ML inference host, and what number and type of ML models can be executed in the ML inference host).
For claim 45, Wu discloses:
A method of deploying an AI/ML model in a radio access network (RAN) intelligent controller (RIC), using the system of claim 1, the method comprising: the first function instructing the second function to deploy an AI/ML model; and the second function instructing a network function orchestrator to deploy the AI/ML model, whereby the network function orchestrator deploys the AI/ML model on an open cloud (par. 0096: RIC Or22 accesses feedback on model performance; the RIC may scale ML model instances as needed by observing resource utilization).
For claim 46, Wu discloses:
The method according to claim 45, wherein the first function instructs the second function to deploy the AI/ML model using a message including at least one of the following: a parameter indicating an AI/ML identity (ID) for identifying the AI/ML model; a parameter indicating an application type; a parameter indicating an application identity for identifying an application; a parameter indicating a destination that hosts a target application; a parameter indicating that the AI/ML model is a new AI/ML model; a parameter indicating an existing AI/ML model identity (ID) in a case where there is an existing AI/ML model; a parameter indicating a version number for indicating a version of the AI/ML model; and/or at least one deployment parameter for use in model deployment (par. 0096: RIC Or22 accesses feedback on model performance; the RIC may scale ML model instances as needed by observing resource utilization; par. 0095: ML model catalogs disclosed; further, a discovery mechanism indicating whether a particular ML model can be executed in a target ML inference host, and what number and type of ML models can be executed in the ML inference host (i.e., deployment parameters)).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 28, 29 and 30 are rejected under 35 U.S.C. 103 as being unpatentable over Wu (US 20210184989), in view of Ren (US 20240040404).
For claim 28, Wu discloses:
The method according to claim 19, but fails to explicitly disclose: “wherein the third function performs evaluation and validation of the trained AI/ML model prior to storing the trained AI/ML model at the repository.”
However, in a related field, Ren disclosed a machine learning preparation procedure that includes training, validation, and testing stages (par. 0121). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Ren’s teachings with Wu. The motivation to combine would have been to ensure the machine learning model is performing as expected based on the training data collection stage (Ren, par. 0121).
For claim 29, Wu-Ren discloses:
A method of certifying an AI/ML model in a radio access network (RAN) intelligent controller (RIC), using the system of claim 2 in a case where the system includes the fourth function, the method comprising: the first function instructing the fourth function to verify and certify a trained AI/ML model stored at the repository; the fourth function verifying and certifying the trained AI/ML model stored at the repository and labelling the trained AI/ML model as a certified model (Ren, par. 0121: ML training set validation following data training disclosed).
Moreover, to the extent Wu-Ren does not explicitly disclose labeling or indicating through a parameter that a model is “certified”, it would have been obvious to one of ordinary skill, apprised of the relevant art, to have done as much. The motivation would have been to utilize well-known “flagging” or other electronic indicators/metadata to achieve the well-known outcome of recording the state/status of data.
For claim 30, Wu-Ren discloses:
The method according to claim 29, wherein the first function instructs the fourth function to verify and certify the trained AI/ML model using a message including at least one of: a parameter indicating an AI/ML identity (ID) for identifying the AI/ML model; a parameter indicating an application type; a parameter indicating an application identity for identifying an application; a parameter indicating a destination that hosts a target application; a parameter indicating that the AI/ML model is a new AI/ML model; a parameter indicating an existing AI/ML model identity (ID) in a case where there is an existing AI/ML model; a parameter indicating a version number for indicating a version of the AI/ML model; and/or at least one certification parameter for use in model certification (Ren, par. 0121: ML training set validation following data training disclosed).
Moreover, to the extent Wu-Ren does not explicitly disclose labeling or indicating through a parameter that a model is “certified”, it would have been obvious to one of ordinary skill, apprised of the relevant art, to have done as much. The motivation would have been to utilize well-known “flagging” or other electronic indicators/metadata to achieve the well-known outcome of recording the state/status of data.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CLAYTON R WILLIAMS whose telephone number is (571)270-3801. The examiner can normally be reached M-F 10:00am - 6:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Nicholas Taylor, can be reached at 571-272-3889. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CLAYTON R WILLIAMS/Primary Examiner, Art Unit 2443