Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
The instant application having Application No. 18/956,281 is presented for examination by the examiner.
Priority
Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been received.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 2, 6, 8, 10-12, 16, 18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. 2020/0311617 to Swan et al., hereinafter Swan, in view of NPL AWS Documentation published by Amazon Web Services in 2017 and captured by the Wayback Machine between Oct. 13 and Oct. 25, 2017, hereinafter Amazon.
As per claim 1, Swan teaches a model training system for providing cloud service, comprising a cloud data storage platform, a cloud model training platform, a cloud model storage platform, and a model inference platform (Figs. 1 and 4 and 0024), wherein:
the cloud data storage platform comprises at least one first processor and a first storage, wherein the at least one first processor is configured to receive training data uploaded to the cloud data storage platform by a data provider (0036 and 0069), and the first storage is configured to store the training data (0036 and 0069);
the cloud model storage platform comprises a second storage and the second storage is configured to store a to-be-trained model, wherein the to-be-trained model is uploaded to the cloud model storage platform by a model provider or a user (0029);
the cloud model training platform comprises at least one second processor configured to receive a model training creation instruction from a terminal of the user (0026 and 0027), obtain the to-be-trained model from the cloud model storage platform (0029), call the training data stored in the cloud data storage platform (0078), train the to-be-trained model by using the training data to obtain a training result model, and after obtaining the training result model (0078 and 0080), send the training result model to the cloud model storage platform (0080 and 0074);
the model inference platform [ML evaluator 128] comprises at least one third processor configured to call the training result model from the cloud model storage platform, and import to-be-processed data [evaluation data] into the training result model for model inference (0081).
Swan is silent in explicitly teaching a label of the training data provided by the data provider, wherein the label of the training data indicates content of the training data, and calling the training data when the user has successfully paid for the training data. On the other hand, Amazon teaches a machine learning platform on AWS that supplies labeled training data indicating the content of the training data (section Collecting Labeled Data) and calling/training the training data when the user has successfully paid for the training data (section Pricing for Amazon ML). As a service, customers can purchase time to train their models using labeled training data. Amazon explains that labeling data is one of the most important steps in solving an ML problem. Training models requires resources, and Amazon explains how a user pays only for what they use at an hourly rate, sustaining a pay-as-you-go model of machine learning. The claim is obvious because one of ordinary skill in the art can combine methods known before the effective filing date which produce predictable results. Using labeled training data and paying for the training yields a predictable result when combined with Swan.
As per claim 11, it is rejected for the same reasons as claim 1, as it recites the same operational steps.
As per claims 2 and 12, Swan teaches an authentication center, wherein the authentication center comprises at least one fourth processor and the at least one fourth processor is configured to receive an authentication permission request input by the user, and the authentication permission request is used to determine authority of the training data (0067 and 0068).
As per claims 6 and 16, Swan teaches the training data is configured with a data route, and wherein the cloud model training platform is further configured to call the training data according to the data route [receives the data specified by address given by user for training data stored in 160; 0026 and 0077].
As per claims 8 and 18, Swan teaches comprising an image platform [container data store; 170];
the image platform comprises at least one sixth processor and the at least one sixth processor is configured to store a model inference runtime environment, and the model inference runtime environment includes a runtime framework environment corresponding to the training result model (0031); and
the at least one third processor of the model inference platform is further configured to load the model inference runtime environment from the image platform (0037), and import to-be-processed data into the training result model for model inference in the model inference runtime environment (0059 and 0060).
As per claims 10 and 20, the combined system of Swan and Amazon teaches the training data is approved to be called for the user during an effective duration [charged per hour] after the training data is successfully paid for by the user [Amazon, Pricing for Amazon ML].
Claims 3-5 and 13-15 are rejected under 35 U.S.C. 103 as being unpatentable over Swan and Amazon as applied to claims 1 and 11 above, and further in view of U.S. Patent No. 10,810,169 to Chung et al., hereinafter Chung.
As per claims 3 and 13, Swan teaches a data retrieval platform, wherein the data retrieval platform is configured to acquire information of the training data (0077). Swan teaches the training data is addressable by a location/identifier and is stored in storage 160. Swan and Amazon do not explicitly teach that the information of the training data comprises at least one of data owner information of the training data and data upload date of the training data. Chung teaches a model training system in which the training data is stored with information including the data owner information of the training data (col. 9, line 60 – col. 10, line 10). Swan already teaches authorizing requests to train models using the training data (0067). The owner of the training data, if known, could obviously be used to make authorizations. Including who owns the training data would predictably make authorizing requests straightforward. The claim is obvious because one of ordinary skill in the art can combine known methods before the effective filing date which produce predictable results.
As per claims 4 and 14, Swan teaches wherein the data retrieval platform is further configured to: establish a data index table based on at least one of the labels of the training data and the information of the training data [training data is addressable by location identifier; 0026, 0029, and 0082];
receive a retrieval instruction input by the user, wherein the retrieval instruction comprises a retrieval keyword [location of modified container image; 0084 and 0095];
perform data retrieval in the data index table according to the retrieval instruction [retrieves and forwards modified container image to VM; 0084]; and
generate a retrieval result, wherein the retrieval result comprises the label of the training data corresponding to the keyword or the information of the training data corresponding to the keyword [retrains; 0084, generating updated model data; 0085; which is then stored and presented to user as results; 0082].
As per claims 5 and 15, Swan teaches sending the retrieval result to a user terminal to display the retrieval result for the user (0082 and 0083); and
receive a data selection instruction for the retrieval result sent by the user terminal, wherein the data selection instruction is used to instruct the data retrieval platform to determine the training data from the retrieval result [retraining can commence again based on performance of previous training result; 0084].
Claims 7 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Swan and Amazon as applied to claims 1 and 11, and further in view of U.S. Patent Application Publication No. 2021/0150405 to Kitano.
As per claims 7 and 17, Swan and Amazon are silent in explicitly teaching the data route comprises a uniform resource locator (URL) path of the training data. Swan does, however, teach that the network connecting the various cloud systems supports HTTP messages (0025). Kitano teaches that information used to retrieve training data can be a URL (0033 and 0036). Swan already uses location identifiers and an HTTP-capable network to retrieve the training data. Using a URL as the location identifier produces a predictable result. The claim is obvious because one of ordinary skill in the art can combine known methods before the effective filing date which produce predictable results.
Claims 9 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Swan and Amazon as applied to claims 1 and 11, and further in view of NPL entitled "Alternative Keys: Fine-Grained REST API Access to Your Machine Learning Resources," published May 3, 2013, hereinafter BigML.
As per claims 9 and 19, Swan and Amazon are silent in explicitly teaching that the at least one fourth processor of the authentication center is further configured to create a data token when the payment information indicates that the user has successfully paid for the training data, and that the at least one third processor of the model inference platform is further configured to retrieve the training data stored in the cloud data storage platform corresponding to the data token. BigML teaches that users who access the ML service have an API key that links all of the commands a user can perform on the ML platform (pgs. 1-2). BigML also mentions the key is used to create and retrieve models and evaluations from the ML account (pg. 1). This API key is a data token created for the user to access the account. Thus, it was known to use a data token to create models and retrieve data evaluations. The combination of Swan and Amazon already teaches a user of the service who creates an account. Using the API key of BigML would unlock the features, including paid model creation as taught by Amazon. The claim is obvious because one of ordinary skill in the art can combine methods known before the effective filing date which produce predictable results. The data token is merely a way to present a credential that authorizes commands within the platform. It is obvious to link the paid services to the token to gate the paid features of an ML service.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure and is listed on the enclosed PTO-892 form.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL R. VAUGHAN whose telephone number is (571)270-7316. The examiner can normally be reached on Monday - Friday, 9:30am - 5:30pm, EST. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Lynn Feild can be reached on (571) 272-2092. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MICHAEL R VAUGHAN/
Primary Examiner, Art Unit 2431