DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
According to the paper filed December 29, 2025, claims 1-20 are pending for examination, with a January 16, 2020 priority date under 35 U.S.C. §111(a) and 35 U.S.C. §119(a)-(d) or (f).
By way of the present Amendment, claims 5, 7, 13, and 15 are amended. No claim is canceled, and claims 17-20 are newly added. The claim rejections under 35 U.S.C. §112(b) are withdrawn.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. §102 and §103 (or as subject to pre-AIA 35 U.S.C. §102 and §103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. §103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-7, 9-15, and 17-20 are rejected under 35 U.S.C. §103 as being unpatentable over Zhou et al. (WO 2018/227823), hereinafter Zhou, and further in view of Liu et al. (US 2021/0247426), hereinafter Liu, and Polleri et al. (US 2021/0081819), hereinafter Polleri.
Claim 1
“a first server and a second server, wherein the first server is located in a private cloud and is used for model inference, and the second server is located in a public cloud and is used for model training” Liu [0159] teaches servers implemented on a cloud platform, wherein the cloud platform includes a private cloud and a public cloud; and
Zhou p.14 l.31–p.15 l.3 teaches a user portrait management model that functions like the claimed “model inference”, wherein, after receiving an update instruction, the user portrait management model updates the current user portrait model according to algorithms; the user portrait management model may invoke one or more algorithms to train the user portrait model according to the service instance;
“obtaining, by the first server, a first training model from the second server; inputting, by the first server, input data into the first training model for model inference to obtain an inference result” Liu [0159] teaches servers implemented on a cloud platform, wherein the cloud platform includes a private cloud and a public cloud, and the server (i.e., a first server) obtains a first training model located on a public cloud platform (i.e., the second server); and
Zhou p.14 l.28-30 teaches that the user portrait management module periodically performs model updates; when the user terminal satisfies certain conditions, the user triggers an action to manage the portrait for the user;
“evaluating, by the first server, the first training model based on the inference result and a model evaluation metric to obtain an evaluation result of the model evaluation metric” Polleri [0073] teaches training metrics that help evaluate the performance of a trained model; the training metrics can include classification accuracy, logarithmic loss, area under curve, F1 score, mean absolute error, and mean squared error;
“if an evaluation result of at least one model evaluation metric is less than or equal to a preset threshold corresponding to the model evaluation metric, sending, by the first server, a retraining instruction for the first training model to the second server, wherein the retraining instruction instructs the second server to retrain the first training model” Polleri [0054] teaches a monitoring engine that receives the results of the model execution engine and compares the results with performance characteristics (e.g., KPI/QoS metrics); the monitoring engine can also provide adjustments, i.e., feedback, to one or more variables or to the selected machine learning model. Polleri [0084][0100] teaches that feedback from the monitoring engine can be sent to the model composition engine to provide recommendations to revise, i.e., retrain, the machine learning model within an expected range. Polleri [0095] further teaches monitoring QoS/KPI values to validate the model; the machine learning platform can inform the user of the monitored values and alert the user if the QoS/KPI metrics fall outside prescribed thresholds. The claimed “less than or equal to a preset threshold” is inherently taught by the disclosure of values “within” and “outside” prescribed thresholds. Zhou p.8 l.3-7 teaches communications between servers, including sending a training instruction; the “retraining” instruction is construed as an instruction to train “again.”
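For illustration only, and not as part of the record, the threshold logic recited in claim 1 (and its counterpart in claim 4) can be summarized in the following minimal Python sketch. All identifiers (evaluation_results, preset_thresholds, send_retraining_instruction) are hypothetical placeholders and do not appear in the cited references:

    # Hypothetical sketch of the claimed evaluation/retraining trigger.
    # A metric at or below its preset threshold triggers a retraining
    # instruction (claim 1); if every metric exceeds its threshold,
    # no instruction is sent (claim 4).
    def check_and_trigger(evaluation_results, preset_thresholds,
                          send_retraining_instruction):
        for metric, result in evaluation_results.items():
            if result <= preset_thresholds[metric]:
                send_retraining_instruction("first_training_model")
                return True   # retraining instruction sent
        return False          # all metrics exceed thresholds; skip sending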
Zhou, Liu, and Polleri disclose analogous art. Liu is analogous because it is in the field of resource management involving a plurality of processors and storage devices. Polleri is analogous because it is in the field of chatbots for defining a machine learning solution. Zhou does not expressly disclose the “public cloud and private cloud” and “evaluation metrics” features recited above; said features are taught in Liu and Polleri, respectively. Hence, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate said features of Liu (Liu [0159]: public cloud and private cloud) and Polleri (Polleri [0073]: training metrics can include classification accuracy, logarithmic loss, area under curve, F1 score, mean absolute error, and mean squared error) into Zhou to enhance its model training functions among private and public clouds and its model performance evaluation functions with evaluation metrics.
Claim 2
“sending, by the first server, the input data and the inference result to the second server, wherein the input data and the inference result are used to retrain the first training model” Liu [0018] teaches a preset target device configured to execute a first operation and feed back the result of the first operation to the wireless communication module, which is then configured to generate a third data packet of the first operation result and send it to the data routing module, further updating the device operating state and feeding the state back to the server.
Claim 3
“wherein the model evaluation metric comprises at least one of the following: accuracy of the inference result; precision of the inference result; recall of the inference result; F1-score of the inference result; or an area under a receiver operating characteristic (ROC) curve (AUC) of the inference result” Polleri [0073] teaches training metrics that help evaluate the performance of a trained model; the training metrics can include classification accuracy, logarithmic loss, area under curve, F1 score, mean absolute error, and mean squared error.
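For reference, the recited metrics carry their standard definitions. The following minimal Python sketch, with hypothetical inputs tp, fp, and fn (counts of true positives, false positives, and false negatives), illustrates precision, recall, and F1-score:

    # Standard definitions of three of the recited metrics (illustrative only).
    def precision(tp, fp):
        return tp / (tp + fp)   # fraction of positive calls that are correct

    def recall(tp, fn):
        return tp / (tp + fn)   # fraction of actual positives recovered

    def f1_score(tp, fp, fn):
        p, r = precision(tp, fp), recall(tp, fn)
        return 2 * p * r / (p + r)  # harmonic mean of precision and recall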
Claim 4
“if all evaluation results of model evaluation metrics exceed preset threshold corresponding to the model evaluation metrics, skipping sending, by the first server, a retraining instruction for the first training model to the second server” Polleri [0095][0096] teaches a machine learning application built from a machine learning library infrastructure, wherein the functionality includes monitoring values on an ongoing basis for QoS/KPI to validate the model; the machine learning platform can inform the user of the monitored values and alert the user only if the QoS/KPI metrics fall outside prescribed thresholds (i.e., no retraining is triggered while all metrics remain within the prescribed thresholds).
Claim 5
“determining, by the second server, a retraining data sample set based on the input data and the inference result” Liu [0159] teaches servers implemented on a cloud platform, wherein the cloud platform includes a private cloud and a public cloud; Zhou p.14 l.28-30 teaches that the user portrait management module periodically performs model updates; and Polleri [0061] teaches that the model execution engine uses hosted input data including a portion of the data stored at the data storage, wherein a portion of the hosted data can be identified as testing data (i.e., a retraining data sample set);
“retraining, by the second server, the first training model based on the retraining sample set to determine a second training model, wherein the second training model is used to replace the first training model” Polleri [0373] teaches that a threshold of improvement may be specified before replacement occurs; new versions of the pipeline may be tested by machine learning models before replacement occurs, and the new pipeline may run in shadow mode for a period of time before replacement;
“sending, by the second server, the second training model to the first server” Polleri [0413] teaches that an organization may provide services for one or more entities within the organization under a private cloud model. Claim 5 is also rejected for the rationale given for claim 1.
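For illustration only, the claim 5 flow on the second server can be sketched as follows in Python; retrain() and send_to_first_server() are hypothetical placeholders, not functions disclosed by Zhou, Liu, or Polleri:

    # Hypothetical sketch of the claimed second-server retraining flow.
    def handle_retraining_instruction(first_model, input_data,
                                      inference_results, retrain,
                                      send_to_first_server):
        # Determine a retraining data sample set from the input data and
        # the inference results received from the first server.
        retraining_sample_set = list(zip(input_data, inference_results))
        # Retrain the first training model to determine a second model.
        second_model = retrain(first_model, retraining_sample_set)
        # Send the second training model to the first server, which
        # replaces the first training model with it (cf. claims 17-18).
        send_to_first_server(second_model)
        return second_model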
Claim 6
“obtaining, by the second server, the input data and the inference result in response to the retraining instruction received from the first server” Liu [0159] teaches servers implemented on a cloud platform, wherein the cloud platform includes a private cloud and a public cloud, and the server (i.e., a first server) obtains a first training model located on a public cloud platform (i.e., the second server); and
Zhou p.14 l.28-30 teaches that the user portrait management module periodically performs model updates; when the user terminal satisfies certain conditions, the user triggers an action to manage the portrait for the user.
Claim 7
“annotating, by the second server, the input data to obtain the annotated input data, and storing, by the second server, the annotated input data and the inference result in the retraining data sample set” Polleri [0342] teaches annotating services with concepts such that, from a machine learning perspective, intelligent agents and a reasoner engine can determine formal service semantics.
Claims 9-12
Claims 9-12 are rejected for the similar rationale given for claims 1-4, respectively.
Claims 13-15
Claims 13-15 are rejected for the similar rationale given for claims 5-7, respectively.
Claim 17
“in response to sending the retraining instruction to the second server, receiving, by the first server, a second training model from the second server” Zhou p.8 l.3-7 teaches that user portrait platform 20 can communicate with user portrait server 30 on the cloud side; platform 20 can download the original user portrait model from server 30 and dynamically update the model; and user portrait server 30 can train the updated user portrait model based on the user portrait models updated by the plurality of user portrait platforms 20 of the plurality of terminals.
Claim 18
“replacing, by the first server, the first training model with the second training model” Zhou p.8 l.3-7 teaches downloading and uploading the user portrait model for training, and the update operation is a replacing operation as claimed.
Claim 19
“evaluating the first training model is performed periodically by the first server” Zhou p.8 l.3-7 teaches downloading the original user portrait model from server 30 and “dynamically” updating (i.e., periodically updating) the model.
Claim 20
“wherein the model evaluation metric is set by a user based on an application scenario” Polleri [0053] teaches one or more metrics can be used for evaluating the machine learning application, the metrics can be received from a user through a user interface.
Allowable Subject Matter
Claims 8 and 16 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Response to Arguments
Applicant's arguments filed December 29, 2025 have been fully considered but they are not persuasive.
Applicant argues that the cited art does not teach “if an evaluation result of at least one model evaluation metric is less than or equal to a preset threshold corresponding to the model evaluation metric, sending, by the first server, a retraining instruction for the first training model to the second server, wherein the retraining instruction instructs the second server to retrain the first training model”. Said argument is not persuasive.
The “sending” from a first server to a second server is construed as “communications” between servers, and a “retraining” instruction is construed as an instruction to train “again.” Accordingly, the argued feature is taught in Zhou page 8 lines 3-7, wherein user portrait platform 20 can communicate with user portrait server 30 on the cloud side, and platform 20 can download the original user portrait model from server 30 and dynamically update the model; user portrait server 30 can train the updated user portrait model based on the user portrait models updated by the plurality of user portrait platforms 20 of the plurality of terminals.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to RUAY HO whose telephone number is (571)272-6088. The examiner can normally be reached Monday to Friday 9am - 5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Mr. David Yi can be reached at 571-270-7519. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 or 571-272-1000.
/Ruay Ho/Patent Examiner, Art Unit 2126