DETAILED ACTION
This correspondence is responsive to the application filed on December 9, 2022. Claims 1-20 are pending in the case, with claims 1, 10 and 19 in independent form.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Applicant’s claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged.
Summary of Detailed Action
The drawings are objected to because of informalities.
Claims 9, 18 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite.
Claims 1-5, 8-14, 17-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claims 1, 8, 10, 17, 19 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Zhu et al.
Claims 2, 11, 20 are rejected under 35 U.S.C. 103 as being unpatentable over Zhu, and further in view of McKay et al.
Claims 3, 12 are rejected under 35 U.S.C. 103 as being unpatentable over Zhu, and further in view of Bright, Julian, Dynamic A/B testing for machine learning models with Amazon SageMaker MLOps projects.
Claims 4, 13 are rejected under 35 U.S.C. 103 as being unpatentable over Zhu, and further in view of Ralhan.
Claims 5, 14 are rejected under 35 U.S.C. 103 as being unpatentable over Zhu as applied to claims 1 and 10 above, and further in view of Theurer et al.
Claims 6, 15 are rejected under 35 U.S.C. 103 as being unpatentable over Zhu, and further in view of Shrivastava et al.
Claims 7, 16 are rejected under 35 U.S.C. 103 as being unpatentable over Zhu as applied to claims 1 and 10 above, and further in view of SAS Institute Inc. 2009, SAS® Model Manager 2.2: User’s Guide.
Claims 9, 18 are rejected under 35 U.S.C. 103 as being unpatentable over Zhu, and further in view of Wang et al.
Drawings
The drawings filed on December 9, 2022 are objected to because of informalities. Figures 2A-2E and 3-15 of the application drawings are objected to because of the black background and white or greyscale print, which are not legible or suitable for reproduction purposes. Drawing changes must be made by presenting replacement sheets which incorporate the desired changes and which comply with 37 CFR 1.84. An explanation of the changes made must be presented either in the drawing amendments section or the remarks section of the amendment paper. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). A replacement sheet must include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of the amended drawing(s) must not be labeled as “amended.” If the changes to the drawing figure(s) are not accepted by the examiner, applicant will be notified of any required corrective action in the next Office action. No further drawing submission will be required, unless applicant is notified. Identifying indicia, if provided, should include the title of the invention, inventor’s name, and application number, or docket number (if any) if an application number has not been assigned to the application. If this information is provided, it must be placed on the front of each sheet and within the top margin.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 9 and 18 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claim 9 depends from claim 1 and recites “determine the first model performs better than the second model based on the second model generating output with a same accuracy faster than the first model.” It is unclear how the first model can perform better than the second model when it is the second model that generates output with the same accuracy faster than the first model. It appears that the second model performs better than the first model based on the second model generating output with a same accuracy faster. Claim 18 recites a method that parallels the system of claim 9. For examination purposes, claims 9 and 18 are interpreted as determining that the first model performs better than the second model based on the second model generating output. Applicant may cancel claims 9 and 18 or amend claims 9 and 18 to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-5, 8-14, 17-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim(s) recite(s) subject matter at a high general level to determine, based on a comparison of a first model that is deployed as a primary model with a second model that is acting as a challenger model, that the second model performs better than the first model based on at least one performance metric; determine, based on a comparison of a characteristic of the first model with a characteristic of the second model, to skip a validation process for the second model; and establish the second model as the primary model in the deployment to replace the first model in the deployment, which are mental processes or concepts that can be performed in the human mind, including observation, evaluation, judgment, or opinion, or by a human with pen and paper. See MPEP 2106.04(a)(2)(I). This judicial exception is not integrated into a practical application and the claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Claims 1-20 recite one of the four statutory categories of patentable subject matter and belong to the statutory class(es) of a process (method claims 10-18), a machine (system/apparatus claims 1-9), and an article of manufacture (non-transitory computer readable media claims 19-20).
Claim 1 recites a system, thus a machine, one of the four statutory categories of patentable subject matter. However, claim 1 further recites to determine, based on a comparison of a first model that is deployed as a primary model with a second model that is acting as a challenger model, that the second model performs better than the first model based on at least one performance metric; determine, based on a comparison of a characteristic of the first model with a characteristic of the second model, to skip a validation process for the second model; and establish the second model as the primary model in the deployment to replace the first model in the deployment, which are mental processes or concepts that can be performed in the human mind, including observation, evaluation, judgment, or opinion, or by a human with pen and paper. See MPEP 2106.04(a)(2)(I). This judicial exception is not integrated into a practical application and the claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Claim 1 does not include additional elements that integrate the abstract idea into a practical application because the additional elements consist of:
A system, comprising: one or more processors, coupled to memory (This additional element amounts to merely the words to “apply it” (or an equivalent) or are mere instructions to implement an abstract idea or other exception on a computer. MPEP 2106.05(f).)
Thus, the claim is directed to the abstract idea.
Further, the additional elements, alone or in combination, do not provide significantly more than the abstract idea itself, because implementation on a computer (MPEP 2106.05(f)) cannot provide significantly more and the combination of additional elements does not provide an inventive concept. See also MPEP 2106.05(d)(II), MPEP 2106.05(g).
Thus, the claim is ineligible.
Claim 2, dependent on claim 1, recites only additional mental processes for wherein the characteristic comprises a blueprint, which are mental processes or concepts that can be performed in the human mind, including observation, evaluation, judgment, or opinion, or by a human with pen and paper. See MPEP 2106.04(a)(2)(I). This judicial exception is not integrated into a practical application and the claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Claim 3, dependent on claim 1, recites only additional mental processes for wherein the characteristic comprises a hyperparameter, which are mental processes or concepts that can be performed in the human mind, including observation, evaluation, judgment, or opinion, or by a human with pen and paper. See MPEP 2106.04(a)(2)(I). This judicial exception is not integrated into a practical application and the claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Claim 4, dependent on claim 1, recites only additional mental processes for wherein the characteristic comprises an order of operations, which are mental processes or concepts that can be performed in the human mind, including observation, evaluation, judgment, or opinion, or by a human with pen and paper. See MPEP 2106.04(a)(2)(I). This judicial exception is not integrated into a practical application and the claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Claim 5, dependent on claim 1, recites additional mental processes for determine, based on the first model, one or more performance metrics to use for the comparison of the first model with the second model; provide the determined one or more performance metrics; and receive a selection of the at least one performance metric from the one or more performance metrics, which are mental processes or concepts that can be performed in the human mind, including observation, evaluation, judgment, or opinion, or by a human with pen and paper. See MPEP 2106.04(a)(2)(I).
Claim 5 does not include additional elements that integrate the abstract idea into a practical application because the additional elements consist of:
for presentation via a prompt output by a graphical user interface rendered on a client device (This additional element amounts to merely the words to “apply it” (or an equivalent) or are mere instructions to implement an abstract idea or other exception on a computer. MPEP 2106.05(f).)
receive, responsive to the prompt, provided via the prompt (This additional element amounts to merely the words to “apply it” (or an equivalent) or are mere instructions to implement an abstract idea or other exception on a computer. MPEP 2106.05(f).)
Claim 8, dependent on claim 1, recites only additional mental processes for wherein the at least one performance metric comprises at least one of speed of performance, accuracy, or computation resource utilization, which are mental processes or concepts that can be performed in the human mind, including observation, evaluation, judgment, or opinion, or by a human with pen and paper. See MPEP 2106.04(a)(2)(I).
Claim 9, dependent on claim 1, recites only additional mental processes for determine the first model performs better than the second model based on the second model generating output with a same accuracy faster than the first model, which are mental processes or concepts that can be performed in the human mind, including observation, evaluation, judgment, or opinion, or by a human with pen and paper. See MPEP 2106.04(a)(2)(I).
Claim 10 recites a method, thus a process, one of the four statutory categories of patentable subject matter. However, claim 10 further recites determining, based on a comparison of a first model that is deployed as a primary model with a second model that is acting as a challenger model, that the second model performs better than the first model based on at least one performance metric; determining, based on a comparison of a characteristic of the first model with a characteristic of the second model, to skip a validation process for the second model; and establishing the second model as the primary model in the deployment to replace the first model in the deployment, which are mental processes or concepts that can be performed in the human mind, including observation, evaluation, judgment, or opinion, or by a human with pen and paper. See MPEP 2106.04(a)(2)(I).
Claim 10 does not include additional elements that integrate the abstract idea into a practical application because the additional elements consist of:
by one or more processors coupled to memory (This additional element amounts to merely the words to “apply it” (or an equivalent) or are mere instructions to implement an abstract idea or other exception on a computer. MPEP 2106.05(f).)
by the one or more processors (This additional element amounts to merely the words to “apply it” (or an equivalent) or are mere instructions to implement an abstract idea or other exception on a computer. MPEP 2106.05(f).)
Thus, the claim is directed to the abstract idea.
Further, the additional elements, alone or in combination, do not provide significantly more than the abstract idea itself, because implementation on a computer (MPEP 2106.05(f)) cannot provide significantly more and the combination of additional elements does not provide an inventive concept. See also MPEP 2106.05(d)(II), MPEP 2106.05(g).
Thus, the claim is ineligible.
Claim 19 recites a non-transitory computer-readable medium, thus an article of manufacture, one of the four statutory categories of patentable subject matter. However, claim 19 further recites determine, based on a comparison of a first model that is deployed as a primary model with a second model that is acting as a challenger model, that the second model performs better than the first model based on at least one performance metric; determine, based on a comparison of a characteristic of the first model with a characteristic of the second model, to skip a validation process for the second model; and establish the second model as the primary model in the deployment to replace the first model in the deployment, which are mental processes or concepts that can be performed in the human mind, including observation, evaluation, judgment, or opinion, or by a human with pen and paper. See MPEP 2106.04(a)(2)(I).
Claim 19 does not include additional elements that integrate the abstract idea into a practical application because the additional elements consist of:
A non-transitory computer-readable medium storing processor-executable instructions that, when executed by one or more processors, causes the one or more processors to (This additional element amounts to merely the words to “apply it” (or an equivalent) or are mere instructions to implement an abstract idea or other exception on a computer. MPEP 2106.05(f).)
Thus, the claim is directed to the abstract idea.
Further, the additional elements, alone or in combination, do not provide significantly more than the abstract idea itself, because implementation on a computer (MPEP 2106.05(f)) cannot provide significantly more and the combination of additional elements does not provide an inventive concept. See also MPEP 2106.05(d)(II), MPEP 2106.05(g).
Thus, the claim is ineligible.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim(s) 1, 8, 10, 17 and 19 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Zhu et al. (Pub. No. US 2017/0193066 A1, published July 6, 2017) hereinafter Zhu.
Regarding claim 1, Zhu teaches:
A system, comprising: one or more processors, coupled to memory (i.e., Zhu, Fig 9, para 173-181.), to:
determine, based on a comparison of a first model that is deployed as a primary model with a second model that is acting as a challenger model, that the second model performs better than the first model based on at least one performance metric (i.e., Champion/Challenger [0163] In an embodiment, a new computer model that is generated is used in place of the old computer model. In an alternative embodiment, the new computer model is applied against one portion of “live” data and the new computer model is applied against another portion of the live data. (The live data may be considered one of input 158 and is indicated by a user. For example, the live data may be all (or a portion of) members indicated in a member database, which may be part of cluster system 150.) Thus, both computer models are executed in parallel. For example, in the context of predicting user behavior, the new computer model is applied to 5% of users while the old computer model is applied to 95% of the users. [0164] One or more metrics are used to compare which computer model is performing better. An example metric is conversion rate. A conversion rate may be calculated by dividing the number of conversions by volume (or the number of predictions). If the old computer model is associated with a higher conversion rate relative to the conversion rate associated with the new computer model, then the new computer model is dropped or ceases to be used. If the new computer model results in a higher conversion rate relative to the conversion rate of the old computer model, then the new computer model replaces the old computer model (determine, based on a comparison of a first (old) model that is deployed as a primary model with a second (new) model that is acting as a challenger model, that the second model performs better than the first model based on at least one performance (conversion rate) metric). Zhu, para 164, 163.);
determine, based on a comparison of a characteristic of the first model with a characteristic of the second model, to skip a validation process for the second model (i.e., [0143] If multiple computer models are generated, one for each different set of parameter values, then the multiple computer models are generated (and, optionally, validated) in parallel or sequentially (determine, based on a comparison of a characteristic (set of parameter values characteristic) of the first model with a characteristic of the second model, to skip a validation process (skip an optional parallel validation process) for the second model). Zhu, para 143.); and
establish the second model as the primary model in the deployment to replace the first model in the deployment (i.e., [0151] Model deployment may be initiated based on user input. For example, FIG. 4F is a screenshot of an example user interface 460 that allows a user to specify one or more inputs (Figure 4F illustrates “Deploy model! This will override the existing deployment” and a Deploy button). In this example, a user is able to specify (1) a project path where results of running the selected computer model will be stored and (2) a number of weeks for scoring. The former field is optional. If optional, then the same project path specified previously (in FIG. 4A) may be used to store the results. Alternatively, there may be a default location or folder that system 100 creates for storing the results, in which case the project path field itself is optional. Zhu, Fig 4F, para 151.).
Regarding claim 8, which depends from claim 1 and recites:
wherein the at least one performance metric comprises at least one of speed of performance, accuracy, or computation resource utilization (i.e., Zhu teaches using performance metrics to compare which computer model is performing better and that performance metrics include accuracy and speed of learning performance. Zhu, para 164, 144, 157, 137, 139.).
Claims 10 and 17 recite methods that parallel the system of claims 1 and 8, respectively. Therefore, the analysis discussed above with respect to claims 1 and 8 also applies to claims 10 and 17, respectively. Accordingly, claims 10 and 17 are rejected based on substantially the same rationale as set forth above with respect to claims 1 and 8, respectively. More specifically, regarding “by one or more processors coupled to memory” (i.e., Zhu, Fig 9, para 173-181).
Claim 19 recites a non-transitory computer-readable medium that parallels the system of claim 1. Therefore, the analysis discussed above with respect to claim 1 also applies to claim 19. Accordingly, claim 19 is rejected based on substantially the same rationale as set forth above with respect to claim 1. More specifically, regarding “A non-transitory computer-readable medium storing processor-executable instructions that, when executed by one or more processors, causes the one or more processors to:” (i.e., Zhu, Fig 9, para 173-181).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 2, 11 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Zhu as applied to claims 1, 10 and 19 above, and further in view of McKay et al. (Pub. No. US 2021/0192394 A1, published June 24, 2021) hereinafter McKay.
Regarding claim 2, which depends from claim 1 and recites:
wherein the characteristic comprises a blueprint.
Zhu teaches the system of claim 1, including the characteristic. Zhu does not specifically disclose that the characteristic comprises a blueprint.
However, McKay teaches in the field related to machine learning optimization. McKay, abstract, para 2. McKay, which is analogous to the claimed invention because McKay is directed to evaluating performance of challenger and champion models, teaches that, [0099] Training request conditioning pipeline 610 is provided for conditioning training data so that it can be used to train to challenger ML models (characteristic comprises a blueprint (pipeline blueprint, as described by applicant in paragraph 37 of specification as originally filed)). Conditioned training data 612 is accumulated in a training data storage device, and is retrieved from this storage device when needed to train one or more ML models. In this embodiment, training request conditioning pipeline 610 is part of the conditioning layer that further includes request conditioning pipeline 632 which conditions input requests, and inference conditioning pipeline 634 which conditions results (inferences) from the champion model. Each conditioning pipeline, if included, may comprise one or more conditioning components as specified in the ML labeler's configuration. McKay, para 99
It would have been obvious to one of ordinary skill in the art before the effective filing date to implement the system for generating, comparing, and deploying models of Zhu using the characteristic comprising a blueprint of McKay, with a reasonable expectation of success, in order to provide for optimizing the performance of an ML model while reducing cost. McKay, para 2-4, 99. This would have provided the advantage of building challenger machine learning models with appropriate training data.
Claim 11 recites a method that parallels the system of claim 2. Therefore, the analysis discussed above with respect to claim 2 also applies to claim 11. Accordingly, claim 11 is rejected based on substantially the same rationale as set forth above with respect to claim 2.
Claim 20 recites a non-transitory computer-readable medium that parallels the system of claim 2. Therefore, the analysis discussed above with respect to claim 2 also applies to claim 20. Accordingly, claim 20 is rejected based on substantially the same rationale as set forth above with respect to claim 2.
Claim(s) 3 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Zhu as applied to claims 1 and 10 above, and further in view of Bright, Julian, Dynamic A/B testing for machine learning models with Amazon SageMaker MLOps projects, 31 pages, July 9, 2021, retrieved at https://aws.amazon.com/blogs/machine-learning/dynamic-a-b-testing-for-machine-learning-models-with-amazon-sagemaker-mlops-projects/, hereinafter Bright.
Regarding claim 3, which depends from claim 1 and recites:
wherein the characteristic comprises a hyperparameter.
Zhu teaches the system of claim 1, including the characteristic. Zhu does not specifically disclose that the characteristic comprises a hyperparameter.
However, Bright teaches in the field related to comparing challenger and champion models. Bright, page 2. Bright, which is analogous to the claimed invention because Bright is directed to comparing challenger and champion models, teaches that, Before deploying this model to all users, it’s a good idea to run this new or “challenger” model side-by-side with an existing “champion” model in an A/B test to find empirical evidence of the impact this new model has on your business metrics, such as click-through rate, conversion rate, or revenue. By collecting real-time feedback as your model is running, you can optimize how traffic is distributed between the champion and challenger models over the period of the test, which can often run for several weeks. Bright, page 2.
In this next step of the notebook, we run a SageMaker tuning job to improve on this initial model for our A/B test. The notebook is configured to run a total of nine jobs with three in parallel. This process takes approximately 30 minutes to complete. When this is complete, we can list these training jobs sorted by accuracy and see the hyperparameters (characteristic comprises a hyperparameter) identified in the best-performing training job.
[Screenshot from Bright: media_image1.png, 800 × 325, greyscale]
These metrics are also visible on the Experiments tab of the SageMaker project, where you can view and compare results.
If we’re happy with the performance, we can register and approve this model in our challenger model group. Bright, page 19, 2.
It would have been obvious to one of ordinary skill in the art before the effective filing date to implement the system for generating, comparing, and deploying models of Zhu using the characteristic comprising a hyperparameter of Bright, with a reasonable expectation of success, in order to provide confidence that the new challenger model outperforms the previous model before deploying the new model to all users and beginning the process again. Bright, page 2, 19. This would have provided the advantage of evidence showing which model is the highest-performing model.
Claim 12 recites a method that parallels the system of claim 3. Therefore, the analysis discussed above with respect to claim 3 also applies to claim 12. Accordingly, claim 12 is rejected based on substantially the same rationale as set forth above with respect to claim 3.
Claim(s) 4 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Zhu as applied to claims 1 and 10 above, and further in view of Ralhan (Pub. No. US 2019/0354809 A1, published November 21, 2019).
Regarding claim 4, which depends from claim 1 and recites:
wherein the characteristic comprises an order of operations.
Zhu teaches the system of claim 1, including the characteristic. Zhu does not specifically disclose that the characteristic comprises an order of operations.
However, Ralhan teaches in the field related to field of computational models. More specifically, embodiments disclosed herein relate to cataloging and evaluating multiple versions of computational models. Ralhan, para 2. Ralhan, which is analogous to the claimed invention because Ralhan is directed to evaluating models, teaches that, [0110] In some embodiments, computational model processes may include a champion-challenger process. In some embodiments, challenger models may be selected during model building 804. In general, a current computational model being used for a particular task may be considered a “champion” model. A challenger model may include a model using a different strategy, process (characteristic comprises an order of operations (characteristic of using different process and order of operations)), and/or the like than the champion model. A challenger model may be tested using various metrics to determine if performance is superior to a champion model. Ralhan, para 110.
It would have been obvious to one of ordinary skill in the art before the effective filing date to implement the system for generating, comparing, and deploying models of Zhu using the characteristic comprising an order of operations of Ralhan, with a reasonable expectation of success, in order to provide for determining new computational models using a less complex and more efficient system. Ralhan, para 3, 110. This would have provided the advantage of evaluating models with different processes, workflows, and operations.
Claim 13 recites a method that parallels the system of claim 4. Therefore, the analysis discussed above with respect to claim 4 also applies to claim 13. Accordingly, claim 13 is rejected based on substantially the same rationale as set forth above with respect to claim 4.
Claim(s) 5 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Zhu as applied to claims 1 and 10 above, and further in view of Theurer et al. (Pub. No. US 2019/0286071 A1, published September 19, 2019) hereinafter Theurer.
Regarding claim 5, which depends from claim 1 and further recites:
determine, based on the first model, one or more performance metrics to use for the comparison of the first model with the second model; provide the determined one or more performance metrics for presentation via a prompt output by a graphical user interface rendered on a client device; receive, responsive to the prompt, a selection of the at least one performance metric from the one or more performance metrics provided via the prompt.
Zhu teaches the system of claim 1 from which claim 5 depends, including determine, based on a comparison of a first model that is deployed as a primary model with a second model that is acting as a challenger model, that the second model performs better than the first model based on at least one performance metric. Zhu teaches [0164] One or more metrics are used to compare which computer model is performing better. An example metric is conversion rate. A conversion rate may be calculated by dividing the number of conversions by volume (or the number of predictions). If the old computer model is associated with a higher conversion rate relative to the conversion rate associated with the new computer model, then the new computer model is dropped or ceases to be used. If the new computer model results in a higher conversion rate relative to the conversion rate of the old computer model, then the new computer model replaces the old computer model (determine, based on the first model, one or more performance metrics (conversion metrics) to use for the comparison of the first model with the second model). Zhu, para 164, 163. Zhu does not explicitly disclose provide performance metrics for presentation via a prompt output by a graphical user interface rendered on a client device; receive, responsive to the prompt, a selection of the at least one performance metric from the one or more performance metrics provided via the prompt.
However, Theurer teaches in the field of selection of an algorithm and, more particularly, systems and methods to select a potential replacement algorithm based on algorithm execution context information. Theurer, para 1. Theurer, which is analogous to the claimed invention because Theurer is directed to champion challenger algorithms, teaches that, [0041] In some cases, a display may provide information to an operator or administrator and/or allow him or her to make adjustments to the system. For example, FIG. 7 illustrates a champion/challenger display 700 that might utilize an interactive graphical user interface. The display 700 might comprise a graphical overview 710 of a champion/challenger system including a real environment, an algorithm selection platform, an available algorithm catalog, an algorithm evaluation platform (provide performance metrics (algorithm performance metrics, para 48) for presentation via a prompt output by a graphical user interface rendered on a client device; receive, responsive to the prompt, a selection (view performance metric information prompt without adjustment selection) of the at least one performance metric from the one or more performance metrics provided via the prompt), etc. Selection of an element on the display 700 (e.g., via a touch screen or computer mouse pointer 720) might result in further information about that element being presented (e.g., in a pop-up window) and, in some cases, allow for an adjustment to be made in connection with that element (receive, responsive to the prompt, a selection (performance metric element information prompt without adjustment selection) of the at least one performance metric from the one or more performance metrics provided via the prompt). In addition, selection of a “Replace” icon 730 might trigger movement of a potential replacement algorithm to the real environment. Theurer, Fig 7, para 41, 48.
[0048] The algorithm identifier 902 may be, for example, a unique alphanumeric code identifying code, formula, applications, etc. that might be executed in a real or shadow environment. The metadata 904 might be any information that describes the algorithm, including, for example, inputs, outputs, resource requirements, performance metrics (provide performance metrics (algorithm performance metrics, para 48) for presentation via a prompt output by a graphical user interface rendered on a client device; receive, responsive to the prompt, a selection (view performance metric information prompt without adjustment selection) of the at least one performance metric from the one or more performance metrics provided via the prompt), etc. The context 906 might indicate any condition that impacts operation of the algorithm (e.g., time of day, weather, location, etc.). The status 908 might indicate if the algorithm is currently the champion, is being evaluated, is not suitable to replace a current algorithm, etc. Theurer, Fig 7, para 48, 41.
It would have been obvious to one of ordinary skill in the art before the effective filing date to implement the system generating, comparing and deploying models, including determine, based on the first model, one or more performance metrics to use for the comparison of the first model with the second model, of Zhu using the feature to provide performance metrics for presentation via a prompt output by a graphical user interface rendered on a client device and receive, responsive to the prompt, a selection of the at least one performance metric of Theurer, with a reasonable expectation of success, in order to provide information to an operator or administrator and allow adjustments to be made to the system. Theurer, para 41, 48. This would have provided the advantages of an interactive graphical user interface for presenting and selecting performance metric information.
Claim 14 recites a method that parallels the system of claim 5. Therefore, the analysis discussed above with respect to claim 5 also applies to claim 14. Accordingly, claim 14 is rejected based on substantially the same rationale as set forth above with respect to claim 5.
Claim(s) 6 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Zhu as applied to claims 1 and 10 above, and further in view of Shrivastava et al. (Pub. No. US 2021/0241182 A1, published August 5, 2021) hereinafter Shrivastava.
Regarding claim 6, which depends from claim 1 and further recites:
detect, subsequent to deployment of the second model as the primary model, an error with output or performance of the second model; and return, responsive to the detection, the first model as the primary model in the deployment.
Zhu teaches the system of claim 1 from which claim 6 depends, including the deployment of the second model as the primary model and the first model. Zhu does not explicitly disclose detect, subsequent to deployment of the second model as the primary model, an error with output or performance of the second model; and return, responsive to the detection, the first model as the primary model in the deployment.
However, Shrivastava teaches in the field of building, monitoring, evaluating, and rebuilding of machine learning models. Shrivastava, para 1. Shrivastava, which is analogous to the claimed invention because Shrivastava is directed to deploying and designating a champion and challenger model, teaches that, In another alternative embodiment, during the deployment cycle, a new champion may be designated from among the at least one challenger, and the replaced champion becomes one of challenger(s). Shrivastava, para 45. In some embodiments, the business unit may desire to redesignate among the plurality of machine learning models a new champion. For example, the business unit 704 may discover that one of the challenger(s) 710 produces more desirable prediction data (detect, subsequent to deployment of the second model as the primary model (challenger replaced champion as primary model), an error with output or performance of the second model (discover and detect less desirable prediction error with output or performance of the challenger model that replaced the champion); and return, responsive to the detection, the first model as the primary model in the deployment (redesignate and return the old champion as the new champion)). Shrivastava, para 76, 45.
It would have been obvious to one of ordinary skill in the art before the effective filing date to implement the system generating, comparing and deploying models of Zhu using the feature to detect, subsequent to deployment of the second model as the primary model, an error with output or performance of the second model and return, responsive to the detection, the first model as the primary model in the deployment of Shrivastava, with a reasonable expectation of success, in order to allow business units to avoid operating with outdated predictions, which may prevent business units from achieving optimal results. Shrivastava, para 3, 2, 45, 76. This would have provided the advantage of improved models and results.
Claim 15 recites a method that parallels the system of claim 6. Therefore, the analysis discussed above with respect to claim 6 also applies to claim 15. Accordingly, claim 15 is rejected based on substantially the same rationale as set forth above with respect to claim 6.
Claim(s) 7 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Zhu as applied to claims 1 and 10 above, and further in view of SAS Institute Inc. 2009, SAS® Model Manager 2.2: User’s Guide. Cary, NC: SAS Institute Inc., pages i-xiv, 1-316, retrieved at https://support.sas.com/documentation/onlinedoc/modelmgr/ug22.pdf, hereinafter SAS.
Regarding claim 7, which depends from claim 1 and further recites:
provide, responsive to the determination that the second model performs better than the first model and to skip the validation process, a prompt to a client device to request authorization to establish the second model as the primary model in the deployment; and establish the second model as the primary model in the deployment responsive to receiving authorization from the client device via the prompt.
Zhu teaches the system of claim 1 from which claim 7 depends, including, responsive to the determination that the second model performs better than the first model and to skip the validation process, establish the second model as the primary model in the deployment. Zhu does not explicitly disclose provide a prompt to a client device to request authorization to establish the second model as the primary model in the deployment; and establish the second model as the primary model in the deployment responsive to receiving authorization from the client device via the prompt.
However, SAS teaches in the field of model management. SAS, title, page i. SAS, which is analogous to the claimed invention because SAS is directed to managing models, including challenger and champion models, teaches the SAS Model Manager interface, toolbars, menus and template views. The SAS Model Manager toolbar provides shortcuts to perform such tasks as creating and organizing a project, importing a model file, and selecting a champion model. For more information about SAS Model Manager toolbars, see SAS Model Manager Menus, on page 15. The SAS Model Manager menus enable you to perform general tasks … The menus also enable you to perform tasks that are specific to the SAS Model Manager such as … selecting a champion model (providing a prompt to an approver client device to request approval and authorization to set and establish the second model as the primary champion model in the deployment; and establish the second model as the primary champion model in the deployment responsive to receiving the approver's mark authorization from the approver client device interface template view via the prompt). For more information about SAS Model Manager menus, see SAS Model Manager Menu, page 16. SAS, pages 10, 12-16, 9-18, 149-152, 171.
SAS Model Manager Toolbar and Menus, page 9: Set Champion Model/Default Version selects the champion model or the default version. When you highlight a model in the Project Tree, click the Set Champion Model/Default Version button to promote the model to champion status over all other models in the version folder that contains the models. … For more information, see Deploying Models (providing a prompt to an approver client device to request approval and authorization to set and establish the second model as the primary champion model in the deployment; and establish the second model as the primary champion model in the deployment responsive to receiving the approver's mark authorization from the approver client device interface template view via the prompt) on page 149. SAS, chapter 2, pages 15-16, 10, 12-16, 9-18, 149-152, 171.
It would have been obvious to one of ordinary skill in the art before the effective filing date to implement the system generating, comparing and deploying models, including, responsive to the determination that the second model performs better than the first model and to skip the validation process, establish the second model as the primary model in the deployment, of Zhu using the feature to provide a prompt to a client device to request authorization to establish the second model as the primary model in the deployment and establish the second model as the primary model in the deployment responsive to receiving authorization from the client device via the prompt of SAS, with a reasonable expectation of success, in order to provide for managing models and assessing candidate models. SAS, pages 3-4, 9-18, 149-153. This would have provided the advantages of user-friendly interfaces, toolbars and menus for prompting and approving the assessment and setting of a champion model for deployment by a model manager.
Claim 16 recites a method that parallels the system of claim 7. Therefore, the analysis discussed above with respect to claim 7 also applies to claim 16. Accordingly, claim 16 is rejected based on substantially the same rationale as set forth above with respect to claim 7.
Claim(s) 9 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Zhu as applied to claims 1 and 10 above, and further in view of Wang et al. (Pub. No. US 2022/0374781 A1, filed May 4, 2021) hereinafter Wang.
Regarding claim 9, which depends from claim 1 and further recites:
determine the first model performs better than the second model based on the second model generating output with a same accuracy faster than the first model.
Zhu teaches the system of claim 1, including the first model and the second model. Zhu teaches using metrics to compare which computer model is performing better, the metrics including accuracy and speed of learning. Zhu, para 164, 144, 157, 137, 139. Zhu does not specifically disclose determine the first model performs better than the second model based on the second model generating output with metrics better than the first model.
However, Wang teaches in the field of configurations for a machine learning challenger champion model. Wang, abstract, para 1-2. Wang, which is analogous to the claimed invention because Wang is directed to comparing machine learning challenger champion models, teaches that, [0051] In some examples, the performance of a challenger may be compared to the champion. For example, at 620, the challenger may be promoted to champion using a better test as indicated in equation 1, where a probabilistic lower and upper bound are denoted by L.sub.c,t and U.sub.c,t respectively, and ε.sub.C,t is the gap. Wang, Fig 6, para 51, 52. [0052] That is, the challenger must be better than the champion by a certain amount, or gap. This ensures that the challenger is promoted into a champion only when it is sufficiently better than the old champion (determine the first model performs better than the second model based on the second model generating output with metrics better than the first model), thereby avoiding constant challenger/champion switching when the challenger is slightly better than the champion. If the challenger is promoted to champion, then the method may proceed to 606, where the configuration oracle may generate new models for the challenger pool based on the new champion. Wang, para 52, 51.
It would have been obvious to one of ordinary skill in the art before the effective filing date to implement the system generating, comparing and deploying models, including using metrics to compare which computer model is performing better, the metrics including accuracy and speed of learning, of Zhu using the feature to determine the first model performs better than the second model based on the second model generating output with metrics better than the first model of Wang, with a reasonable expectation of success, in order to avoid constant challenger/champion switching when the challenger is only slightly better than the champion. Wang, para 52, 51, 2-3. This would have provided the advantages of monitoring and managing deployed models' performance and ensuring that a performance improvement is worth changing the model deployment.
Claim 18 recites a method that parallels the system of claim 9. Therefore, the analysis discussed above with respect to claim 9 also applies to claim 18. Accordingly, claim 18 is rejected based on substantially the same rationale as set forth above with respect to claim 9.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. US-20170193392-A1, US-20220300268-A1, US-20190213475-A1, US-9489630-B2.
Nigenda, David et al. Amazon SageMaker Model Monitor: A System for Real-Time Insights into Deployed Machine Learning Models, arXiv:2111.13657v1 [cs.LG], November 26, 2021, https://doi.org/10.48550/arXiv.2111.13657.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BARBARA LEVEL whose telephone number is (303)297-4748. The examiner can normally be reached Monday through Friday 8:00 AM - 5:00 PM MT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Mariela Reyes can be reached at (571) 27