DETAILED ACTION
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
2. This action is in response to the following communication: Amendment to application No. 16/537,215 filed on 09/08/2025.
3. Claims 1, 4-11 and 17-20 have been amended.
Claims 1-20 now remain pending.
Claims 1, 11 and 20 are independent claims.
Claim Objections
4. Claim 8 is objected to because of the following informalities:
Claim 8 contains a grammatical error on line 3, “updating the of the one or more…”; the examiner suggests using “updating the one or more…”.
Appropriate correction is required.
Response to Arguments
5. Applicant’s arguments with respect to newly amended independent claims 1, 11 and 20 and dependent claims 2-10 and 12-19 on pages 7-12 of the response have been fully considered, but they are not persuasive and are moot in view of the new ground(s) of rejection; see Gold (art of record) and Hopkins (art newly made of record) as applied below, as the references further teach such use.
Claim Rejections - 35 USC § 103
6. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
7. Claims 1, 4, 11 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Gold et al., US 2019/0121889 (hereinafter Gold) in view of Hopkins et al., U.S. Patent No. 10,735,299 (hereinafter Hopkins).
In regards to claim 1, Gold teaches:
A method, comprising: determining that a second version of one or more neural networks executed using a first set of hardware compute resources exceeds an improvement benchmark compared to a first version of the one or more neural networks executed using the first set of hardware compute resources (p. 51, [0371], see FIG. 29 also includes identifying (2902), from amongst a plurality of machine learning models, a preferred machine learning model. Identifying (2902) a preferred machine learning model from amongst a plurality of machine learning models may be carried out, for example, by comparing a plurality of machine learning models to identify which machine learning model performed the best relative to a predetermined set of criteria (e.g., most accurate, quickest time to achieve a particular accuracy threshold, and so on. In such an example, a plurality of metrics may be used in a weighted or unweighted fashion to identify (2902) the preferred machine learning model. In such an example, the artificial intelligence infrastructure (2402) may be configured to use such information, for example, by automatically pushing the preferred machine learning model into a production environment… providing a recommendation that a particular user should switch from one machine learning model to another machine learning model in their production environment, by automatically rolling back to a previous version of a machine learning model if it is determined to be preferred over a subsequent version of the machine learning model, and so on. In the example method depicted in FIG. 29, identifying (2902) a preferred machine learning model from amongst a plurality of machine learning models can include evaluating (2904) one or more machine learning models utilizing a predetermined model test. The predetermined model test may be embodied, for example, as an A/B testing model in which two machine learning models are compared) (emphasis added).
deploying the second version of the one or more neural networks to one or more web-based service clients based, at least in part, on the determination that the second version exceeds the improvement benchmark (p. 18, [0149], see exploring parameters and models, quickly testing with a smaller dataset, and iterating to converge on the most promising models to push into the production cluster), (p. 51, [0371], see the example method depicted in FIG. 29 also includes identifying (2902), from amongst a plurality of machine learning models, a preferred machine learning model. Identifying (2902) a preferred machine learning model from amongst a plurality of machine learning models may be carried out, for example, by comparing a plurality of machine learning models to identify which machine learning model performed the best relative to a predetermined set of criteria (e.g., most accurate, quickest time to achieve a particular accuracy threshold, and so on. In such an example, a plurality of metrics may be used in a weighted or unweighted fashion to identify (2902) the preferred machine learning model. In such an example, the artificial intelligence infrastructure (2402) may be configured to use such information, for example, by automatically pushing the preferred machine learning model into a production environment), (p. 11, [0107], see storage clusters are connected to clients using Ethernet or fiber channel in some embodiments. If multiple storage clusters are configured into a storage grid, the multiple storage clusters are connected using the Internet) and (p. 
14, [0128], see cloud services provider 302 may be configured to provide a variety of services to the storage system 306 and users of the storage system 306 … the cloud services provider 302 may be configured to provide services to the storage system 306 and users of the storage system 306 through the implementation of an infrastructure as a service (‘IaaS’) service model where the cloud services provider 302 offers computing infrastructure such as virtual machines and other resources as a service to subscribers. In addition, the cloud services provider 302 may be configured to provide services to the storage system 306 and users of the storage system 306 through the implementation of a platform as a service (‘PaaS’) service model where the cloud services provider 302 offers a development environment to application developers. Such a development environment may include, for example, an operating system, programming-language execution environment, database, web server, or other components that may be utilized by application developers to develop and run software solutions on a cloud platform. Furthermore, the cloud services provider 302 may be configured to provide services to the storage system 306 and users of the storage system 306 through the implementation of a software as a service (‘SaaS’) service model where the cloud services provider 302 offers application software, databases, as well as the platforms that are used to run the applications to the storage system 306 and users of the storage system 306, providing the storage system 306 and users of the storage system 306 with on-demand software and eliminating the need to install and run the application on local computers) (emphasis added). It is noted that production clusters offered by cloud services providers connected to clients using Ethernet, and IaaS and PaaS service models offering an execution environment, database, and web server to clients, are very much the same as such a web-based service.
Gold does not explicitly teach:
and on a determination that the one or more web-based service clients are executed using a second set of hardware compute resources that have second one or more processors corresponding to the first set of hardware compute resources.
However, Hopkins teaches such use: (Abstract, see a mapping of application operation classifications to server characteristics suited to the application operation classifications is maintained. Multiple servers currently available to process at least a portion of the client software application may be monitored, and each of the multiple servers may be characterized according to their performance and resources. The classifications of operations of the analyzed application may be compared to the characteristics of the multiple servers currently available using the mapping, and a server may be selected based on the comparison), (column 15, lines 11-17, see the components of the computer system/server 512 may include, but are not limited to, one or more processors or processing units 516, a system memory 528, and a bus 518 that couples various system components including system memory 528 to processor 516), (column 2, lines 65-67, see FIG. 5 is a block diagram of an embodiment of a computer system or cloud server in which one or more aspects of the present invention may be implemented) and (column 21, lines 27-36, see Software as a Service (SaaS): the capability provided to the consumer is to use the provider's applications running on a cloud infrastructure. The applications are accessible from various client devices through a thin client interface such as a web browser (e.g., web-based e-mail). The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings).
Gold and Hopkins are analogous art because they are from the same field of endeavor, AI model updates.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Gold and Hopkins before him or her, to modify the system of Gold to include the teachings of Hopkins as a system for management of client connections. Doing so would enhance the system of Gold, which is focused on ensuring reproducibility in an artificial intelligence infrastructure, by providing Gold with the ability to handle client applications with uniform performance, as suggested by Hopkins (column 21, lines 27-36; column 23, lines 26-28).
In regards to claim 4, Gold teaches:
the deployed second version of the one or more neural networks meets higher improvement benchmark thresholds than the one or more neural networks already used by the one or more web-based service clients (p. 51, [0371], see FIG. 29 also includes identifying (2902), from amongst a plurality of machine learning models, a preferred machine learning model. Identifying (2902) a preferred machine learning model from amongst a plurality of machine learning models may be carried out, for example, by comparing a plurality of machine learning models to identify which machine learning model performed the best relative to a predetermined set of criteria (e.g., most accurate, quickest time to achieve a particular accuracy threshold, and so on. In such an example, a plurality of metrics may be used in a weighted or unweighted fashion to identify (2902) the preferred machine learning model. In such an example, the artificial intelligence infrastructure (2402) may be configured to use such information, for example, by automatically pushing the preferred machine learning model into a production environment… providing a recommendation that a particular user should switch from one machine learning model to another machine learning model in their production environment, by automatically rolling back to a previous version of a machine learning model if it is determined to be preferred over a subsequent version of the machine learning model, and so on. In the example method depicted in FIG. 29, identifying (2902) a preferred machine learning model from amongst a plurality of machine learning models can include evaluating (2904) one or more machine learning models utilizing a predetermined model test. The predetermined model test may be embodied, for example, as an A/B testing model in which two machine learning models are compared).
In regards to claim 11, Gold teaches:
A non-transitory computer-readable medium storing computer instructions that, when executed by one or more processors, cause the one or more processors to perform a method comprising: determining that a second version of one or more neural networks executed using a first set of hardware compute resources exceeds an improvement benchmark compared to a first version of the one or more neural networks executed using the first set of hardware compute resources (p. 51, [0371], see FIG. 29 also includes identifying (2902), from amongst a plurality of machine learning models, a preferred machine learning model. Identifying (2902) a preferred machine learning model from amongst a plurality of machine learning models may be carried out, for example, by comparing a plurality of machine learning models to identify which machine learning model performed the best relative to a predetermined set of criteria (e.g., most accurate, quickest time to achieve a particular accuracy threshold, and so on. In such an example, a plurality of metrics may be used in a weighted or unweighted fashion to identify (2902) the preferred machine learning model. In such an example, the artificial intelligence infrastructure (2402) may be configured to use such information, for example, by automatically pushing the preferred machine learning model into a production environment… providing a recommendation that a particular user should switch from one machine learning model to another machine learning model in their production environment, by automatically rolling back to a previous version of a machine learning model if it is determined to be preferred over a subsequent version of the machine learning model, and so on. In the example method depicted in FIG. 29, identifying (2902) a preferred machine learning model from amongst a plurality of machine learning models can include evaluating (2904) one or more machine learning models utilizing a predetermined model test. 
The predetermined model test may be embodied, for example, as an A/B testing model in which two machine learning models are compared) (emphasis added).
deploying the second version of the one or more neural networks to one or more web-based service clients based, at least in part, on the determination that the second version exceeds the improvement benchmark (p. 18, [0149], see exploring parameters and models, quickly testing with a smaller dataset, and iterating to converge on the most promising models to push into the production cluster), (p. 51, [0371], see the example method depicted in FIG. 29 also includes identifying (2902), from amongst a plurality of machine learning models, a preferred machine learning model. Identifying (2902) a preferred machine learning model from amongst a plurality of machine learning models may be carried out, for example, by comparing a plurality of machine learning models to identify which machine learning model performed the best relative to a predetermined set of criteria (e.g., most accurate, quickest time to achieve a particular accuracy threshold, and so on. In such an example, a plurality of metrics may be used in a weighted or unweighted fashion to identify (2902) the preferred machine learning model. In such an example, the artificial intelligence infrastructure (2402) may be configured to use such information, for example, by automatically pushing the preferred machine learning model into a production environment), (p. 11, [0107], see storage clusters are connected to clients using Ethernet or fiber channel in some embodiments. If multiple storage clusters are configured into a storage grid, the multiple storage clusters are connected using the Internet) and (p. 
14, [0128], see cloud services provider 302 may be configured to provide a variety of services to the storage system 306 and users of the storage system 306 … the cloud services provider 302 may be configured to provide services to the storage system 306 and users of the storage system 306 through the implementation of an infrastructure as a service (‘IaaS’) service model where the cloud services provider 302 offers computing infrastructure such as virtual machines and other resources as a service to subscribers. In addition, the cloud services provider 302 may be configured to provide services to the storage system 306 and users of the storage system 306 through the implementation of a platform as a service (‘PaaS’) service model where the cloud services provider 302 offers a development environment to application developers. Such a development environment may include, for example, an operating system, programming-language execution environment, database, web server, or other components that may be utilized by application developers to develop and run software solutions on a cloud platform. Furthermore, the cloud services provider 302 may be configured to provide services to the storage system 306 and users of the storage system 306 through the implementation of a software as a service (‘SaaS’) service model where the cloud services provider 302 offers application software, databases, as well as the platforms that are used to run the applications to the storage system 306 and users of the storage system 306, providing the storage system 306 and users of the storage system 306 with on-demand software and eliminating the need to install and run the application on local computers) (emphasis added). It is noted that production clusters offered by cloud services providers connected to clients using Ethernet, and IaaS and PaaS service models offering an execution environment, database, and web server to clients, are very much the same as such a web-based service.
Gold does not explicitly teach:
and on a determination that the one or more web-based service clients are executed using a second set of hardware compute resources that have second one or more processors corresponding to the first set of hardware compute resources.
However, Hopkins teaches such use: (Abstract, see a mapping of application operation classifications to server characteristics suited to the application operation classifications is maintained. Multiple servers currently available to process at least a portion of the client software application may be monitored, and each of the multiple servers may be characterized according to their performance and resources. The classifications of operations of the analyzed application may be compared to the characteristics of the multiple servers currently available using the mapping, and a server may be selected based on the comparison), (column 15, lines 11-17, see the components of the computer system/server 512 may include, but are not limited to, one or more processors or processing units 516, a system memory 528, and a bus 518 that couples various system components including system memory 528 to processor 516), (column 2, lines 65-67, see FIG. 5 is a block diagram of an embodiment of a computer system or cloud server in which one or more aspects of the present invention may be implemented) and (column 21, lines 27-36, see Software as a Service (SaaS): the capability provided to the consumer is to use the provider's applications running on a cloud infrastructure. The applications are accessible from various client devices through a thin client interface such as a web browser (e.g., web-based e-mail). The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings).
Gold and Hopkins are analogous art because they are from the same field of endeavor, AI model updates.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Gold and Hopkins before him or her, to modify the system of Gold to include the teachings of Hopkins as a system for management of client connections. Doing so would enhance the system of Gold, which is focused on ensuring reproducibility in an artificial intelligence infrastructure, by providing Gold with the ability to handle client applications with uniform performance, as suggested by Hopkins (column 21, lines 27-36; column 23, lines 26-28).
In regards to claim 20, Gold teaches:
A system, comprising: a memory storing instructions; and one or more processors that execute the instructions to perform a method comprising: determining that a second version of one or more neural networks executed using a first set of hardware compute resources exceeds an improvement benchmark compared to a first version of the one or more neural networks executed using the first set of hardware compute resources (p. 51, [0371], see FIG. 29 also includes identifying (2902), from amongst a plurality of machine learning models, a preferred machine learning model. Identifying (2902) a preferred machine learning model from amongst a plurality of machine learning models may be carried out, for example, by comparing a plurality of machine learning models to identify which machine learning model performed the best relative to a predetermined set of criteria (e.g., most accurate, quickest time to achieve a particular accuracy threshold, and so on. In such an example, a plurality of metrics may be used in a weighted or unweighted fashion to identify (2902) the preferred machine learning model. In such an example, the artificial intelligence infrastructure (2402) may be configured to use such information, for example, by automatically pushing the preferred machine learning model into a production environment… providing a recommendation that a particular user should switch from one machine learning model to another machine learning model in their production environment, by automatically rolling back to a previous version of a machine learning model if it is determined to be preferred over a subsequent version of the machine learning model, and so on. In the example method depicted in FIG. 29, identifying (2902) a preferred machine learning model from amongst a plurality of machine learning models can include evaluating (2904) one or more machine learning models utilizing a predetermined model test. 
The predetermined model test may be embodied, for example, as an A/B testing model in which two machine learning models are compared) (emphasis added).
deploying the second version of the one or more neural networks to one or more web-based service clients based, at least in part, on the determination that the second version exceeds the improvement benchmark (p. 18, [0149], see exploring parameters and models, quickly testing with a smaller dataset, and iterating to converge on the most promising models to push into the production cluster), (p. 51, [0371], see the example method depicted in FIG. 29 also includes identifying (2902), from amongst a plurality of machine learning models, a preferred machine learning model. Identifying (2902) a preferred machine learning model from amongst a plurality of machine learning models may be carried out, for example, by comparing a plurality of machine learning models to identify which machine learning model performed the best relative to a predetermined set of criteria (e.g., most accurate, quickest time to achieve a particular accuracy threshold, and so on. In such an example, a plurality of metrics may be used in a weighted or unweighted fashion to identify (2902) the preferred machine learning model. In such an example, the artificial intelligence infrastructure (2402) may be configured to use such information, for example, by automatically pushing the preferred machine learning model into a production environment), (p. 11, [0107], see storage clusters are connected to clients using Ethernet or fiber channel in some embodiments. If multiple storage clusters are configured into a storage grid, the multiple storage clusters are connected using the Internet) and (p. 
14, [0128], see cloud services provider 302 may be configured to provide a variety of services to the storage system 306 and users of the storage system 306 … the cloud services provider 302 may be configured to provide services to the storage system 306 and users of the storage system 306 through the implementation of an infrastructure as a service (‘IaaS’) service model where the cloud services provider 302 offers computing infrastructure such as virtual machines and other resources as a service to subscribers. In addition, the cloud services provider 302 may be configured to provide services to the storage system 306 and users of the storage system 306 through the implementation of a platform as a service (‘PaaS’) service model where the cloud services provider 302 offers a development environment to application developers. Such a development environment may include, for example, an operating system, programming-language execution environment, database, web server, or other components that may be utilized by application developers to develop and run software solutions on a cloud platform. Furthermore, the cloud services provider 302 may be configured to provide services to the storage system 306 and users of the storage system 306 through the implementation of a software as a service (‘SaaS’) service model where the cloud services provider 302 offers application software, databases, as well as the platforms that are used to run the applications to the storage system 306 and users of the storage system 306, providing the storage system 306 and users of the storage system 306 with on-demand software and eliminating the need to install and run the application on local computers) (emphasis added). It is noted that production clusters offered by cloud services providers connected to clients using Ethernet, and IaaS and PaaS service models offering an execution environment, database, and web server to clients, are very much the same as such a web-based service.
Gold does not explicitly teach:
and on a determination that the one or more web-based service clients are executed using a second set of hardware compute resources that have second one or more processors corresponding to the first set of hardware compute resources.
However, Hopkins teaches such use: (Abstract, see a mapping of application operation classifications to server characteristics suited to the application operation classifications is maintained. Multiple servers currently available to process at least a portion of the client software application may be monitored, and each of the multiple servers may be characterized according to their performance and resources. The classifications of operations of the analyzed application may be compared to the characteristics of the multiple servers currently available using the mapping, and a server may be selected based on the comparison), (column 15, lines 11-17, see the components of the computer system/server 512 may include, but are not limited to, one or more processors or processing units 516, a system memory 528, and a bus 518 that couples various system components including system memory 528 to processor 516), (column 2, lines 65-67, see FIG. 5 is a block diagram of an embodiment of a computer system or cloud server in which one or more aspects of the present invention may be implemented) and (column 21, lines 27-36, see Software as a Service (SaaS): the capability provided to the consumer is to use the provider's applications running on a cloud infrastructure. The applications are accessible from various client devices through a thin client interface such as a web browser (e.g., web-based e-mail). The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings).
Gold and Hopkins are analogous art because they are from the same field of endeavor, AI model updates.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Gold and Hopkins before him or her, to modify the system of Gold to include the teachings of Hopkins as a system for management of client connections. Doing so would enhance the system of Gold, which is focused on ensuring reproducibility in an artificial intelligence infrastructure, by providing Gold with the ability to handle client applications with uniform performance, as suggested by Hopkins (column 21, lines 27-36; column 23, lines 26-28).
8. Claims 2, 3, 12, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Gold in view of Hopkins, and further in view of Brueckner et al., US 2016/0078361 (hereinafter Brueckner).
In regards to claims 1 and 11, the rejections above are incorporated respectively.
In regards to claim 2, Gold and Hopkins, in particular Gold, do not explicitly teach:
the one or more neural networks are to be used to perform inferencing operations and to output inferenced data to a software application installed on a client computing device.
However, Brueckner teaches such use: (Fig. 1, see process arrow flow from Raw data 130, Input record handlers 160, Feature Processors 162, ML Algorithm implementations 166, Workload distribution 175, server pools 185, MLS artifact repository 120, Clients 164, Machine learning service 189) and (p. 6, [0064], see the output 116 of the feature processing transformations may in turn be used as input for a selected machine learning algorithm 166, which may be executed in accordance with algorithm parameters 154 using yet another set of resources from pool 185. A wide variety of machine learning algorithms may be supported natively by the MLS libraries, including for example random forest algorithms, neural network algorithms, stochastic gradient descent algorithms, and the like).
Gold, Hopkins and Brueckner are analogous art because they are from the same field of endeavor, AI model updates.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Gold, Hopkins and Brueckner before him or her, to modify the system of Gold and Hopkins, in particular Gold, to include the teachings of Brueckner as a system for training machine learning models. Doing so would enhance the system of Gold, which is focused on ensuring reproducibility in an artificial intelligence infrastructure, by providing Gold with the ability to utilize data transformations to apply to data input, as suggested by Brueckner (p. 6, [0064]; p. 50, [0562]-[0564]).
In regards to claim 3, Gold teaches:
the improvement benchmark is indicative of performance of the one or more neural networks (p. 51, [0371], see FIG. 29 also includes identifying (2902), from amongst a plurality of machine learning models, a preferred machine learning model. Identifying (2902) a preferred machine learning model from amongst a plurality of machine learning models may be carried out, for example, by comparing a plurality of machine learning models to identify which machine learning model performed the best relative to a predetermined set of criteria (e.g., most accurate, quickest time to achieve a particular accuracy threshold, and so on. In such an example, a plurality of metrics may be used in a weighted or unweighted fashion to identify (2902) the preferred machine learning model. In such an example, the artificial intelligence infrastructure (2402) may be configured to use such information, for example, by automatically pushing the preferred machine learning model into a production environment… providing a recommendation that a particular user should switch from one machine learning model to another machine learning model in their production environment, by automatically rolling back to a previous version of a machine learning model if it is determined to be preferred over a subsequent version of the machine learning model, and so on. In the example method depicted in FIG. 29, identifying (2902) a preferred machine learning model from amongst a plurality of machine learning models can include evaluating (2904) one or more machine learning models utilizing a predetermined model test. The predetermined model test may be embodied, for example, as an A/B testing model in which two machine learning models are compared).
In regards to claim 12, Gold and Hopkins, in particular Gold, do not explicitly teach:
the one or more neural networks and a software application using the one or more neural networks are installed on a client computing device.
However, Brueckner teaches such use: (Fig. 1, see process arrow flow from Raw data 130, Input record handlers 160, Feature Processors 162, ML Algorithm implementations 166, Workload distribution 175, server pools 185, MLS artifact repository 120, Clients 164, Machine learning service 189) and (p. 6, [0064], see the output 116 of the feature processing transformations may in turn be used as input for a selected machine learning algorithm 166, which may be executed in accordance with algorithm parameters 154 using yet another set of resources from pool 185. A wide variety of machine learning algorithms may be supported natively by the MLS libraries, including for example random forest algorithms, neural network algorithms, stochastic gradient descent algorithms, and the like).
Gold, Hopkins and Brueckner are analogous art because they are from the same field of endeavor, AI model updates.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Gold, Hopkins and Brueckner before him or her, to modify the system of Gold and Hopkins, in particular Gold, to include the teachings of Brueckner, a system for training machine learning models. Doing so would enhance the system of Gold, which is focused on ensuring reproducibility in an artificial intelligence infrastructure, because it would provide Gold with the ability to utilize data transformations to apply to data input, as suggested by Brueckner (p. 6, [0064]; p. 50, [0562]-[0564]).
In regards to claim 17, Gold and Hopkins, in particular Gold, do not explicitly teach:
the one or more neural networks perform inferencing operations and provide inferenced data to a software application by at least: providing input data by the software application to the one or more neural networks, processing the input data by the one or more neural networks to generate the inferenced data, and outputting the inferenced data to the software application.
However, Brueckner teaches such use: (Fig. 1, see process arrow flow from Raw data 130, Input record handlers 160, Feature Processors 162, ML Algorithm implementations 166, Workload distribution 175, server pools 185, MLS artifact repository 120, Clients 164, Machine learning service 189) and (p. 6, [0064], see the output 116 of the feature processing transformations may in turn be used as input for a selected machine learning algorithm 166, which may be executed in accordance with algorithm parameters 154 using yet another set of resources from pool 185. A wide variety of machine learning algorithms may be supported natively by the MLS libraries, including for example random forest algorithms, neural network algorithms, stochastic gradient descent algorithms, and the like).
Gold, Hopkins and Brueckner are analogous art because they are from the same field of endeavor, AI model updates.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Gold, Hopkins and Brueckner before him or her, to modify the system of Gold and Hopkins, in particular Gold, to include the teachings of Brueckner, a system for training machine learning models. Doing so would enhance the system of Gold, which is focused on ensuring reproducibility in an artificial intelligence infrastructure, because it would provide Gold with the ability to utilize data transformations to apply to data input, as suggested by Brueckner (p. 6, [0064]; p. 50, [0562]-[0564]).
9. Claims 5-10, 15-16 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Gold in view of Hopkins, and further in view of Novielli et al., U.S. 2019/0278870 (hereinafter Novielli).
In regards to claims 1 and 11, the rejections above are incorporated, respectively.
In regards to claim 5, Gold and Hopkins, in particular Gold, do not explicitly teach:
further comprising generating the second version of the one or more neural networks by at least retraining the one or more neural networks.
However, Novielli teaches such use: (p. 3, [0050], see due to the number of queries processed each second by the search system 104 and the amount of feedback received by the machine learning model creation system 106, a sufficient amount of data to retrain the model can be received very quickly. Thus, the machine learning model can be retrained, a new model created… In one embodiment, the trigger event can be time so that the machine learning model is updated (i.e., an existing model retrained or a new model created) on a periodic or aperiodic schedule. In another embodiment, the trigger event can be machine learning model performance so that the machine learning model is updated when the performance falls below a threshold).
Gold, Hopkins and Novielli are analogous art because they are from the same field of endeavor, AI model updates.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Gold, Hopkins and Novielli before him or her, to modify the system of Gold and Hopkins, in particular Gold, to include the teachings of Novielli, a system for loading search results using machine learning. Doing so would enhance the system of Gold, which is focused on ensuring reproducibility in an artificial intelligence infrastructure, because it would provide Gold with the ability to utilize data collected from user devices as feedback to train machine learning models, as suggested by Novielli (p. 3, [0050]; p. 11, [0209]).
In regards to claim 6, Gold and Hopkins, in particular Gold, do not explicitly teach:
automatically distributing the one or more updates to the one or more neural networks by at least distributing one or more updated parameters of the one or more neural networks to a client computing device.
However, Novielli teaches such use: (p. 3, [0050], see in yet another embodiment, performance of a test machine learning model triggers updating the machine learning model to a wider group of users (see the A/B testing discussion below). Thus, when a test machine learning model (i.e., model A) performance exceeds another deployed model (i.e., model B), the A model can be deployed to replace the B model. In other embodiments a combination of trigger events can be used) and (p. 4, [0053], see the model creation process 122 can distribute a whole new model, updated coefficients for an existing model, or both… Coefficients in this context are the weights or other parameters that turn an untrained learning model into a trained learning model) and (p. 4, [0055], see various machine learning models that receive an input list and output an output list can be used in embodiments of the current disclosure … appropriate models are those that are able to classify and/or rank a list of input items, including supervised, unsupervised and semi-supervised models. Various models fall into the categories such as deep learning models, ensemble models, neural networks) (emphasis added).
Gold, Hopkins and Novielli are analogous art because they are from the same field of endeavor, AI model updates.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Gold, Hopkins and Novielli before him or her, to modify the system of Gold and Hopkins, in particular Gold, to include the teachings of Novielli, a system for loading search results using machine learning. Doing so would enhance the system of Gold, which is focused on ensuring reproducibility in an artificial intelligence infrastructure, because it would provide Gold with the ability to utilize data collected from user devices as feedback to train machine learning models, as suggested by Novielli (p. 3, [0050]; p. 11, [0209]).
In regards to claim 7, Gold and Hopkins, in particular Gold, do not explicitly teach:
the automatically distributing the one or more updates to the one or more neural networks comprises distributing one or more updated hyperparameter adjustments for the one or more neural networks to the client computing device.
However, Novielli teaches such use: (p. 3, [0050], see in yet another embodiment, performance of a test machine learning model triggers updating the machine learning model to a wider group of users (see the A/B testing discussion below). Thus, when a test machine learning model (i.e., model A) performance exceeds another deployed model (i.e., model B), the A model can be deployed to replace the B model. In other embodiments a combination of trigger events can be used) and (p. 4, [0053], see the model creation process 122 can distribute a whole new model, updated coefficients for an existing model, or both… Coefficients in this context are the weights or other parameters that turn an untrained learning model into a trained learning model).
Gold, Hopkins and Novielli are analogous art because they are from the same field of endeavor, AI model updates.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Gold, Hopkins and Novielli before him or her, to modify the system of Gold and Hopkins, in particular Gold, to include the teachings of Novielli, a system for loading search results using machine learning. Doing so would enhance the system of Gold, which is focused on ensuring reproducibility in an artificial intelligence infrastructure, because it would provide Gold with the ability to utilize data collected from user devices as feedback to train machine learning models, as suggested by Novielli (p. 3, [0050]; p. 11, [0209]).
In regards to claim 8, Gold and Hopkins, in particular Gold, do not explicitly teach:
the automatically distributing the one or more updates to the one or more neural networks comprises updating the one or more neural networks of the client computing device with at least one of a layer substitution, a layer fusing, or input stacking.
However, Novielli teaches such use: (p. 3, [0050], see in yet another embodiment, performance of a test machine learning model triggers updating the machine learning model to a wider group of users (see the A/B testing discussion below). Thus, when a test machine learning model (i.e., model A) performance exceeds another deployed model (i.e., model B), the A model can be deployed to replace the B model. In other embodiments a combination of trigger events can be used).
Gold, Hopkins and Novielli are analogous art because they are from the same field of endeavor, AI model updates.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Gold, Hopkins and Novielli before him or her, to modify the system of Gold and Hopkins, in particular Gold, to include the teachings of Novielli, a system for loading search results using machine learning. Doing so would enhance the system of Gold, which is focused on ensuring reproducibility in an artificial intelligence infrastructure, because it would provide Gold with the ability to utilize data collected from user devices as feedback to train machine learning models, as suggested by Novielli (p. 3, [0050]; p. 11, [0209]).
In regards to claim 9, Gold and Hopkins, in particular Gold, do not explicitly teach:
wherein generating the second version of the one or more neural networks is performed automatically.
However, Novielli teaches such use: (p. 3, [0050], see in yet another embodiment, performance of a test machine learning model triggers updating the machine learning model to a wider group of users (see the A/B testing discussion below). Thus, when a test machine learning model (i.e., model A) performance exceeds another deployed model (i.e., model B), the A model can be deployed to replace the B model. In other embodiments a combination of trigger events can be used) and (p. 4, [0055], see various machine learning models that receive an input list and output an output list can be used in embodiments of the current disclosure… appropriate models are those that are able to classify and/or rank a list of input items, including supervised, unsupervised and semi-supervised models. Various models fall into the categories such as deep learning models, ensemble models, neural networks) (emphasis added).
Gold, Hopkins and Novielli are analogous art because they are from the same field of endeavor, AI model updates.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Gold, Hopkins and Novielli before him or her, to modify the system of Gold and Hopkins, in particular Gold, to include the teachings of Novielli, a system for loading search results using machine learning. Doing so would enhance the system of Gold, which is focused on ensuring reproducibility in an artificial intelligence infrastructure, because it would provide Gold with the ability to utilize data collected from user devices as feedback to train machine learning models, as suggested by Novielli (p. 3, [0050]; p. 11, [0209]).
In regards to claim 10, Gold and Hopkins, in particular Gold, do not explicitly teach:
automatically distributing the second version of the one or more neural networks to the one or more web-based service clients when an updated version of the one or more neural networks meets or exceeds one or more thresholds for improvement relating to at least one of accuracy, quality, or performance.
However, Novielli teaches such use: (p. 3, [0050], see in another embodiment, the trigger event can be machine learning model performance so that the machine learning model is updated when the performance falls below a threshold such as when the number or rate of incorrect prefetches exceeds a threshold and/or the number or rate of correct prefetches falls below a threshold. In yet another embodiment, performance of a test machine learning model triggers updating the machine learning model to a wider group of users (see the A/B testing discussion below). Thus, when a test machine learning model (i.e., model A) performance exceeds another deployed model (i.e., model B), the A model can be deployed to replace the B model. In other embodiments a combination of trigger events can be used).
Gold, Hopkins and Novielli are analogous art because they are from the same field of endeavor, AI model updates.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Gold, Hopkins and Novielli before him or her, to modify the system of Gold and Hopkins, in particular Gold, to include the teachings of Novielli, a system for loading search results using machine learning. Doing so would enhance the system of Gold, which is focused on ensuring reproducibility in an artificial intelligence infrastructure, because it would provide Gold with the ability to utilize data collected from user devices as feedback to train machine learning models, as suggested by Novielli (p. 3, [0050]; p. 11, [0209]).
In regards to claim 15, Gold and Hopkins, in particular Gold, do not explicitly teach:
a software application is a voice recognition application that uses the one or more neural networks.
However, Novielli teaches such use: (p. 4, [0063], see input is received in operation 306. As noted above, the input can be in any format such as text, voice, gesture, and so forth).
Gold, Hopkins and Novielli are analogous art because they are from the same field of endeavor, AI model updates.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Gold, Hopkins and Novielli before him or her, to modify the system of Gold and Hopkins, in particular Gold, to include the teachings of Novielli, a system for loading search results using machine learning. Doing so would enhance the system of Gold, which is focused on ensuring reproducibility in an artificial intelligence infrastructure, because it would provide Gold with the ability to utilize data collected from user devices as feedback to train machine learning models, as suggested by Novielli (p. 3, [0050]; p. 11, [0209]).
In regards to claim 16, Gold and Hopkins, in particular Gold, do not explicitly teach:
the one or more neural networks generates inferenced data that includes one or more audio-related inferences, the one or more audio-related inferences being at least one of a language translation or a voice recognized command.
However, Novielli teaches such use: (p. 4, [0063], see input is received in operation 306. As noted above, the input can be in any format such as text, voice, gesture, and so forth).
Gold, Hopkins and Novielli are analogous art because they are from the same field of endeavor, AI model updates.