DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1 – 20 are pending for examination.
Examiner’s Note
The prior art rejection below cites particular paragraphs, columns, and/or line numbers in the references for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1 – 20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
As to claim 1, the claim recites
A method, comprising:
accessing a first machine learning task through a research workspace, the research workspace comprising a plurality of virtualized computing resource units, the first machine learning task having a first data size;
executing the first machine learning task via a subset of the plurality of virtualized computing resource units;
associating the first machine learning task with the subset of the virtualized computing resource units used and an amount of execution time;
accessing a second machine learning task through a production workspace, the production workspace comprising a plurality of physical computing resource units, the second machine learning task having a second data size greater than the first data size, wherein the second machine learning task and the first machine learning task have a same algorithm; and
allocating, during an execution of the second machine learning task, a subset of the physical computing resource units to perform the execution of the second machine learning task, wherein the allocating is at least in part based on an association between the first machine learning task, the subset of the virtualized computing resource units used during an execution of the first machine learning task in the research workspace, and the amount of execution time during the execution of the first machine learning task in the research workspace.
Step 2A:
Prong 1: the limitations of “associating the first machine learning task with the subset of the virtualized computing resource units used and an amount of execution time,” “wherein the second machine learning task and the first machine learning task have a same algorithm,” and
“allocating, during an execution of the second machine learning task, a subset of the physical computing resource units to perform the execution of the second machine learning task, wherein the allocating is at least in part based on an association between the first machine learning task, the subset of the virtualized computing resource units used during an execution of the first machine learning task in the research workspace, and the amount of execution time during the execution of the first machine learning task in the research workspace” are all functions that can reasonably be performed in the human mind, with or without the aid of pen and paper, through observation, evaluation, judgment, and opinion.
Prong 2: the additional elements of “accessing a first machine learning task through a research workspace, the research workspace comprising a plurality of virtualized computing resource units, the first machine learning task having a first data size” and “accessing a second machine learning task through a production workspace, the production workspace comprising a plurality of physical computing resource units, the second machine learning task having a second data size greater than the first data size” merely recite data gathering, which the courts have held to be insignificant extra-solution activity (see MPEP 2106.05(g)).
The additional element of “executing the first machine learning task via a subset of the plurality of virtualized computing resource units” merely recites instructions to implement an abstract idea on a generic computer, or merely uses a generic computer or computer components as a tool to perform the abstract idea.
Thus, these additional elements do not integrate the judicial exception into a practical application.
Step 2B: the additional elements of “accessing a first machine learning task through a research workspace, the research workspace comprising a plurality of virtualized computing resource units, the first machine learning task having a first data size” and “accessing a second machine learning task through a production workspace, the production workspace comprising a plurality of physical computing resource units, the second machine learning task having a second data size greater than the first data size” merely recite insignificant extra-solution activity, such as gathering, displaying, updating, transmitting, and storing data, which does not amount to significantly more than the judicial exception. See MPEP 2106.05(d).
The additional element of “executing the first machine learning task via a subset of the plurality of virtualized computing resource units” merely recites instructions to implement an abstract idea on a generic computer, or merely uses a generic computer or computer components as a tool to perform the abstract idea.
Accordingly, the additional elements do not amount to significantly more than the abstract idea.
As to claim 2, the limitation “wherein each virtualized computing resource units corresponds to a portion of a physical hardware processor or a portion of a physical electronic memory” merely recites insignificant extra-solution activity, such as gathering, displaying, updating, transmitting, and storing data, which does not integrate the judicial exception into a practical application. See MPEP 2106.05(d).
As to claim 3, the limitation “wherein the physical computing resource units comprise computing resources in a decentralized environment” merely recites insignificant extra-solution activity, such as gathering, displaying, updating, transmitting, and storing data, which does not integrate the judicial exception into a practical application. See MPEP 2106.05(d).
As to claim 4, the limitations “wherein the first machine learning task is one of a plurality of machine learning tasks submitted to the research workspace, and wherein the method further comprises: filtering out duplicative ones of the machine learning tasks before submitting a rest of the machine learning tasks including the first machine learning task to the research workspace” are functions that can reasonably be carried out in the human mind, with or without the aid of pen and paper, through observation, evaluation, judgment, and opinion; thus, it is reasonable to identify these limitations as reciting a mental process.
As to claim 5, the limitations “wherein the allocating comprises: dividing each of the physical computing resource units into a plurality of blocks; and allocating one or more blocks from the subset of the physical computing resource units for the execution of the second machine learning task; and wherein the method further comprises monitoring, in the production workspace, which of the one or more blocks have been allocated and which other blocks of the plurality of blocks are idle” are functions that can reasonably be carried out in the human mind, with or without the aid of pen and paper, through observation, evaluation, judgment, and opinion; thus, it is reasonable to identify these limitations as reciting a mental process.
As to claim 6, the limitation “wherein the associating comprises recording, for the first machine learning task via an electronic table maintained within the research workspace, the subset of the virtualized computing resource units used and the amount of execution time for each individual virtualized computing resource unit” is a function that can reasonably be carried out in the human mind, with or without the aid of pen and paper, through observation, evaluation, judgment, and opinion; thus, it is reasonable to identify this limitation as reciting a mental process.
As to claim 7, the limitation “wherein the associating further comprises associating the first machine learning task with an idle rate for each of the virtualized computing resource units in the subset” is a function that can reasonably be carried out in the human mind, with or without the aid of pen and paper, through observation, evaluation, judgment, and opinion; thus, it is reasonable to identify this limitation as reciting a mental process.
As to claim 8, the limitation “before the accessing the second machine learning task through the production workspace, promoting the first machine learning task to be production-ready” is a function that can reasonably be carried out in the human mind, with or without the aid of pen and paper, through observation, evaluation, judgment, and opinion; thus, it is reasonable to identify this limitation as reciting a mental process.
As to claim 9, the limitation “wherein the allocating is performed at least in part using a scheduler software program within the production workspace” is a function that can reasonably be carried out in the human mind, with or without the aid of pen and paper, through observation, evaluation, judgment, and opinion; thus, it is reasonable to identify this limitation as reciting a mental process.
As to claim 10, the limitation “wherein the allocating is performed by extrapolating, based on a difference between the first data size and the second data size and further based on the subset of the virtualized computing resource units and the amount of execution time used during the execution of the first machine learning task in the research workspace, how much time or how much of the physical computing resource units are needed to complete the execution of the second machine learning task” is a function that can reasonably be carried out in the human mind, with or without the aid of pen and paper, through observation, evaluation, judgment, and opinion; thus, it is reasonable to identify this limitation as reciting a mental process.
As to claim 11, the limitations “wherein an amount of time needed to complete the execution of the second machine learning task is defined according to a Service-Level Agreement (SLA), and wherein the extrapolating further comprises calculating how much of the physical computing resource units are needed to complete the execution of the second machine learning task in order to meet the amount of time defined according to the SLA” are functions that can reasonably be carried out in the human mind, with or without the aid of pen and paper, through observation, evaluation, judgment, and opinion; thus, it is reasonable to identify these limitations as reciting a mental process.
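For illustration of why the extrapolation recited in claims 10–11 is characterized above as a mental process: the calculation amounts to a simple proportional estimate, of the kind sketched below. This sketch is not part of the claims or the cited art; all names and the linear-scaling assumption are hypothetical.

```python
import math

def estimate_production_allocation(research_units, research_time,
                                   research_data_size, production_data_size,
                                   sla_time_limit):
    """Extrapolate how many physical units a production run would need,
    given a research-workspace run, the data-size ratio, and an SLA time
    limit. Assumes work scales linearly with data size (a simplification)."""
    scale = production_data_size / research_data_size
    # Total work, in unit-hours, projected for the larger production data set.
    projected_unit_hours = research_units * research_time * scale
    # Units required so the scaled job finishes within the SLA time limit.
    return math.ceil(projected_unit_hours / sla_time_limit)
```

For example, a research run using 4 virtualized units for 2 hours on a 10 GB data set, scaled to 100 GB under an 8-hour SLA, yields 4 × 2 × 10 = 80 unit-hours, i.e., 10 physical units — arithmetic readily performed with pen and paper.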
As to claim 12. A system comprising:
a processor; and
a non-transitory computer-readable medium having stored thereon instructions that are executable by the processor to cause the system to perform operations comprising:
receiving, via a non-production workspace, a data analysis job;
executing the data analysis job in the non-production workspace via a plurality of virtualized computing resource units, the virtualized computing resource units each providing a fraction of processing power offered by physical computing resources that are located outside the non-production workspace;
recording statistics of an execution of the data analysis job in the non-production workspace;
sending the data analysis job to a production workspace based on a determination that the data analysis job is production-ready;
determining, based on the statistics recorded during the execution of the data analysis job in the non-production workspace, how the physical computing resources should be allocated to execute the data analysis job in the production workspace; and
allocating the physical computing resources based on the determining.
Step 2A:
Prong 1: the limitations of “the virtualized computing resource units each providing a fraction of processing power offered by physical computing resources that are located outside the non-production workspace,” “based on a determination that the data analysis job is production-ready,” and
“determining, based on the statistics recorded during the execution of the data analysis job in the non-production workspace, how the physical computing resources should be allocated to execute the data analysis job in the production workspace; and allocating the physical computing resources based on the determining” are all functions that can reasonably be performed in the human mind, with or without the aid of pen and paper, through observation, evaluation, judgment, and opinion.
Prong 2: the additional elements of “receiving, via a non-production workspace, a data analysis job,” “recording statistics of an execution of the data analysis job in the non-production workspace,” and “sending the data analysis job to a production workspace” merely recite insignificant extra-solution activity, such as gathering, displaying, updating, transmitting, and storing data, which does not integrate the judicial exception into a practical application. See MPEP 2106.05(d).
The additional element of “executing the data analysis job in the non-production workspace via a plurality of virtualized computing resource units” merely recites instructions to implement an abstract idea on a generic computer, or merely uses a generic computer or computer components as a tool to perform the abstract idea.
Thus, these additional elements do not integrate the judicial exception into a practical application.
Step 2B: the additional elements of “receiving, via a non-production workspace, a data analysis job,” “recording statistics of an execution of the data analysis job in the non-production workspace,” and “sending the data analysis job to a production workspace” merely recite insignificant extra-solution activity, such as gathering, displaying, updating, transmitting, and storing data, which does not amount to significantly more than the judicial exception. See MPEP 2106.05(d).
The additional element of “executing the data analysis job in the non-production workspace via a plurality of virtualized computing resource units” merely recites instructions to implement an abstract idea on a generic computer, or merely uses a generic computer or computer components as a tool to perform the abstract idea.
Accordingly, the additional elements do not amount to significantly more than the abstract idea.
As to claim 13, the limitation “wherein the statistics recorded comprise a data size, a total amount of execution time, a number of the virtualized computing resource units used, or an idle rate of each of the virtualized computing resource units” merely recites insignificant extra-solution activity, such as gathering, displaying, updating, transmitting, and storing data, which does not integrate the judicial exception into a practical application. See MPEP 2106.05(d).
As to claim 14, the limitations “the data analysis job executed in the non-production workspace comprises training a machine learning model with training data having a first data size” and “the data analysis job executed in the production workspace comprises training the machine learning model with training data having a second data size greater than the first data size” merely recite insignificant extra-solution activity, such as gathering, displaying, updating, transmitting, and storing data, which does not integrate the judicial exception into a practical application (see MPEP 2106.05(d)); and the limitation “the determining how the physical computing resources should be allocated is performed at least in part based on a ratio of the first data size and the second data size” is a function that can reasonably be carried out in the human mind, with or without the aid of pen and paper, through observation, evaluation, judgment, and opinion; thus, it is reasonable to identify this limitation as reciting a mental process.
As to claim 15, the limitation “wherein the operations further comprise receiving a time limit within which the data analysis job needs to be completed in the production workspace” merely recites insignificant extra-solution activity, such as gathering, displaying, updating, transmitting, and storing data, which does not integrate the judicial exception into a practical application (see MPEP 2106.05(d)); and the limitation “wherein the determining how the physical computing resources should be allocated is performed at least in part based on the time limit” is a function that can reasonably be carried out in the human mind, with or without the aid of pen and paper, through observation, evaluation, judgment, and opinion; thus, it is reasonable to identify this limitation as reciting a mental process.
As to claim 16, the limitation “wherein the physical computing resources are configured to perform edge computing” merely recites insignificant extra-solution activity, such as gathering, displaying, updating, transmitting, and storing data, which does not integrate the judicial exception into a practical application. See MPEP 2106.05(d).
As to claim 17, the limitation “the physical computing resources comprise a plurality of different Graphics Processing Unit (GPU) cards” merely recites insignificant extra-solution activity, such as gathering, displaying, updating, transmitting, and storing data, which does not integrate the judicial exception into a practical application (see MPEP 2106.05(d)); and the limitations “the determining how the physical computing resources should be allocated comprises: dividing each of the GPU cards into a plurality of blocks; and determining how each of the blocks should be allocated to execute the data analysis job in the production workspace” are functions that can reasonably be carried out in the human mind, with or without the aid of pen and paper, through observation, evaluation, judgment, and opinion; thus, it is reasonable to identify these limitations as reciting a mental process.
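For illustration of why the block division and allocation recited in claim 17 is characterized above as a mental process: the operation reduces to enumerating equal-sized blocks and picking a subset, as sketched below. This sketch is not part of the claims or the cited art; all names are hypothetical, and a simple first-fit choice is assumed.

```python
def allocate_gpu_blocks(num_cards, blocks_per_card, blocks_needed):
    """Divide each GPU card into equal blocks and allocate the requested
    number first-fit, tracking which blocks remain idle (illustrative only)."""
    # Enumerate every (card, block) pair across all cards.
    blocks = [(card, blk) for card in range(num_cards)
              for blk in range(blocks_per_card)]
    if blocks_needed > len(blocks):
        raise ValueError("not enough blocks available")
    allocated = blocks[:blocks_needed]   # blocks assigned to the job
    idle = blocks[blocks_needed:]        # blocks left unallocated
    return allocated, idle
```

Listing blocks on paper and checking off the first few needed is precisely the kind of evaluation a person could perform by hand.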
As to claim 18. A non-transitory machine-readable medium having stored thereon machine-readable instructions executable to cause a machine to perform operations comprising:
accessing a first version of a machine learning job in a non-production environment, the first version of the machine learning job having a first data size, the non-production environment comprising a plurality of virtualized computing resource units, wherein each of the virtualized computing resource units provides a fraction of computing power provided by a physical computing device, the fraction being less than 1;
executing the first version of the machine learning job in the non-production environment via a subset of the virtualized computing resource units;
extracting data from an execution of the first version of the machine learning job in the non-production environment, wherein the data extracted comprises a total amount of execution time, which subset of the virtualized computing resource units were used in the execution, or a utilization rate of each of the virtualized computing resource units of the subset during the execution;
promoting, based on a satisfaction of a predetermined condition, the first version of the machine learning job to a second version of the machine learning job that is production-ready;
accessing the second version of the machine learning job in a production environment that comprises a plurality of the physical computing devices, the second version of the machine learning job having a second data size that is greater than the first data size;
determining, based on a difference between the first data size and the second data size and further based on the data extracted from the execution of the first version of the machine learning job in the non-production environment, how the plurality of the physical computing devices should be allocated to execute the second version of the machine learning job in the production environment; and
allocating the plurality of the physical computing resources based on the determining.
Step 2A:
Prong 1: the limitations of “wherein each of the virtualized computing resource units provides a fraction of computing power provided by a physical computing device, the fraction being less than 1”; “promoting, based on a satisfaction of a predetermined condition, the first version of the machine learning job to a second version of the machine learning job that is production-ready”; and “determining, based on a difference between the first data size and the second data size and further based on the data extracted from the execution of the first version of the machine learning job in the non-production environment, how the plurality of the physical computing devices should be allocated to execute the second version of the machine learning job in the production environment; and
allocating the plurality of the physical computing resources based on the determining” are functions that can reasonably be carried out in the human mind, with or without the aid of pen and paper, through observation, evaluation, judgment, and opinion; thus, it is reasonable to identify these limitations as reciting a mental process.
Prong 2: the additional elements of “accessing a first version of a machine learning job in a non-production environment, the first version of the machine learning job having a first data size, the non-production environment comprising a plurality of virtualized computing resource units,” “extracting data from an execution of the first version of the machine learning job in the non-production environment, wherein the data extracted comprises a total amount of execution time, which subset of the virtualized computing resource units were used in the execution, or a utilization rate of each of the virtualized computing resource units of the subset during the execution,” and “accessing the second version of the machine learning job in a production environment that comprises a plurality of the physical computing devices, the second version of the machine learning job having a second data size that is greater than the first data size” merely recite insignificant extra-solution activity, such as gathering, displaying, updating, transmitting, and storing data, which does not integrate the judicial exception into a practical application. See MPEP 2106.05(d).
The additional elements of “a non-transitory machine-readable medium having stored thereon machine-readable instructions executable to cause a machine to perform operations” and “executing the first version of the machine learning job in the non-production environment via a subset of the virtualized computing resource units” merely recite instructions to implement an abstract idea on a generic computer, or merely use a generic computer or computer components as a tool to perform the abstract idea.
Thus, these additional elements do not integrate the judicial exception into a practical application.
Step 2B:
The additional elements of “accessing a first version of a machine learning job in a non-production environment, the first version of the machine learning job having a first data size, the non-production environment comprising a plurality of virtualized computing resource units,” “extracting data from an execution of the first version of the machine learning job in the non-production environment, wherein the data extracted comprises a total amount of execution time, which subset of the virtualized computing resource units were used in the execution, or a utilization rate of each of the virtualized computing resource units of the subset during the execution,” and “accessing the second version of the machine learning job in a production environment that comprises a plurality of the physical computing devices, the second version of the machine learning job having a second data size that is greater than the first data size” merely recite insignificant extra-solution activity, such as gathering, displaying, updating, transmitting, and storing data, which does not amount to significantly more than the judicial exception. See MPEP 2106.05(d).
The additional elements of “a non-transitory machine-readable medium having stored thereon machine-readable instructions executable to cause a machine to perform operations” and “executing the first version of the machine learning job in the non-production environment via a subset of the virtualized computing resource units” merely recite instructions to implement an abstract idea on a generic computer, or merely use a generic computer or computer components as a tool to perform the abstract idea.
Accordingly, the additional elements do not amount to significantly more than the abstract idea.
As to claim 19, the limitation “wherein the second version of the machine learning job has a specified time limit” merely recites insignificant extra-solution activity, such as gathering, displaying, updating, transmitting, and storing data, which does not integrate the judicial exception into a practical application (see MPEP 2106.05(d)); and the limitation “wherein the determining is performed at least in part based on a ratio between the specified time limit and the total amount of execution time” is a function that can reasonably be carried out in the human mind, with or without the aid of pen and paper, through observation, evaluation, judgment, and opinion; thus, it is reasonable to identify this limitation as reciting a mental process.
As to claim 20, the limitation “wherein the determining is performed at least in part by maximizing a utilization rate of each of the physical computing devices that has been allocated to perform the execution of the second version of the machine learning job in the production environment” is a function that can reasonably be carried out in the human mind, with or without the aid of pen and paper, through observation, evaluation, judgment, and opinion; thus, it is reasonable to identify this limitation as reciting a mental process.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1 – 3, 6, 8 – 10, 12, 14 – 15, 18, and 20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Fawcett et al. (US PUB 2022/0075664, hereinafter “Fawcett”).
As to claim 1, Fawcett teaches a method, comprising:
accessing a first machine learning task through a research workspace (“..receives historical processing element resource allocation information for the stream processing job...” element S255 of figure 4 and associated text, especially para. 0071), the research workspace comprising a plurality of virtualized computing resource units (“..The processing elements of the stream processing job may be individually or collectively located on respective computing units (such as nodes 10) of cloud computing environment 50, whether physical or virtual. For purposes of this disclosure, the “computing units” (or “units of computing”) can be any computing construct capable of containing processing elements of stream processing jobs and having computing resources (such as CPU cores and memory) allocated to it for the processing of those stream processing jobs. In some embodiments, the computing units are virtual machines...” para. 0068. Note: access the virtual resources), the first machine learning task having a first data size (“...The historical information may also include workload characterization information, such as tuple size,...” para. 0073) and (“...dynamically adjusting amounts of compute resources for stream processing operators based on several performance factors; (ii) spreading and dynamically adjusting CPU resources allocated at a stream processing job level across cloud containers; (iii) using machine learning to optimally budget compute resources (e.g., CPU, memory) for each container in a containerized environment; (iv) using machine learning models to determine resource adjustments to containers based on current workflow; (v) as circumstances change, adjusting resource spread using machine learning models, and further allowing for budgeted resource amounts to be changed based on business considerations; (vi) adjusting/scaling resources themselves to allow processes to utilize additional power or space, or to reduce the size of the resources being utilized....” para. 0085);
executing the first machine learning task via a subset of the plurality of virtualized computing resource units (“...the execution units of the stream processing job,...” para. 0067) and (“..using specific units of computing (Kubernetes pods) to execute processing elements of stream processing jobs...” para. 0070) and (“Processing proceeds to operation S275 (see FIG. 4), where job execution mod 375 (see FIG. 5) executes the stream processing job using the allocated resources...” para. 0082);
associating the first machine learning task with the subset of the virtualized computing resource units used and an amount of execution time (“..and/or the output streams for each processing element over a given period of time....” para. 0073);
accessing a second machine learning task through a production workspace (“..For purposes of this disclosure, the “computing units” (or “units of computing”) can be any computing construct capable of containing processing elements of stream processing jobs and having computing resources (such as CPU cores and memory) allocated to it for the processing of those stream processing jobs....” para. 0068. Note: stream processing jobs would comprise first and second jobs) and (“...execution of the stream processing job at operation S275....” para. 0072. Note: executing the stream job at operation S275 is in the production workspace) and (“...receives as output from the trained ML model a recommended allocation of resources for the processing elements of the stream processing job...” para. 0079. Note: the output is the second stream processing job, or the claimed second machine learning task), the production workspace comprising a plurality of physical computing resource units (“..In other cases, ML mod 365 may select the historical information that results in the best score for a given set of circumstances, and use just the selected historical information to train the ML model. For example, the historical information may include sets of CPU core adjustments for each processing element, and ML mod 365 may select the CPU core adjustments that result in the best score for each of a given set of circumstances, such as given sets of tuple queue utilization rates...” para. 0076), the second machine learning task having a second data size greater than the first data size (“.For example, if tuple queue utilization of the processing elements was used in operation S270 to determine the initial resource allocation for the processing elements of the stream processing job, and the tuple queue utilization of the processing elements changed over a period of time, operation S280 provides the new tuple queue utilizations as input to the ML model, and the resulting resource allocations (or resource allocation adjustments) received as output from the ML model are applied to the currently executing stream processing job to ensure optimal performance throughout its processing...” para. 0082 - 0083) and (“...In this embodiment, qualitative evaluator 804 evaluates the quality of the processing of stream processing application 600 using an evaluation score. For example, the evaluation score may be based, at least in part, on: (i) how fast tuples are processed (output tuple rate vs. input tuple rate); (ii) CPU usage efficiency (CPU request values vs. actual usage), (iii) how balanced the tuple queue utilization (TQU) rates of the processing elements are..” para. 0108. Note: the workload of the recommended stream processing job is larger since the trained processing rate is faster, so with the same time and resources, the system will process a larger workload of tuples), wherein the second machine learning task and the first machine learning task have a same algorithm (“Adjustment delta generator 808, also depicted in FIG. 8, is configured to generate trial/training CPU core adjustments for stream processing application 600. In some cases, adjustment delta generator 808 uses a random number generator or other virtually randomized method to generate the adjustments...” para. 0110, 0112. Note: the same delta is used and adjusted with a different number, as defined in the specification at para. 0013); and
allocating, during an execution of the second machine learning task, a subset of the physical computing resource units to perform the execution of the second machine learning task, wherein the allocating is at least in part based on an association between the first machine learning task, the subset of the virtualized computing resource units used during an execution of the first machine learning task in the research workspace, and the amount of execution time during the execution of the first machine learning task in the research workspace (“...allocating to the processing elements a second subset of the set of computing resources based, at least in part, on an allocation determined using the trained machine learning model....” abstract and para. 0007. Note: the trained machine learning model reflects the historical information, i.e., the first subset of resources) and (“.For example, if tuple queue utilization of the processing elements was used in operation S270 to determine the initial resource allocation for the processing elements of the stream processing job, and the tuple queue utilization of the processing elements changed over a period of time, operation S280 provides the new tuple queue utilizations as input to the ML model, and the resulting resource allocations (or resource allocation adjustments) received as output from the ML model are applied to the currently executing stream processing job to ensure optimal performance throughout its processing...” para. 0082 - 0083).
As to claim 2, Fawcett teaches The method of claim 1, wherein each virtualized computing resource units corresponds to a portion of a physical hardware processor or a portion of a physical electronic memory (“..Model trainer 812 trains ML model 814 accordingly. As such, in the future, when the TQU rates for Pods P#1, P#2, P#3, and P#4 are close to 80%, 46%, 05%, and 18%, respectively....” para. 0114).
As to claim 3, Fawcett teaches The method of claim 1, wherein the physical computing resource units comprise computing resources in a decentralized environment (“..When performing stream processing in distributed environments...” para. 0018).
As to claim 6, Fawcett teaches The method of claim 1, wherein the associating comprises recording, for the first machine learning task via an electronic table maintained within the research workspace, the subset of the virtualized computing resource units used and the amount of execution time for each individual virtualized computing resource unit (“..For example, the historical information may include how many CPU cores were allocated to each processing element, how much random access memory (RAM) was allocated to each processing element, a frequency of input tuples (or input tuple type) for each processing element, how many processor cycles it took for each processing element to process a given number of tuples, a size or utilization percentage of a tuple queue for each processing element, and/or the output streams for each processing element over a given period of time...” para. 0073).
As to claim 8, Fawcett teaches The method of claim 1, further comprising: before the accessing the second machine learning task through the production workspace, promoting the first machine learning task to be production-ready (“...For example, in some cases, resource allocation mod 370 retrieves setup parameters of the stream processing job, provides the setup parameters to the trained ML model, and then receives as output from the trained ML model a recommended allocation of resources for the processing elements of the stream processing job....” para. 0079).
As to claim 9, Fawcett teaches The method of claim 1, wherein the allocating is performed at least in part using a scheduler software program within the production workspace (“...scheduling rules” para. 0090).
As to claim 10, Fawcett teaches The method of claim 1, wherein the allocating is performed by extrapolating, based on a difference between the first data size and the second data size (“..Model trainer 812 trains ML model 814 accordingly. As such, in the future, when the TQU rates for Pods P#1, P#2, P#3, and P#4 are close to 80%, 46%, 05%, and 18%, respectively....” para. 0114) and further based on the subset of the virtualized computing resource units and the amount of execution time used during the execution of the first machine learning task in the research workspace, how much time or how much of the physical computing resource units are needed to complete the execution of the second machine learning task (“..(i) dynamically adjusting amounts of compute resources for stream processing operators based on several performance factors; (ii) spreading and dynamically adjusting CPU resources allocated at a stream processing job level across cloud containers; (iii) using machine learning to optimally budget compute resources (e.g., CPU, memory) for each container in a containerized environment; (iv) using machine learning models to determine resource adjustments to containers based on current workflow; (v) as circumstances change, adjusting resource spread using machine learning models, and further allowing for budgeted resource amounts to be changed based on business considerations; (vi) adjusting/scaling resources themselves to allow processes to utilize additional power or space, or to reduce the size of the resources being utilized; (vii) given a CPU budget for a stream processing job, dynamically adjusting the CPUs of containers of the job to improve overall tuple rates; and/or...” para. 0085).
As to claim 12, Fawcett teaches A system comprising:
a processor; and a non-transitory computer-readable medium having stored thereon instructions that are executable by the processor (“...The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor...” para. 0021) to cause the system to perform operations comprising:
receiving, via a non-production workspace, a data analysis job (“..receives historical processing element resource allocation information for the stream processing job...” element S255 of figure 4 and associated text, especially para. 0071);
executing the data analysis job in the non-production workspace via a plurality of virtualized computing resource units (“...the execution units of the stream processing job,...” para. 0067) and (“..The processing elements of the stream processing job may be individually or collectively located on respective computing units (such as nodes 10) of cloud computing environment 50, whether physical or virtual....” para. 0068) and (“...In other cases, for example, where the allocations of operation S270 take place at a setup phase...” para. 0082), the virtualized computing resource units each providing a fraction of processing power offered by physical computing resources that are located outside the non-production workspace (“..Model trainer 812 trains ML model 814 accordingly. As such, in the future, when the TQU rates for Pods P#1, P#2, P#3, and P#4 are close to 80%, 46%, 05%, and 18%, respectively....” para. 0114);
recording statistics of an execution of the data analysis job in the non-production workspace (“...Processing proceeds to operation S265 (see FIG. 4), where ML mod 365 (see FIG. 5) trains an ML model using the historical information and the corresponding scores generated by scoring mod 360. For example, in some cases, ML mod 365 may train the ML model, via backpropagation, by using the historical information as training input and the corresponding scores as training output. In other cases, ML mod 365 may select the historical information that results in the best score for a given set of circumstances, and use just the selected historical information to train the ML model. For example, the historical information may include sets of CPU core adjustments for each processing element, and ML mod 365 may select the CPU core adjustments that result in the best score for each of a given set of circumstances, such as given sets of tuple queue utilization rates...” para. 0071 - 0076);
sending the data analysis job to a production workspace based on a determination that the data analysis job is production-ready (“...For example, in some cases, resource allocation mod 370 retrieves setup parameters of the stream processing job, provides the setup parameters to the trained ML model, and then receives as output from the trained ML model a recommended allocation of resources for the processing elements of the stream processing job....” para. 0079);
determining, based on the statistics recorded during the execution of the data analysis job in the non-production workspace, how the physical computing resources should be allocated to execute the data analysis job in the production workspace (“...the allocating of the resources to the processing elements includes multiple phases: (i) a first phase that allocates a first subset of resources based on respective minimum resource requirements of the processing elements, and (ii) a second phase that allocates a second subset of resources based on the allocation determined using the trained machine learning model...” para. 0080); and
allocating the physical computing resources based on the determining (“Processing proceeds to operation S275 (see FIG. 4), where job execution mod 375 (see FIG. 5) executes the stream processing job using the allocated resources. In some cases, when the stream processing job is already executing, “executing” in this context simply means executing the stream processing job under the new resource allocations determined in operation S270...” para. 0082).
As to claim 14, Fawcett teaches The system of claim 12, wherein:
the data analysis job executed in the non-production workspace comprises training a machine learning model with training data having a first data size (“...(i) dynamically adjusting amounts of compute resources for stream processing operators based on several performance factors; (ii) spreading and dynamically adjusting CPU resources allocated at a stream processing job level across cloud containers; (iii) using machine learning to optimally budget compute resources (e.g., CPU, memory) for each container in a containerized environment; (iv) using machine learning models to determine resource adjustments to containers based on current workflow; (v) as circumstances change, adjusting resource spread using machine learning models, and further allowing for budgeted resource amounts to be changed based on business considerations; (vi) adjusting/scaling resources themselves to allow processes to utilize additional power or space, or to reduce the size of the resources being utilized....” para. 0085);
the data analysis job executed in the production workspace comprises training the machine learning model with training data having a second data size greater than the first data size (“..Model trainer 812 trains ML model 814 accordingly. As such, in the future, when the TQU rates for Pods P#1, P#2, P#3, and P#4 are close to 80%, 46%, 05%, and 18%, respectively....” para. 0114) and (“..tuple queue utilization (TQU) rates of the processing elements...” para. 0108); and
the determining how the physical computing resources should be allocated is performed at least in part based on a ratio of the first data size and the second data size (“..Model trainer 812 trains ML model 814 accordingly. As such, in the future, when the TQU rates for Pods P#1, P#2, P#3, and P#4 are close to 80%, 46%, 05%, and 18%, respectively....” para. 0114) and (“..tuple queue utilization (TQU) rates of the processing elements...” para. 0108).
As to claim 15, Fawcett teaches The system of claim 12, wherein the operations further comprise receiving a time limit within which the data analysis job needs to be completed in the production workspace, and wherein the determining how the physical computing resources should be allocated is performed at least in part based on the time limit (“...how many processor cycles it took for each processing element to process a given number of tuples, a size or utilization percentage of a tuple queue for each processing element, and/or the output streams for each processing element over a given period of time...” para. 0073).
As to claim 18, Fawcett teaches A non-transitory machine-readable medium having stored thereon machine-readable instructions executable to cause a machine to perform operations comprising:
accessing a first version of a machine learning job in a non-production environment (“..receives historical processing element resource allocation information for the stream processing job...” element S255 of figure 4 and associated text, especially para. 0071), the first version of the machine learning job having a first data size (“...For example, the historical information may include how many CPU cores were allocated to each processing element, how much random access memory (RAM) was allocated to each processing element, a frequency of input tuples (or input tuple type) for each processing element, how many processor cycles it took for each processing element to process a given number of tuples, a size or utilization percentage of a tuple queue for each processing element...” para. 0073), the non-production environment comprising a plurality of virtualized computing resource units (“..The processing elements of the stream processing job may be individually or collectively located on respective computing units (such as nodes 10) of cloud computing environment 50, whether physical or virtual. For purposes of this disclosure, the “computing units” (or “units of computing”) can be any computing construct capable of containing processing elements of stream processing jobs and having computing resources (such as CPU cores and memory) allocated to it for the processing of those stream processing jobs. In some embodiments, the computing units are virtual machines...” para. 0068. Note: can access the virtual resources), wherein each of the virtualized computing resource units provides a fraction of computing power provided by a physical computing device, the fraction being less than 1 (“..Model trainer 812 trains ML model 814 accordingly. As such, in the future, when the TQU rates for Pods P#1, P#2, P#3, and P#4 are close to 80%, 46%, 05%, and 18%, respectively....” para. 0114);
executing the first version of the machine learning job in the non-production environment via a subset of the virtualized computing resource units (“...the execution units of the stream processing job,...” para. 0067) and (“..The processing elements of the stream processing job may be individually or collectively located on respective computing units (such as nodes 10) of cloud computing environment 50, whether physical or virtual....” para. 0068. Note: can execute in virtual) and (“...In other cases, for example, where the allocations of operation S270 take place at a setup phase...” para. 0082);
extracting data from an execution of the first version of the machine learning job in the non-production environment, wherein the data extracted comprises a total amount of execution time (“..and/or the output streams for each processing element over a given period of time....” para. 0073), which subset of the virtualized computing resource units were used in the execution (“..The processing elements of the stream processing job may be individually or collectively located on respective computing units (such as nodes 10) of cloud computing environment 50, whether physical or virtual....” para. 0068. Note: can execute in virtual) and (“...In other cases, for example, where the allocations of operation S270 take place at a setup phase...” para. 0082), or a utilization rate of each of the virtualized computing resource units of the subset during the execution;
promoting, based on a satisfaction of a predetermined condition, the first version of the machine learning job to a second version of the machine learning job that is production-ready (“...For example, in some cases, resource allocation mod 370 retrieves setup parameters of the stream processing job, provides the setup parameters to the trained ML model, and then receives as output from the trained ML model a recommended allocation of resources for the processing elements of the stream processing job....” para. 0079);
accessing the second version of the machine learning job in a production environment that comprises a plurality of the physical computing devices, the second version of the machine learning job having a second data size that is greater than the first data size (“..Model trainer 812 trains ML model 814 accordingly. As such, in the future, when the TQU rates for Pods P#1, P#2, P#3, and P#4 are close to 80%, 46%, 05%, and 18%, respectively....” para. 0114) and (“..tuple queue utilization (TQU) rates of the processing elements...” para. 0108);
determining, based on a difference between the first data size and the second data size and further based on the data extracted from the execution of the first version of the machine learning job in the non-production environment, how the plurality of the physical computing devices should be allocated to execute the second version of the machine learning job in the production environment (“...(i) dynamically adjusting amounts of compute resources for stream processing operators based on several performance factors; (ii) spreading and dynamically adjusting CPU resources allocated at a stream processing job level across cloud containers; (iii) using machine learning to optimally budget compute resources (e.g., CPU, memory) for each container in a containerized environment; (iv) using machine learning models to determine resource adjustments to containers based on current workflow; (v) as circumstances change, adjusting resource spread using machine learning models, and further allowing for budgeted resource amounts to be changed based on business considerations; (vi) adjusting/scaling resources themselves to allow processes to utilize additional power or space, or to reduce the size of the resources being utilized;...” para. 0085) and (“...the allocating of the resources to the processing elements includes multiple phases: (i) a first phase that allocates a first subset of resources based on respective minimum resource requirements of the processing elements, and (ii) a second phase that allocates a second subset of resources based on the allocation determined using the trained machine learning model...” para. 0080); and
allocating the plurality of the physical computing resources based on the determining (“Processing proceeds to operation S275 (see FIG. 4), where job execution mod 375 (see FIG. 5) executes the stream processing job using the allocated resources. In some cases, when the stream processing job is already executing, “executing” in this context simply means executing the stream processing job under the new resource allocations determined in operation S270...” para. 0082).
As to claim 20, Fawcett teaches The non-transitory machine-readable medium of claim 18, wherein the determining is performed at least in part by maximizing a utilization rate of each of the physical computing devices that has been allocated to perform the execution of the second version of the machine learning job in the production environment (“..qualitative evaluator 804 evaluates the quality of the processing of stream processing application 600 using an evaluation score. For example, the evaluation score may be based, at least in part, on: (i) how fast tuples are processed (output tuple rate vs. input tuple rate); (ii) CPU usage efficiency (CPU request values vs. actual usage), (iii) how balanced the tuple queue utilization (TQU) rates of the processing elements are...” para. 0108) and (“...Tuple queue utilization 710, which can generally be represented as the number of tuples in tuple queue 702 divided by the size of tuple queue 702, is an indicator of how effectively the tuples are being processed. For example, a tuple queue utilization 710 of zero would mean that the tuples are being processed very effectively...” para. 0106).
As to claim 13, Fawcett teaches The system of claim 12, wherein the statistics recorded comprise a data size (“...the historical processing element resource allocation information (also referred to simply as the “historical information”) includes information pertaining to historical allocations of resources to processing elements of the stream processing job, how the processing elements performed under those allocations, and the outputs produced by the stream processing job using those allocations. For example, the historical information may include how many CPU cores were allocated to each processing element, how much random access memory (RAM) was allocated to each processing element, a frequency of input tuples (or input tuple type) for each processing element, how many processor cycles it took for each processing element to process a given number of tuples, a size or utilization percentage of a tuple queue for each processing element...” para. 0073), a total amount of execution time, a number of the virtualized computing resource units used, or an idle rate of each of the virtualized computing resource units (the examiner need only map one condition).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Fawcett et al., (US PUB 2022/0075664 hereinafter Fawcett) in view of Rincón et al., (US PUB 2021/0081838 hereinafter Rincón).
As to claim 4, Fawcett teaches The method of claim 1, wherein the first machine learning task is one of a plurality of machine learning tasks submitted to the research workspace, and wherein the method further comprises: Fawcett does not but Rincón teaches filtering out duplicative ones of the machine learning tasks before submitting a rest of the machine learning tasks including the first machine learning task to the research workspace (“..In one implementation, a first workflow 222 is an initial workflow that consolidates the corpus of data into a single location with any existing data, and a second workflow 224 is an incremental update workflow that eliminates duplicate entries within a plurality of entries comprising the corpus of data 214 and eliminates any entry having a timestamp older than a threshold date. The initial workflow can be conditioned on completion of the data extraction 212, and the incremental update workflow can be conditioned on completion of the initial workflow...” para. ) and (“...At 608, an incremental update workflow is applied to the merged corpus of data. In the illustrated method, the corpus of data comprises a plurality of entries, each having an associated time stamp, and the incremental workflow comprises a sequence of atomic functions selected from the library of atomic functions that can be executed to eliminate duplicate entries from the plurality of entries and eliminates any entry having a timestamp older than a threshold date...” para. 0044).
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention was made to modify Fawcett by applying the teaching of Rincón because Rincón would eliminate duplicate data to continuously update and provide accurate training data to the machine learning model (para. 0044).
Claims 5 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over Fawcett et al., (US PUB 2022/0075664 hereinafter Fawcett) in view of XIA et al., (US PUB 2022/0091894 hereinafter XIA).
As to claim 5, Fawcett teaches The method of claim 1, wherein the allocating comprises:
dividing each of the physical computing resource units into a plurality of blocks (“..the stream processing systems may statically determine how many CPU cores to allocate to each pod (for example, if 4 pods and 10 CPU cores, the system would allocate 2.5 CPU cores to each pod...” para. 0092); and
allocating one or more blocks from the subset of the physical computing resource units for the execution of the second machine learning task (“..leverage machine learning to optimize job results by managing how CPU usage is dynamically spread across pods to serve each pod's unique processing demands...” para. 0098);
Fawcett does not but XIA teaches
and wherein the method further comprises monitoring, in the production workspace, which of the one or more blocks have been allocated and which other blocks of the plurality of blocks are idle (“... container may be longer than that of the first container, and the third container may be in the working state. The first container may correspond to a VGPU resource of a volume of 4 GB, and the second container may correspond to a VGPU resource of a volume of 2 GB. In some embodiments, the container management module 410 may select, according to an order (e.g., ascending order, descending order) of the sequence numbers of the two or more containers in the idle state (e.g., the first container and the second container)...” para. 0094) and (“..One or more containers in the idle state may be identified and designated as the target containers. Each of the one or more target containers may retrieve and process a target task from a message queue that includes at least one task. The target containers occupying specific volumes of the VGPU resource(s) may retrieve and process the at least one task in a message queue, thus enhancing a utilization rate of the VGPU resources and improving the efficiency of processing the at least one task.” Para. 0101).
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention was made to modify Fawcett by applying the teachings of XIA because XIA identifies when virtual graphics processing unit (VGPU) resources are idle so that they can be allocated, enhancing the utilization rate of the VGPU resources and improving the efficiency of processing tasks (para. 0101).
As to claim 7, Fawcett teaches The method of claim 1, Fawcett does not but XIA teaches wherein the associating further comprises associating the first machine learning task with an idle rate for each of the virtualized computing resource units in the subset (“... container may be longer than that of the first container, and the third container may be in the working state. The first container may correspond to a VGPU resource of a volume of 4 GB, and the second container may correspond to a VGPU resource of a volume of 2 GB. In some embodiments, the container management module 410 may select, according to an order (e.g., ascending order, descending order) of the sequence numbers of the two or more containers in the idle state (e.g., the first container and the second container)...” para. 0094) and (“..One or more containers in the idle state may be identified and designated as the target containers. Each of the one or more target containers may retrieve and process a target task from a message queue that includes at least one task. The target containers occupying specific volumes of the VGPU resource(s) may retrieve and process the at least one task in a message queue, thus enhancing a utilization rate of the VGPU resources and improving the efficiency of processing the at least one task.” Para. 0101).
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention was made to modify Fawcett by applying the teachings of XIA because XIA identifies when virtual graphics processing unit (VGPU) resources are idle so that they can be allocated, enhancing the utilization rate of the VGPU resources and improving the efficiency of processing tasks (para. 0101).
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Fawcett et al., (US PUB 2022/0075664 hereinafter Fawcett) in view of Bahramshahry et al., (US PUB 2020/0026569 hereinafter Bahramshahry).
As to claim 11, Fawcett teaches The method of claim 10, Fawcett does not but Bahramshahry teaches wherein an amount of time needed to complete the execution of the second machine learning task is defined according to a Service-Level Agreement (SLA), and wherein the extrapolating further comprises calculating how much of the physical computing resource units are needed to complete the execution of the second machine learning task in order to meet the amount of time defined according to the SLA (“...therefore, as a deadline or a required time to complete the pending workload task approaches, in accordance with a QoS or SLA/SLT, etc.,...” para. 0054).
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention was made to modify Fawcett by applying the teachings of Bahramshahry because Bahramshahry’s SLA would ensure that tasks are completed by the defined deadline, thereby optimizing the system (para. 0054).
Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Fawcett et al., (US PUB 2022/0075664 hereinafter Fawcett) in view of ANANTHANARAYANAN et al., (US PUB 2022/0188569 hereinafter ANANTHANARAYANAN).
As to claim 16, Fawcett teaches The system of claim 12, Fawcett does not but ANANTHANARAYANAN teaches wherein the physical computing resources are configured to perform edge computing (“..allocate the edge server's GPU resources among the retraining and inference jobs,...” para. 0038).
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention was made to modify Fawcett by applying the teachings of ANANTHANARAYANAN because ANANTHANARAYANAN teaches the same field of allocating computing resources for training (title, abstract), and further teaches training on edge computing devices, which commonly use GPU resources known for parallel processing to provide fast services (para. 0038).
Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Fawcett et al., (US PUB 2022/0075664 hereinafter Fawcett) in view of Sudharsanan et al., (US PUB 2015/0212890 hereinafter Sudharsanan).
As to claim 17, Fawcett teaches The system of claim 12, wherein: Fawcett does not but Sudharsanan teaches
the physical computing resources comprise a plurality of different Graphics Processing Unit (GPU) cards (“...graphics card 110-1 and graphics card 110-2....” para. 0018); and
the determining how the physical computing resources should be allocated comprises:
dividing each of the GPU cards into a plurality of blocks (“...In other embodiments, each frame is divided into tiles such that respective portions rendered by graphics card 110-1 and graphics card 110-2 are interleaved in the final image....” para. 0018); and
determining how each of the blocks should be allocated to execute the data analysis job in the production workspace (“...In certain embodiments, the load is distributed by dividing the frames of a scene to be rendered. The division can be horizontal, creating top and bottom portions; the division can be vertical, creating left and right portions. In other embodiments, each frame is divided into tiles such that respective portions rendered by graphics card 110-1 and graphics card 110-2 are interleaved in the final image. In alternate embodiments, the processing load is distributed by allocating portions of the graphics rendering pipeline among the multiple graphics processing subsystems....” para. 0018).
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention was made to modify Fawcett by applying the teachings of Sudharsanan because Sudharsanan’s dividing technique would allow different portions of a graphics card to access and execute different resources (para. 0018).
Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Fawcett et al., (US PUB 2022/0075664 hereinafter Fawcett) in view of Sivaraman et al., (US PUB 2021/0216375 hereinafter Sivaraman).
As to claim 19, Fawcett teaches The non-transitory machine-readable medium of claim 18. Fawcett does not teach, but Sivaraman teaches, wherein the second version of the machine learning job has a specified time limit (“...the output streams for each processing element over a given period of time...” para. 0073).
Fawcett does not teach, but Sivaraman teaches, wherein the determining is performed at least in part based on a ratio between the specified time limit and the total amount of execution time (“...Further, the different GPU scheduling policies can handle time slices differently as explained earlier. In other words, a fixed-share scheduling policy for a GPU 115 can have a different value for this ratio than a best-effort scheduling policy because many time-slices could be left idle with fixed-share scheduling as compared to best-effort scheduling. If a workload 118 is suspended and subsequently resumed, the clock cycles spent during the suspend and the resume operations can be included in the term, J.sub.r.sup.i. The time between completion of suspend and the start of resume is spent in the waiting queue and can be included in the term J.sub.w.sup.i...” para. 0024).
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention was made to modify Fawcett by applying the teachings of Sivaraman because Sivaraman’s ratio would indicate the execution time of each vGPU, as a processing element, for a particular job, allowing a metric for each job and/or each vGPU to be compared and determined (para. 0024).
Conclusion
The prior art made of record and not relied upon is considered to be pertinent to applicant’s disclosure.
He, (US PUB 2021/0208951), discloses a method of sharing GPU card by virtualizing into multiple virtual GPUs (title, abstract and figures 1 – 6).
Yu, (US PUB 2022/0217792), discloses an edge computing server connected with an industrial 5G base station for training a neural network model (title, abstract and figures 1 – 3).
Liu, (US PUB 2022/0092439), discloses a dynamic inference deep learning framework on an edge computing platform which accepts any models from any frameworks to be deployed on any target devices (e.g., CPU, GPU, proprietary accelerators such as FPGA, ASICs, etc.), and dynamically changes the scheduling of the parallelism (title, abstract and figures 1 – 6).
Ti et al., (CN 202210538068.X), discloses a method for allocating computing resources to machine learning tasks for training neural network models (title, pages 1 – 40).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PHUONG N HOANG whose telephone number is (571)272-3763. The examiner can normally be reached 9:00 - 5:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, KEVIN YOUNG can be reached at 571-270-3180. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PHUONG N HOANG/
Examiner, Art Unit 2194

/KEVIN L YOUNG/
Supervisory Patent Examiner, Art Unit 2194