DETAILED ACTION
This action is in response to the application filed 08/19/2022. Claims 1-20 are pending and have been examined.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgment is made of applicant's claim for foreign priority based on an application filed in the People’s Republic of China on 8/25/2021. It is noted, however, that applicant has not filed a certified copy of application CN202110981600.0 as required by 37 CFR 1.55.
Should applicant desire to obtain the benefit of foreign priority under 35 U.S.C. 119(a)-(d) prior to declaration of an interference, a certified English translation of the foreign application must be submitted in reply to this action. 37 CFR 41.154(b) and 41.202(e).
Failure to provide a certified translation may result in no benefit being accorded for the non-English application.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 4/21/2023 and 9/25/2023 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Specification
The abstract of the disclosure is objected to because it discloses “deploying a target mapping relation comprised in the target combination and the K network models on a prediction machine” in its last limitation. The phrase “comprised in” is grammatically improper. A corrected abstract of the disclosure is required and must be presented on a separate sheet, apart from any other text. See MPEP § 608.01(b).
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 1 recites “deploying a target mapping relation comprised in the target combination and the K network models on a prediction machine” in its last limitation. This is grammatically incorrect, and it is unclear what “comprised in” means in this context. As the scope of the limitation cannot be determined, this ambiguity renders the scope of the claim indefinite. This limitation is interpreted as deploying a mapping between the K network models and the target combination on a prediction machine. Similar reasoning applies to claims 13 and 19, which recite this limitation in their preambles, and to substantially similar independent claim 14. This deficiency is inherited by dependent claims 2-13 and 15-19, as well as claim 20, which incorporates the method of claim 1 in its entirety.
Claim 10 recites “a second standard deviation” and “a second sum” in its first and second limitations, respectively. There is no recitation of a first standard deviation or a first sum in either claim 10 or its parent claims. Thus, it is unclear what these terms are relative to, and the scope of claim 10 is rendered indefinite. “A second standard deviation” is being interpreted as “a standard deviation”, and “a second sum” is being interpreted as “a sum”.
Claim 11 recites “obtaining a task operation accuracy of each first task executed on the assigned target network model” in its first limitation. It is unclear whether “each first task” refers specifically to tasks present in the candidate combination or to every task from the first task set. Additionally, there is no antecedent basis for “the assigned target network model”. Thus, the scope of the claim is rendered indefinite. Similar reasoning applies to substantially similar claim 18. “Each first task” is interpreted as referring to tasks present in the candidate combination, and not necessarily every task in the first task set.
Claim 12 recites “The method of claim 1, wherein obtaining the combination operation accuracy of the candidate combination based on the task operation accuracy of each first task in the candidate combination, comprises” in its preamble. Claim 1 does not have a limitation of “obtaining the combination operation accuracy of the candidate combination based on the task operation accuracy of each first task in the candidate combination”, nor does it contain any mention of task operation accuracy. Thus, there is no antecedent basis for these terms in the claim, and the scope of the claim is rendered indefinite. Claim 12 is interpreted as depending on claim 11, not claim 1.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (an abstract idea) without significantly more.
Claim 1
Step 1: The claim recites “A multi-task deployment method”, and is therefore directed to the statutory category of process
Step 2A Prong 1: The claim recites the following judicial exception(s)
selecting a target combination with a maximum combination operation accuracy from the at least one candidate combination: This can be performed as a mental process. One can merely observe the target combination with maximum accuracy.
Step 2A Prong 2: The judicial exception(s) are not integrated into a practical application through the following additional element(s)
obtaining N first tasks and K network models, wherein N and K are positive integers greater than or equal to 1: This amounts to mere reception of data and is insignificant extra-solution activity (MPEP 2106.05(g)).
allocating the N first tasks to the K network models for operation, to obtain at least one candidate combination of tasks and network models, wherein each candidate combination comprises a mapping relation between the N first tasks and the K network models: This is mere instruction to obtain at least one candidate combination based on tasks and models in a generic manner (MPEP 2106.05(f)).
deploying a target mapping relation comprised in the target combination and the K network models on a prediction machine: This is mere instruction to deploy a target mapping relation on a generic computer (MPEP 2106.05(f)).
Step 2B: The following additional element(s) of the claim, taken alone or in combination, do not amount to significantly more than the recited judicial exception(s)
obtaining N first tasks and K network models, wherein N and K are positive integers greater than or equal to 1: This is an instance of retrieving information from memory, a limitation known to be well-understood, routine, and conventional (MPEP 2106.05(d) II.)
allocating the N first tasks to the K network models for operation, to obtain at least one candidate combination of tasks and network models, wherein each candidate combination comprises a mapping relation between the N first tasks and the K network models: This is mere instruction to obtain at least one candidate combination based on tasks and models in a generic manner (MPEP 2106.05(f)).
deploying a target mapping relation comprised in the target combination and the K network models on a prediction machine: This is mere instruction to deploy a target mapping relation on a generic computer (MPEP 2106.05(f)).
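Examiner’s note: For illustration of the interpretation applied above only, the following is a minimal sketch, in Python, of claim 1 as construed (enumerating mappings of the N first tasks onto the K network models, scoring each candidate combination, and selecting the maximum-accuracy combination, whose mapping relation would then be deployed). All names and the accuracy function are hypothetical and are not drawn from the claims or the cited art.

    import itertools

    def accuracy(task, model):
        # Hypothetical stand-in for a measured task operation accuracy.
        return (hash((task, model)) % 100) / 100.0

    def select_target_combination(n_tasks, k_models):
        tasks, models = range(n_tasks), range(k_models)
        best_mapping, best_score = None, float("-inf")
        # Each candidate combination is one mapping of the N first tasks onto the K models.
        for assignment in itertools.product(models, repeat=n_tasks):
            mapping = dict(zip(tasks, assignment))
            score = sum(accuracy(t, m) for t, m in mapping.items())
            if score > best_score:
                best_mapping, best_score = mapping, score
        return best_mapping  # the target mapping relation to deploy

    print(select_target_combination(3, 2))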
Claim 2
Step 1: The claim recites a process, as in claim 1
Step 2A Prong 1: The claim recites the following further judicial exception(s)
in response to the time consumption of the alternative combination satisfying a schedulable constraint parameter, determining the alternative combination as the candidate combination: This can be performed as a mental process. One can merely observe an alternative combination satisfying a schedulable constraint parameter.
Step 2A Prong 2: The judicial exception(s) are not integrated into a practical application through the further additional element(s)
in response to the N first tasks being allocated, obtaining a time consumption of task execution of an alternative combination of the tasks and the network models obtained by allocating of the N first tasks: This amounts to mere reception of data and is insignificant extra-solution activity (MPEP 2106.05(g)).
Step 2B: The further additional element(s) of the claim, taken alone or in combination, do not amount to significantly more than the recited judicial exception(s)
in response to the N first tasks being allocated, obtaining a time consumption of task execution of an alternative combination of the tasks and the network models obtained by allocating of the N first tasks: This is an instance of retrieving information from memory, a limitation known to be well-understood, routine, and conventional (MPEP 2106.05(d) II.)
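Examiner’s note: For illustration only, a minimal Python sketch of claim 2 as construed (an alternative combination becomes a candidate combination only if its time consumption satisfies the schedulable constraint parameter). The WCET table and threshold below are hypothetical.

    def time_consumption(mapping, wcet):
        # Hypothetical: total worst-case execution time of one alternative combination.
        return sum(wcet[(task, model)] for task, model in mapping.items())

    def candidate_combinations(alternatives, wcet, constraint):
        # Keep only the alternative combinations whose time consumption is schedulable.
        return [alt for alt in alternatives if time_consumption(alt, wcet) <= constraint]

    wcet = {(0, 0): 3.0, (0, 1): 5.0, (1, 0): 2.0, (1, 1): 1.0}
    alternatives = [{0: 0, 1: 1}, {0: 1, 1: 1}]
    print(candidate_combinations(alternatives, wcet, 5.0))  # keeps {0: 0, 1: 1} only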
Claim 3
Step 1: The claim recites a process, as in claim 2
Step 2A Prong 1: The claim recites no further judicial exception(s)
Step 2A Prong 2: The judicial exception(s) are not integrated into a practical application through the further additional element(s)
in response to the time consumption of the alternative combination not satisfying the schedulable constraint parameter, discarding the alternative combination and obtaining a next alternative combination: This is a routine limitation for a neural architecture search and is insignificant extra-solution activity (MPEP 2106.05(g)).
Step 2B: The further additional element(s) of the claim, taken alone or in combination, do not amount to significantly more than the recited judicial exception(s)
in response to the time consumption of the alternative combination not satisfying the schedulable constraint parameter, discarding the alternative combination and obtaining a next alternative combination: This is an instance of searching through candidate models based on a performance metric, a widely used technique and boilerplate function of neural architecture searches, as noted by Nagaraja (SYSTEM AND METHOD OF CREATING ARTIFICIAL INTELLIGENCE MODEL, MACHINE LEARNING MODEL OR QUANTUM MODEL GENERATION FRAMEWORK, filed 9/18/2020, US 2021/0334700 A1): “Neural architecture search (NAS) is a technique for automating the design of artificial neural networks (ANN), a widely used model in the field of machine learning … NAS finds an architecture from all possible architectures by following a search strategy that will maximize the performance and typically includes three dimensions a) a search space, b) a search strategy and c) a performance estimation. The search space is an architecture pattern that is typically designed by an NAS approach. The search strategy is something that depends upon the search methods used to define a NAS approach, for example a Bayesian optimization or a reinforcement learning. The search strategy accounts for the time taken to build a model. The performance estimation is the convergence of certain performance metrics expected out of a NAS produced neural architecture model” (Nagaraja, [0003]).
Claim 4
Step 1: The claim recites a process, as in claim 2
Step 2A Prong 1: The claim recites the following further judicial exception(s)
determining a total number of iterations based on N and K: This can be performed as a mental process. One can merely calculate the maximum number of combinations of models.
Step 2A Prong 2: The judicial exception(s) are not integrated into a practical application through the further additional element(s)
in response to the total number of iterations being greater than an iteration number threshold, searching for a next alternative combination through a Particle Swarm Optimization (PSO) algorithm based on a combination operation accuracy of the alternative combination: This is mere instruction to find a combination with PSO based on combination operation accuracy and in response to an iteration threshold in a generic manner (MPEP 2106.05(f)).
Step 2B: The further additional element(s) of the claim, taken alone or in combination, do not amount to significantly more than the recited judicial exception(s)
in response to the total number of iterations being greater than an iteration number threshold, searching for a next alternative combination through a Particle Swarm Optimization (PSO) algorithm based on a combination operation accuracy of the alternative combination: This is mere instruction to find a combination with PSO based on combination operation accuracy and in response to an iteration threshold in a generic manner (MPEP 2106.05(f)).
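Examiner’s note: For illustration only, a minimal Python sketch of a Particle Swarm Optimization search over task-to-model assignments, as recited in claim 4. This is a generic textbook PSO (particles encode assignments as real vectors and are decoded by rounding), not the applicant’s algorithm; all names and coefficients are hypothetical.

    import random

    def pso_search(n_tasks, k_models, accuracy_fn, n_particles=8, iters=40):
        # Decode a real-valued particle into a task-to-model assignment.
        decode = lambda p: [int(round(x)) % k_models for x in p]
        pos = [[random.uniform(0, k_models - 1) for _ in range(n_tasks)]
               for _ in range(n_particles)]
        vel = [[0.0] * n_tasks for _ in range(n_particles)]
        pbest = [p[:] for p in pos]
        pbest_val = [accuracy_fn(decode(p)) for p in pos]
        g = max(range(n_particles), key=pbest_val.__getitem__)
        gbest, gbest_val = pbest[g][:], pbest_val[g]
        for _ in range(iters):
            for i in range(n_particles):
                for d in range(n_tasks):
                    r1, r2 = random.random(), random.random()
                    # Standard PSO velocity update: inertia + cognitive + social terms.
                    vel[i][d] = (0.7 * vel[i][d]
                                 + 1.4 * r1 * (pbest[i][d] - pos[i][d])
                                 + 1.4 * r2 * (gbest[d] - pos[i][d]))
                    pos[i][d] += vel[i][d]
                val = accuracy_fn(decode(pos[i]))
                if val > pbest_val[i]:
                    pbest[i], pbest_val[i] = pos[i][:], val
                    if val > gbest_val:
                        gbest, gbest_val = pos[i][:], val
        return decode(gbest), gbest_val

    # Hypothetical combination operation accuracy to maximize.
    print(pso_search(4, 3, lambda assign: -abs(sum(assign) - 6)))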
Claim 5
Step 1: The claim recites a process, as in claim 2
Step 2A Prong 1: The claim recites the following further judicial exception(s)
obtaining the time consumption of the alternative combination based on the present WCET of each first task and a present task processing cycle: This can be performed as a mental process. One can merely sum the WCETs across all first tasks and divide the total by the task processing cycle to obtain a relative time consumption of the alternative combination.
Step 2A Prong 2: The judicial exception(s) are not integrated into a practical application through the further additional element(s)
obtaining a present Worst Case Execution Time (WCET) of each first task of the N first tasks in the alternative combination when the first task is executed on an assigned target network model: This amounts to mere reception of data and is insignificant extra-solution activity (MPEP 2106.05(g)).
Step 2B: The further additional element(s) of the claim, taken alone or in combination, do not amount to significantly more than the recited judicial exception(s)
obtaining a present Worst Case Execution Time (WCET) of each first task of the N first tasks in the alternative combination when the first task is executed on an assigned target network model: This is an instance of retrieving information from memory, a limitation known to be well-understood, routine, and conventional (MPEP 2106.05(d) II.)
Claim 6
Step 1: The claim recites a process, as in claim 5
Step 2A Prong 1: The claim recites the following further judicial exception(s)
obtaining a total WCET of the alternative combination based on the present WCET of each first task: This can be performed as a mental process. One can simply sum the present WCET across all first tasks for the alternative combination.
obtaining the time consumption of the alternative combination based on the total WCET of the alternative combination and the present task processing cycle: This can be performed as a mental process. One can merely divide the total WCET by the present task processing cycle to get a relative measure of time consumption for the alternative combination.
Step 2A Prong 2: The claim recites no further additional element(s); the judicial exception(s) are therefore not integrated into a practical application
Step 2B: The claim recites no further additional element(s) that, taken alone or in combination, amount to significantly more than the recited judicial exception(s)
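Examiner’s note: For illustration only, a minimal Python sketch of the interpretation applied to claims 5-6 (time consumption as the total WCET of the alternative combination divided by the present task processing cycle). The numbers are hypothetical.

    def combination_time_consumption(per_task_wcets, task_processing_cycle):
        # Total WCET of the alternative combination over the processing cycle.
        return sum(per_task_wcets) / task_processing_cycle

    print(combination_time_consumption([3.0, 5.0, 2.0], 20.0))  # 0.5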
Claim 7
Step 1: The claim recites a process, as in claim 6
Step 2A Prong 1: The claim recites the following further judicial exception(s)
obtaining an average WCET of the first task on the target network model based on the plurality of historical WCETs and the present WCET: This can be performed as a mental process. One can merely calculate an average of all the historical and present WCETs.
obtaining the total WCET of the alternative combination based on the average WCET of each first task: This can be performed as a mental process. One can merely assign the average WCET as the total WCET.
Step 2A Prong 2: The judicial exception(s) are not integrated into a practical application through the further additional element(s)
for each first task, obtaining a plurality of historical WCETs of the target network model corresponding to the first task: This amounts to mere reception of data and is insignificant extra-solution activity (MPEP 2106.05(g)).
Step 2B: The further additional element(s) of the claim, taken alone or in combination, do not amount to significantly more than the recited judicial exception(s)
for each first task, obtaining a plurality of historical WCETs of the target network model corresponding to the first task: This is an instance of retrieving information from memory, a limitation known to be well-understood, routine, and conventional (MPEP 2106.05(d) II.)
Claim 8
Step 1: The claim recites a process, as in claim 7
Step 2A Prong 1: The claim recites the following further judicial exception(s)
obtaining a first standard deviation of the plurality of historical WCETs and the present WCET: This can be performed as a mental process. One can merely calculate the standard deviation of the historical and present WCETs.
obtaining a first sum value of the average WCET and the first standard deviation: This can be performed as a mental process. One can merely add the average WCET and the first standard deviation to obtain the first sum value.
obtaining the total WCET of the alternative combination by summing the first sum value of each first task in the alternative combination: This can be performed as a mental process. One can merely add the first sum value across all first tasks for the alternative combination.
Step 2A Prong 2: The claim recites no further additional element(s); the judicial exception(s) are therefore not integrated into a practical application
Step 2B: The claim recites no further additional element(s) that, taken alone or in combination, amount to significantly more than the recited judicial exception(s)
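Examiner’s note: For illustration only, a minimal Python sketch of the interpretation applied to claims 7-8 (per task, an average of the historical and present WCETs plus one standard deviation of the same samples; the total WCET is the sum of these first sum values). The sample data are hypothetical.

    from statistics import mean, stdev

    def first_sum_value(historical_wcets, present_wcet):
        # Average WCET plus the first standard deviation of the samples.
        samples = historical_wcets + [present_wcet]
        return mean(samples) + stdev(samples)

    def total_wcet(per_task_samples):
        # Sum the first sum value across all first tasks in the alternative combination.
        return sum(first_sum_value(hist, present) for hist, present in per_task_samples)

    print(total_wcet([([3.0, 3.2, 2.9], 3.1), ([5.0, 5.4], 5.2)]))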
Claim 9
Step 1: The claim recites a process, as in claim 6
Step 2A Prong 1: The claim recites the following further judicial exception(s)
obtaining an average task processing cycle based on the plurality of historical task processing cycles and the present task processing cycle: This can be performed as a mental process. One can merely calculate an average of the historical and present task processing cycle.
determining the time consumption of the alternative combination based on the total WCET and the average task processing cycle: This can be performed as a mental process. One can merely divide the total WCET by the average task processing cycle to determine a relative time consumption of the alternative combination.
Step 2A Prong 2: The judicial exception(s) are not integrated into a practical application through the further additional element(s)
obtaining a plurality of historical task processing cycles: This amounts to mere reception of data and is insignificant extra-solution activity (MPEP 2106.05(g)).
Step 2B: The further additional element(s) of the claim, taken alone or in combination, do not amount to significantly more than the recited judicial exception(s)
obtaining a plurality of historical task processing cycles: This is an instance of retrieving information from memory, a limitation known to be well-understood, routine, and conventional (MPEP 2106.05(d) II.)
Claim 10
Step 1: The claim recites a process, as in claim 9
Step 2A Prong 1: The claim recites the following further judicial exception(s)
obtaining a second standard deviation of the plurality of historical task processing cycles and the present task processing cycle: This can be performed as a mental process. One can merely calculate the standard deviation of the historical and present task processing cycles.
obtaining a second sum value of the average task processing cycle and the second standard deviation: This can be performed as a mental process. One can merely sum the average task processing cycle value and the second standard deviation.
obtaining a ratio of the total WCET to the second sum value as the time consumption of the alternative combination: This can be performed as a mental process. One can merely divide the total WCET by the second sum value.
Step 2A Prong 2: The claim recites no further additional element(s); the judicial exception(s) are therefore not integrated into a practical application
Step 2B: The claim recites no further additional element(s) that, taken alone or in combination, amount to significantly more than the recited judicial exception(s)
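Examiner’s note: For illustration only, a minimal Python sketch of the interpretation applied to claims 9-10 (the second sum value is the average of the historical and present task processing cycles plus one standard deviation; the time consumption is the ratio of the total WCET to that second sum value). The numbers are hypothetical.

    from statistics import mean, stdev

    def time_consumption(total_wcet, historical_cycles, present_cycle):
        cycles = historical_cycles + [present_cycle]
        second_sum_value = mean(cycles) + stdev(cycles)
        return total_wcet / second_sum_value

    print(time_consumption(10.0, [20.0, 22.0, 19.0], 21.0))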
Claim 11
Step 1: The claim recites a process, as in claim 1
Step 2A Prong 1: The claim recites the following further judicial exception(s)
obtaining a combination operation accuracy of the candidate combination based on the task operation accuracy of each first task in the candidate combination: This can be performed as a mental process. One can merely sum the task operation accuracies across all first tasks of the candidate combination.
Step 2A Prong 2: The judicial exception(s) are not integrated into a practical application through the further additional element(s)
for each candidate combination, obtaining a task operation accuracy of each first task executed on the assigned target network model: This amounts to mere reception of data and is insignificant extra-solution activity (MPEP 2106.05(g)).
Step 2B: The further additional element(s) of the claim, taken alone or in combination, do not amount to significantly more than the recited judicial exception(s)
for each candidate combination, obtaining a task operation accuracy of each first task executed on the assigned target network model: This is an instance of retrieving information from memory, a limitation known to be well-understood, routine, and conventional (MPEP 2106.05(d) II.)
Claim 12
Step 1: The claim recites a process, as in claim 1
Step 2A Prong 1: The claim recites the following further judicial exception(s)
obtaining the combination operation accuracy of the candidate combination by weighting the task operation accuracy of each first task based on the weight of each first task: This can be performed as a mental process. One can merely calculate a weighted sum of task operation accuracies across all first tasks.
Step 2A Prong 2: The judicial exception(s) are not integrated into a practical application through the further additional element(s)
obtaining a weight of each first task: This amounts to mere reception of data and is insignificant extra-solution activity (MPEP 2106.05(g)).
Step 2B: The further additional element(s) of the claim, taken alone or in combination, do not amount to significantly more than the recited judicial exception(s)
obtaining a weight of each first task: This is an instance of retrieving information from memory, a limitation known to be well-understood, routine, and conventional (MPEP 2106.05(d) II.)
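Examiner’s note: For illustration only, a minimal Python sketch of the interpretation applied to claims 11-12 (the combination operation accuracy as a weighted sum of the per-task operation accuracies). The weights and accuracies are hypothetical.

    def combination_operation_accuracy(task_accuracies, task_weights):
        # Weight each first task's operation accuracy and sum across the combination.
        return sum(w * a for w, a in zip(task_weights, task_accuracies))

    print(combination_operation_accuracy([0.9, 0.8, 0.95], [0.5, 0.3, 0.2]))  # ≈ 0.88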
Claim 13
Step 1: The claim recites a process, as in claim 1
Step 2A Prong 1: The claim recites no further judicial exception(s)
Step 2A Prong 2: The judicial exception(s) are not integrated into a practical application through the further additional element(s)
in response to receiving a second task within a target task processing cycle, sorting second tasks to be processed within the target task processing cycle: This is mere instruction to sort tasks in a generic manner (MPEP 2106.05(f)).
querying the target mapping relation for the second tasks in sequence to obtain a target network model corresponding to the currently queried second task: This is mere instruction to obtain a target network model from a target mapping relation in a generic manner (MPEP 2106.05(f)).
issuing the currently queried second task to the target network model on the prediction machine for processing: This is mere instruction to issue a task to a model on a generic computer (MPEP 2106.05(f)).
Step 2B: The further additional element(s) of the claim, taken alone or in combination, do not amount to significantly more than the recited judicial exception(s)
in response to receiving a second task within a target task processing cycle, sorting second tasks to be processed within the target task processing cycle: This is mere instruction to sort tasks in a generic manner (MPEP 2106.05(f)).
querying the target mapping relation for the second tasks in sequence to obtain a target network model corresponding to the currently queried second task: This is mere instruction to obtain a target network model from a target mapping relation in a generic manner (MPEP 2106.05(f)).
issuing the currently queried second task to the target network model on the prediction machine for processing: This is mere instruction to issue a task to a model on a generic computer (MPEP 2106.05(f)).
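Examiner’s note: For illustration only, a minimal Python sketch of claim 13 as construed (sorting the second tasks received within the target task processing cycle, querying the deployed target mapping relation in sequence, and issuing each task to its target network model). The mapping and the issue callback are hypothetical.

    def dispatch(second_tasks, target_mapping, issue):
        # Query the target mapping relation for each sorted second task and issue it.
        for task in sorted(second_tasks):
            issue(task, target_mapping[task])

    dispatch(["t2", "t1"], {"t1": "model_a", "t2": "model_b"},
             lambda task, model: print(f"issue {task} -> {model}"))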
Claim 14
Step 1: The claim recites “An electronic device” comprising a processor and a memory, and is therefore directed to the statutory category of machine
Step 2A Prong 1: The claim recites the following judicial exception(s)
selecting a target combination with a maximum combination operation accuracy from the at least one candidate combination: This can be performed as a mental process. One can merely observe the target combination with maximum accuracy.
Step 2A Prong 2: The judicial exception(s) are not integrated into a practical application through the following additional element(s)
at least one processor; and a memory communicatively coupled to the at least one processor; wherein, the memory stores instructions executable by the at least one processor, when the instructions are executed by the at least one processor, the at least one processor is enabled to implement the following operations: This is mere instruction to execute the recited judicial exception(s) on generic computer hardware (MPEP 2106.05(f)).
obtaining N first tasks and K network models, wherein N and K are positive integers greater than or equal to 1: This amounts to mere reception of data and is insignificant extra-solution activity (MPEP 2106.05(g)).
allocating the N first tasks to the K network models for operation, to obtain at least one candidate combination of tasks and network models, wherein each candidate combination comprises a mapping relation between the N first tasks and the K network models: This is mere instruction to obtain at least one candidate combination based on tasks and models in a generic manner (MPEP 2106.05(f)).
deploying a target mapping relation comprised in the target combination and the K network models on a prediction machine: This is mere instruction to deploy a target mapping relation on a generic computer (MPEP 2106.05(f)).
Step 2B: The following additional element(s) of the claim, taken alone or in combination, do not amount to significantly more than the recited judicial exception(s)
at least one processor; and a memory communicatively coupled to the at least one processor; wherein, the memory stores instructions executable by the at least one processor, when the instructions are executed by the at least one processor, the at least one processor is enabled to implement the following operations: This is mere instruction to execute the recited judicial exception(s) on generic computer hardware (MPEP 2106.05(f)).
obtaining N first tasks and K network models, wherein N and K are positive integers greater than or equal to 1: This is an instance of retrieving information from memory, a limitation known to be well-understood, routine, and conventional (MPEP 2106.05(d) II.)
allocating the N first tasks to the K network models for operation, to obtain at least one candidate combination of tasks and network models, wherein each candidate combination comprises a mapping relation between the N first tasks and the K network models: This is mere instruction to obtain at least one candidate combination based on tasks and models in a generic manner (MPEP 2106.05(f)).
deploying a target mapping relation comprised in the target combination and the K network models on a prediction machine: This is mere instruction to deploy a target mapping relation on a generic computer (MPEP 2106.05(f)).
Claims 15-19
Step 1: Claims 15-19 recite a machine, as in claim 14.
Step 2A Prong 1: Claims 15-19 recite the same judicial exception(s) as claims 2, 4-5, 11, and 13, respectively.
Step 2A Prong 2: The judicial exception(s) are not integrated into a practical application through any additional elements. The analysis of claims 15-19 at this step mirrors that of claims 2, 4-5, 11, and 13, respectively, with the exception that claims 15-19 are directed to “at least one processor; and a memory communicatively coupled to the at least one processor; wherein, the memory stores instructions executable by the at least one processor, when the instructions are executed by the at least one processor, the at least one processor is enabled to implement the following operations”, said operations mirroring those of claims 2, 4-5, 11, and 13. This is mere instruction to apply the exceptions using generic computer equipment (MPEP 2106.05(f)).
Step 2B: The additional element(s) of the claim, taken alone or in combination, do not amount to significantly more than the recited judicial exception(s). The analysis of claims 15-19 at this step mirrors that of claims 2, 4-5, 11, and 13, with the exception that claims 15-19 are directed to “at least one processor; and a memory communicatively coupled to the at least one processor; wherein, the memory stores instructions executable by the at least one processor, when the instructions are executed by the at least one processor, the at least one processor is enabled to implement the following operations”, said operations mirroring those of claims 2, 4-5, 11, and 13. This is mere instruction to apply the exceptions using generic computer equipment (MPEP 2106.05(f)).
Claim 20
Step 1: The claim recites “A non-transitory computer-readable storage medium”, and is therefore directed to the statutory category of article of manufacture
Step 2A Prong 1: The claim recites the judicial exception(s) of claim 1
Step 2A Prong 2: The judicial exception(s) are not integrated into a practical application through the following additional element(s)
A non-transitory computer-readable storage medium having computer instructions stored thereon, wherein the computer instructions are configured to cause a computer to implement the method of claim 1: This is mere instruction to execute the recited judicial exception(s) with generic computer hardware (MPEP 2106.05(f)).
Step 2B: The following additional element(s) of the claim, taken alone or in combination, do not amount to significantly more than the recited judicial exception(s)
A non-transitory computer-readable storage medium having computer instructions stored thereon, wherein the computer instructions are configured to cause a computer to implement the method of claim 1: This is mere instruction to execute the recited judicial exception(s) with generic computer hardware (MPEP 2106.05(f)).
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-3, 14-15, and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Tamai (INFORMATION PROCESSING APPARATUS, IDENTIFICATION SYSTEM, SETTING METHOD, AND PROGRAM, published 7/20/2019, US 2019/0188846 A1).
Regarding claim 1, Tamai discloses [a] multi-task deployment method, comprising:
obtaining N first tasks and K network models, wherein N and K are positive integers greater than or equal to 1:
“The management apparatus 100 is provided with a plurality of classifiers (K models) with at least different characteristics to inspect workpieces and selects from these classifiers a classifier to be executed by the image processing apparatus 200.” (Tamai, [0044])
“The algorithm of a machine learning model may include, but is not limited to, a support vector machine, logistic regression, and a neural network, for example.” (Tamai, [0064])
“the management apparatus 100 generates a plurality of test images (N tasks) to measure the identification accuracy and the execution time of each classifier for a specific object of inspection.” (Tamai, [0045])
“In an aspect, the test images may include a plurality of first images (first tasks), a correct answer about each of the first test images being that the first test image contains an object of detection” (Tamai, [0013])
allocating the N first tasks to the K network models for operation, to obtain at least one candidate combination of tasks and network models, wherein each candidate combination comprises a mapping relation between the N first tasks and the K network models:
“Then, the management apparatus 100 measures the identification accuracy and the execution time of each of the plurality of classifiers (models) by causing each classifier to identify the plurality of generated test images (tasks)” (Tamai, [0045]).
“Next, the management apparatus 100 selects, as the classifier to be executed by the image processing apparatus 200, a classifier whose execution time meets a predetermined condition from the classifiers whose measured identification accuracy falls within the allowable range of identification accuracy required by the user as the performance of the image processing apparatus 200.” (Tamai, [0046]). All classifiers whose execution times meet a predetermined condition and whose accuracy falls within the allowable range are candidates for selection, thus they are candidate combinations.
selecting a target combination with a maximum combination operation accuracy from the at least one candidate combination: “In an aspect, the selection unit may be configured to select the classifier (target combination) that has the highest identification accuracy (combination operation accuracy) if there is no classifier whose identification accuracy measured by the measurement unit falls under the first condition.” (Tamai, [0016])
deploying a target mapping relation comprised in the target combination and the K network models on a prediction machine:
“In an aspect, an identification system (prediction machine) may be provided that includes the foregoing information processing apparatus and the foregoing identification apparatus in communication with the information processing apparatus; in which the identification apparatus comprises: a receiving unit configured to receive a classifier selected by the information processing apparatus from the information processing apparatus and storing the classifier in a storage unit; and an identifying unit configured to use a classifier stored in the storage unit to identify an object. According to an aspect, an identification system can be realized that uses a classifier selected by the information processing apparatus to identify an object.” (Tamai, [0019])
“According to an aspect, among the plurality of classifiers for inspecting objects whose measured identification accuracy falls within a predetermined allowable range, a classifier (target combination) whose execution time meets a predetermined condition can be selected to be executed by the identification apparatus (prediction machine)” (Tamai, [0021])
Examiner’s note: The “target mapping relation” in this case is the function that produces a single target combination
Regarding claim 2, the rejection of claim 1 in view of Tamai is incorporated. Tamai further discloses a method, wherein allocating the N first tasks to the K network models for operation, to obtain the at least one candidate combination of the tasks and the network models, comprises:
in response to the N first tasks being allocated, obtaining a time consumption of task execution of an alternative combination of the tasks and the network models obtained by allocating of the N first tasks: “the measurement unit may be configured to measure the identification accuracy and the execution time (time consumption) of each of the plurality of classifiers (alternative combination[s]) by causing each classifier to identify the test images” (Tamai, [0010]). The classifiers, before being filtered by accuracy and time consumption to obtain candidate combinations, are alternative combinations.
in response to the time consumption of the alternative combination satisfying a schedulable constraint parameter, determining the alternative combination as the candidate combination:
“the selection unit may be configured to select a classifier (candidate combination) whose execution time meets the second condition” (Tamai, [0013]).
“the classifier (alternative combination) whose execution time meets the second condition may be the classifier with the shortest execution time (schedulable constraint parameter)” (Tamai, [0018]). The execution time of the selected classifier places scheduling constraints on its usage.
Regarding claim 3, the rejection of claim 2 in view of Tamai is incorporated. Tamai discloses a method, further comprising: in response to the time consumption of the alternative combination not satisfying the schedulable constraint parameter, discarding the alternative combination and obtaining a next alternative combination: “the management apparatus 100 selects, as the classifier to be executed by the image processing apparatus 200, a classifier (alternative combination) whose execution time (time consumption) meets a predetermined condition (schedulable constraint parameter) from the classifiers whose measured identification accuracy falls within the allowable range of identification accuracy” (Tamai, [0046]). Alternative combinations whose execution times fall below the second condition are removed from selection consideration and are thus discarded, while the remaining next alternative combinations remain candidates for selection, pending their accuracy relative to the allowable range.
Regarding claim 14, Tamai discloses [a]n electronic device, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein, the memory stores instructions executable by the at least one processor, when the instructions are executed by the at least one processor, the at least one processor is enabled to implement the following operations: “The processor 101 implements the various functions of the management apparatus 100 by deploying a program (instruction codes) 103A stored in the storage device 103 on the memory 102 and executing the program.” (Tamai, [0050])
The following operations comprising:
obtaining N first tasks and K network models, wherein N and K are positive integers greater than or equal to 1:
“The management apparatus 100 is provided with a plurality of classifiers (K models) with at least different characteristics to inspect workpieces and selects from these classifiers a classifier to be executed by the image processing apparatus 200.” (Tamai, [0044])
“The algorithm of a machine learning model may include, but is not limited to, a support vector machine, logistic regression, and a neural network, for example.” (Tamai, [0064])
“the management apparatus 100 generates a plurality of test images (N tasks) to measure the identification accuracy and the execution time of each classifier for a specific object of inspection.” (Tamai, [0045])
“In an aspect, the test images may include a plurality of first images (first tasks), a correct answer about each of the first test images being that the first test image contains an object of detection” (Tamai, [0013])
allocating the N first tasks to the K network models for operation, to obtain at least one candidate combination of tasks and network models, wherein each candidate combination comprises a mapping relation between the N first tasks and the K network models:
“Then, the management apparatus 100 measures the identification accuracy and the execution time of each of the plurality of classifiers (models) by causing each classifier to identify the plurality of generated test images (tasks)” (Tamai, [0045]).
“Next, the management apparatus 100 selects, as the classifier to be executed by the image processing apparatus 200, a classifier whose execution time meets a predetermined condition from the classifiers whose measured identification accuracy falls within the allowable range of identification accuracy required by the user as the performance of the image processing apparatus 200.” (Tamai, [0046]). All classifiers whose execution times meet a predetermined condition and whose accuracy falls within the allowable range are candidates for selection, thus they are candidate combinations.
selecting a target combination with a maximum combination operation accuracy from the at least one candidate combination: “In an aspect, the selection unit may be configured to select the classifier (target combination) that has the highest identification accuracy (combination operation accuracy) if there is no classifier whose identification accuracy measured by the measurement unit falls under the first condition.” (Tamai, [0016])
deploying a target mapping relation comprised in the target combination and the K network models on a prediction machine:
“In an aspect, an identification system (prediction machine) may be provided that includes the foregoing information processing apparatus and the foregoing identification apparatus in communication with the information processing apparatus; in which the identification apparatus comprises: a receiving unit configured to receive a classifier selected by the information processing apparatus from the information processing apparatus and storing the classifier in a storage unit; and an identifying unit configured to use a classifier stored in the storage unit to identify an object. According to an aspect, an identification system can be realized that uses a classifier selected by the information processing apparatus to identify an object.” (Tamai, [0019])
“According to an aspect, among the plurality of classifiers for inspecting objects whose measured identification accuracy falls within a predetermined allowable range, a classifier (target combination) whose execution time meets a predetermined condition can be selected to be executed by the identification apparatus (prediction machine)” (Tamai, [0021])
Examiner’s note: The “target mapping relation” in this case is the function that produces a single target combination
The analysis of claim 15 mirrors that of claim 2, with the exception that claim 15 is directed to generic computer hardware which executes the method of claim 2. This generic hardware is taught by Tamai, as discussed regarding claim 14. Thus, claim 15 is rejected under the same rationales used for claim 2.
Regarding claim 20, the rejection of claim 1 in view of Tamai is incorporated. Tamai further discloses [a] non-transitory computer-readable storage medium having computer instructions stored thereon, wherein the computer instructions are configured to cause a computer to implement the method of claim 1: “A non-transitory computer-readable recording medium storing a program, when read and executed, causing a computer to perform operations” (Tamai, Claim 17).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 5-6 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Tamai (INFORMATION PROCESSING APPARATUS, IDENTIFICATION SYSTEM, SETTING METHOD, AND PROGRAM, published 7/20/2019, US 2019/0188846 A1) in view of Rabinovich et al. (META-LEARNING FOR MULTI-TASK LEARNING FOR NEURAL NETWORKS, issued 6/29/2021, US 11,048,978 B2), hereafter referred to as Rabinovich, and further in view of Jiang et al. (EXTENSIBLE SCHEDULING OF MESSAGES ON TIME-TRIGGERED BUSSES, published 10/26/2006, US 2006/0242252 A1), hereafter referred to as Jiang.
Regarding claim 5, the rejection of claim 2 in view of Tamai is incorporated. While Tamai fails to disclose the further limitations of the claim, Rabinovich discloses a method, wherein obtaining the time consumption of task execution of the alternative combination of the tasks and the network models obtained by allocating of the N first tasks comprises:
obtaining a present Worst Case Execution Time (WCET) of each first task of the N first tasks in the alternative combination when the first task is executed on an assigned target network model; and obtaining a time consumption of the alternative combination based on the present WCET of each first task and a present task processing cycle: “In various embodiments multi-task learning is performed in a form of weighed linear sum (cost of the alternative combination) of the loss functions L_k (present cost of each first task) for K different tasks with their associated weights a_k: [equation image: the weighted linear sum of the K task losses]. This example multi-task network is denoted a child network (network model) as an introduction to how to use a meta-network to learn such a child network in a meta-learning framework.” (Rabinovich, column 5, lines 44-46)
Rabinovich relates to neural architecture searches for multi-task learning and is analogous to the claimed invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Tamai to calculate the cost of a model by weighted summing of individual cost functions of different tasks on the model, as disclosed by Rabinovich. This method allows dynamic weights to be assigned to each task, which can be balanced to improve performance. See Rabinovich, column 1, lines 63-66 & column 8, lines 59-63.
While Rabinovich fails to disclose the further limitations of the claim, Jiang discloses obtaining a present Worst Case Execution Time (WCET) of each first task of the N first tasks:
“According to the invention, an objective function (loss) for schedulability and extensibility is defined that includes a peak usage request and an average processor usage request for each processor 12 and 14. Particularly, from the identified task execution time-windows, the algorithm derives a usage request function for each processor 12 and 14, and then computes the peak usage request and the average usage requests for each processor 12 and 14 at box 56. First, the usage request function for each task t_i^j, f_i^j(t): [0, T] → R, is defined as: [equation image] Where … C_i^j is the worst case execution time of t_i^j” (Jiang, [0034])
“From the usage request function for each task 16, the usage request function for each processor p_i, g_i(t): [0, T] → R, is defined as: [equation image]” (Jiang, [0035])
“Then, the peak usage request for each processor 12, 14 is determined as: [equation image]” (Jiang, [0036])
Jiang relates to calculating worst-case execution times for task execution and is analogous to the claimed invention. The combination of Tamai and Rabinovich teaches a method of selecting alternative combinations of networks and tasks based on a time consumption metric for each combination. The claimed invention improves upon this method by using worst-case execution time as a metric for time consumption. Jiang teaches a method of measuring time consumption on each of a plurality of network-task combinations based on worst-case execution time, applicable to the combination of Tamai and Rabinovich. A person of ordinary skill in the art would have recognized that worst-case execution times serve as upper bounds on program performance, would lead to the predictable result of time consumptions being estimates of worst performance on a target system, and would improve the known device by enabling it to satisfy performance requirements of target real-time systems (MPEP 2143 I. (D) Applying a known technique to a known device (method, or product) ready for improvement to yield predictable results).
Regarding claim 6, the rejection of claim 5 in view of Tamai, Rabinovich, and Jiang is incorporated. Rabinovich, in combination with Jiang, discloses a method, wherein obtaining the time consumption of the alternative combination based on the present WCET of each first task and the present task processing cycle, comprises: obtaining a total WCET of the alternative combination based on the present WCET of each first task; and obtaining the time consumption of the alternative combination based on the total WCET of the alternative combination and the present task processing cycle: (Rabinovich) “In various embodiments multi-task learning is performed in a form of weighed linear sum (total WCET of the alternative combination) of the loss functions L_k (present WCET of each first task) for K different tasks with their associated weights a_k: [equation image: the weighted linear sum of the K task losses]” (Rabinovich, column 5, lines 44-46). As noted in the rejection for claim 5, the combination of Rabinovich and Jiang substitutes the WCET-based loss of Jiang for the task loss L_k of Rabinovich.
Rabinovich relates to neural architecture searches for multi-task learning and is analogous to the claimed invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Tamai, Rabinovich, and Jiang to calculate the cost of a model by weighted summing of individual WCET costs of different tasks on the model, as disclosed by Rabinovich. This method allows dynamic weights to be assigned to each task, which can be balanced to improve performance. See Rabinovich, column 1, lines 63-66 & column 8, lines 59-63.
The analysis of claim 17 mirrors that of claim 5, with the exception that claim 17 is directed to generic computer hardware which executes the methods of claim 5. This generic hardware is taught by Tamai, as discussed regarding claim 14. Thus, claim 17 is rejected under the same rationales used for claim 5.
Claims 7-8 are rejected under 35 U.S.C. 103 as being unpatentable over Tamai (INFORMATION PROCESSING APPARATUS, IDENTIFICATION SYSTEM, SETTING METHOD, AND PROGRAM, published 7/20/2019, US 2019/0188846 A1) in view of Rabinovich et al. (META-LEARNING FOR MULTI-TASK LEARNING FOR NEURAL NETWORKS, issued 6/29/2021, US 11,048,978 B2), hereafter referred to as Rabinovich, and further in view of Jiang et al. (EXTENSIBLE SCHEDULING OF MESSAGES ON TIME-TRIGGERED BUSSES, published 10/26/2006, US 2006/0242252 A1), hereafter referred to as Jiang, Meng et al. (Nonlinear approach for estimating WCET during programming phase, published 2016, Cluster Comput (2016) 19:1449-1459), hereafter referred to as Meng, and Wada et al. (CAPILLARY FOR OPTICAL FIBER, FERRULE FOR OPTICAL CONNECTOR, AND OPTICAL-FIBER-FIXED CAPILLARY, published 5/22/2003, US 2003/0095753 A1), hereafter referred to as Wada.
Regarding claim 7, the rejection of claim 6 in view of Tamai, Rabinovich, and Jiang is incorporated. While the aforementioned references fail to disclose the further limitations of claim 7, Meng discloses a method of, for each first task, obtaining a plurality of historical WCETs corresponding to the first task: “The basic idea of NL-WCET is to predict WCET of a program A (first task) by exploiting amount of historical data from WCET analysis of other programs (including variations of the program A).” (Meng, page 1450, right column, paragraph 2).
Meng relates to WCET approximation and is analogous to the claimed invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the existing combination to estimate WCET values from historical WCET data, as disclosed by Meng. Meng’s method has higher efficiency than conventional WCET analysis, making it useful for discovering timeline defects as soon as possible. See Meng, page 1449, Abstract.
While Meng fails to disclose the further limitations of the claim, Wada discloses a method of obtaining an average WCET of the first task on the target network model based on the plurality of historical WCETs and the present WCET; and obtaining the total WCET of the alternative combination based on the average WCET of each first task: “To estimate the maximum value (total value) of connection loss, the value 3 time as much as the standard deviation σ is added to the average value of connection loss and the maximum connection loss is calculated in 99.7% probability” (Wada, [0063]). This method is applicable to sets of samples of statistical measures, such as a set of WCET values.
Wada relates to statistically approximating performance metrics and is analogous to the claimed invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the existing combination to approximate a measure of WCET by adding three standard deviations to the average, as disclosed by Wada. This is a means of estimating a maximum value from a set of statistical samples that eliminates the highest 0.3% of outliers from consideration. See Wada, [0063]. As one of ordinary skill in the art would know, this is a standard technique known as a “three-sigma threshold”.
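Examiner’s note: For illustration only, a minimal Python sketch of the three-sigma threshold technique attributed to Wada, as applied to a set of WCET samples (average plus three standard deviations, covering approximately 99.7% of a normal distribution). The sample values are hypothetical.

    from statistics import mean, stdev

    def three_sigma_estimate(samples):
        # Estimate a maximum value as the average plus three standard deviations.
        return mean(samples) + 3 * stdev(samples)

    print(three_sigma_estimate([3.0, 3.2, 2.9, 3.1]))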
Regarding claim 8, the rejection of claim 7 in view of Tamai, Rabinovich, Jiang, Meng, and Wada is incorporated. Wada, in combination with Meng, discloses a method, wherein obtaining the total WCET of the alternative combination based on the average WCET of each first task, comprises: obtaining a first standard deviation of the plurality of historical WCETs and the present WCET; obtaining a first sum value of the average WCET and the first standard deviation: “To estimate the maximum value (total value) of connection loss, the value 3 time as much as the standard deviation σ is added (summed) to the average value of connection loss and the maximum connection loss is calculated in 99.7% probability” (Wada, [0063]). As described for the rejection of claim 7, Wada’s three-sigma threshold method is applicable to a set of samples of WCETs.
Wada relates to statistically approximating performance metrics and is analogous to the claimed invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the existing combination to approximate a measure of WCET by adding three standard deviations to the average, as disclosed by Wada. This is a means of estimating a maximum value from a set of statistical samples that eliminates the highest 0.3% of outliers from consideration. See Wada, [0063].
Claims 11-12 & 18 are rejected under 35 U.S.C. 103 as being unpatentable over Tamai (INFORMATION PROCESSING APPARATUS, IDENTIFICATION SYSTEM, SETTING METHOD, AND PROGRAM, published 7/20/2019, US 2019/0188846 A1) in view of Rabinovich et al. (META-LEARNING FOR MULTI-TASK LEARNING FOR NEURAL NETWORKS, published 6/29/2021, US 11,048,978 B2), hereafter referred to as Rabinovich.
Regarding claim 11, the rejection of claim 1 in view of Tamai is incorporated. While Tamai fails to disclose the further limitations of the claim, Rabinovich discloses a method, wherein for each candidate combination, obtaining a task operation accuracy of each first task executed on the assigned target network model; and obtaining a combination operation accuracy of the candidate combination based on the task operation accuracy of each first task in the candidate combination:
“In various embodiments multi-task learning is performed in a form of weighed linear sum (combination operation loss of the candidate combination) of the loss functions L_k (task operation loss of each first task) for K different tasks with their associated weights a_k:

[media_image1.png: the weighted linear sum, of the form L_total = Σ_k a_k · L_k]

This example multi-task network is denoted a child network (assigned target network model) as an introduction to how to use a meta-network to learn such a child network in a meta-learning framework.” (Rabinovich, column 5, lines 44-46)
“A cross-entropy loss function was used for semantic segmentation to learn pixel-wise class probabilities and to report test set accuracies” (Rabinovich, column 7, lines 20-25). Cross-entropy loss, representing accuracy, can serve as the task loss L_k in the linear combination formula given above.
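Examiner’s note: for illustration only, a minimal Python sketch of the weighted linear sum quoted above (hypothetical names; not code from Rabinovich):

```python
def combined_loss(task_losses, task_weights):
    """Weighted linear sum of per-task losses: L_total = sum_k a_k * L_k.

    task_losses holds each L_k (e.g., a per-task cross-entropy loss) and
    task_weights holds the associated weights a_k.
    """
    return sum(a_k * L_k for a_k, L_k in zip(task_weights, task_losses))

# Example: K = 3 tasks with per-task losses and weights.
print(combined_loss([0.42, 0.17, 0.88], [0.5, 0.3, 0.2]))
```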
Rabinovich relates to neural architecture searches for multi-task learning and is analogous to the claimed invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Tamai to calculate the cost of a model by weighted summing of individual cost functions of different tasks on the model, as disclosed by Rabinovich. This method allows dynamic weights to be assigned to each task, which can be balanced to improve performance. See Rabinovich, column 1, lines 63-66 & column 8, lines 59-63.
Regarding claim 12, the rejection of claim 1 in view of Tamai is incorporated. While Tamai fails to disclose the further limitations of the claim, Rabinovich discloses a method which comprises: obtaining a weight of each first task; and obtaining the combination operation accuracy of the candidate combination by weighting the task operation accuracy of each first task based on the weight of each first task:
“In various embodiments multi-task learning is performed in a form of weighed linear sum (combination operation loss of the candidate combination) of the loss functions L_k (task operation loss of each first task) for K different tasks with their associated weights a_k (weight of each first task):

[media_image1.png: the weighted linear sum, of the form L_total = Σ_k a_k · L_k]

This example multi-task network is denoted a child network (assigned target network model) as an introduction to how to use a meta-network to learn such a child network in a meta-learning framework.” (Rabinovich, column 5, lines 44-46)
“A cross-entropy loss function was used for semantic segmentation to learn pixel-wise class probabilities and to report test set accuracies” (Rabinovich, column 7, lines 20-25). Cross-entropy loss, representing accuracy, can serve as the task loss L_k in the linear combination formula given above.
Rabinovich relates to neural architecture searches for multi-task learning and is analogous to the claimed invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Tamai to calculate the cost of a model by weighted summing of individual cost functions of different tasks on the model, as disclosed by Rabinovich. This method allows dynamic weights to be assigned to each task, which can be balanced to improve performance. See Rabinovich, column 1, lines 63-66 & column 8, lines 59-63.
The analysis of claim 18 mirrors that of claim 11, with the exception that claim 18 is directed to generic computer hardware which executes the method of claim 11. This generic hardware is taught by Tamai, as discussed regarding claim 14. Thus, claim 18 is rejected under the same rationales used for claim 11.
Claims 13 & 19 are rejected under 35 U.S.C. 103 as being unpatentable over Tamai (INFORMATION PROCESSING APPARATUS, IDENTIFICATION SYSTEM, SETTING METHOD, AND PROGRAM, published 7/20/2019, US 2019/0188846 A1) in view of Binns et al. (TASK SCHEDULING AND MESSAGE PASSING, published 5/20/2003, US 6,567,840 B1), hereafter referred to as Binns.
Regarding claim 13, the rejection of claim 1 in view of Tamai is incorporated. Tamai further discloses a method, wherein after deploying the target mapping relation comprised in the target combination and the K network models on the prediction machine, the method further comprises:
querying the target mapping relation for the second tasks in sequence to obtain a target network model corresponding to the currently queried second task:
Examiner’s note: All of these operations are performed within their own target[ed] task processing cycle[s].
“In an aspect, the test images may be further configured to include a plurality of second images (second task[s]), a correct answer about each of the second test images being that the second test image contains no object of detection;” (Tamai, [0014])
“the measurement unit may be configured to measure the second probability by causing each of the classifiers to identify the plurality of second test images; and the selection unit may be configured to select (querying the target mapping relation for the second tasks) a classifier (target network model) whose execution time meets the second condition from the classifiers whose second probability measured by the measurement unit falls under the first condition.” (Tamai, [0014]). The target mapping relation, which returns a best classifier from the set of candidates, is queried with the second image set.
“the selection unit 540 sorts the classifiers 501 whose measured defect identification rate and false positive rate are at or above the respective predetermined allowable ranges in ascending order of execution time (S104), and the selection unit 540 selects the classifier 501 with the shortest execution time as the classifier to be executed by the image processing apparatus 200 (S105).” (Tamai, [0104]). The target mapping relation is queried in sequence by evaluating classifiers in a particular sorted order.
issuing the currently queried second task to the target network model on the prediction machine for processing: “According to an aspect, among the plurality of classifiers for inspecting objects whose measured identification accuracy falls within a predetermined allowable range, a classifier (target network model) whose execution time meets a predetermined condition can be selected to be executed by the identification apparatus (prediction machine)” (Tamai, [0021])
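Examiner’s note: for illustration only, a minimal Python sketch of the selection procedure described in Tamai’s steps S104-S105 (hypothetical field names; not code from the reference):

```python
def select_classifier(classifiers, min_detect_rate, max_false_positive_rate):
    """Filter classifiers whose measured rates meet the allowable ranges,
    sort the survivors in ascending order of execution time (S104), and
    select the one with the shortest execution time (S105)."""
    eligible = [c for c in classifiers
                if c["detect_rate"] >= min_detect_rate
                and c["false_positive_rate"] <= max_false_positive_rate]
    eligible.sort(key=lambda c: c["execution_time"])
    return eligible[0] if eligible else None

# Example: the 40 ms classifier meets both rate conditions and is fastest.
classifiers = [
    {"detect_rate": 0.95, "false_positive_rate": 0.02, "execution_time": 40},
    {"detect_rate": 0.97, "false_positive_rate": 0.01, "execution_time": 55},
    {"detect_rate": 0.90, "false_positive_rate": 0.05, "execution_time": 20},
]
print(select_classifier(classifiers, 0.93, 0.03))
```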
While Tamai fails to disclose the further limitations of the claim, Binns discloses a method, wherein in response to receiving a second task within a target task processing cycle, sorting second tasks to be processed within the target task processing cycle: “The method still further includes defining a second list of the plurality of tasks (second tasks), wherein the second list of the plurality of tasks is sorted with the transformed task deadline as the primary key and wherein each transformed task deadline of a task having a first task criticality is less than any transformed task deadline of a task having a task criticality less than the first task criticality.” (Binns, column 2, lines 39-46). Each of the second tasks is sorted within its own target[ed] task processing cycle.
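Examiner’s note: for illustration only, one way to realize the transformed-deadline property Binns recites, sketched in Python (hypothetical representation; not code from the reference):

```python
def sort_by_transformed_deadline(tasks, deadline_horizon):
    """Sort tasks by a transformed deadline so that every transformed
    deadline of a higher-criticality task is less than any transformed
    deadline of a lower-criticality task.

    tasks: (criticality, deadline) pairs, where a lower criticality value
    means more critical; deadlines are assumed to lie in [0, deadline_horizon).
    """
    def transformed(task):
        criticality, deadline = task
        # Offset each deadline into a per-criticality band.
        return criticality * deadline_horizon + deadline

    return sorted(tasks, key=transformed)

# Example: criticality-0 tasks sort before criticality-1 tasks, and each
# band is internally ordered by deadline.
print(sort_by_transformed_deadline([(1, 5), (0, 9), (0, 2), (1, 1)], 100))
```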
Binns relates to multitask processing and is analogous to the claimed invention. Tamai teaches a method of analyzing a plurality of classifiers, each executing a plurality of tasks. Binns teaches a method of sorting tasks. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Tamai and Binns by sorting the pluralities of tasks in Tamai. This would achieve the predictable result of the tasks being organized in an ordered manner, with Tamai’s analysis and Binns’s sorting each performing the same function in combination as they do separately. (MPEP 2143 I. (A): Combining prior art elements according to known methods to yield predictable results).
The analysis of claim 19 mirrors that of claim 13, with the exception that claim 19 is directed to generic computer hardware which executes the method of claim 13. This generic hardware is taught by Tamai, as discussed regarding claim 14. Thus, claim 19 is rejected under the same rationales used for claim 13.
Allowable Subject Matter
Claims 4, 9, 10, and 16 would be allowable over the prior art of record if rewritten or amended to overcome the rejection(s) under 35 U.S.C. 101 and 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA ), second paragraph, set forth in this Office action.
Regarding claim 4, “determining a total number of iterations based on N and K; and in response to the total number of iterations being greater than an iteration number threshold, searching for a next alternative combination through a Particle Swarm Optimization (PSO) algorithm based on a combination operation accuracy of the alternative combination” is not taught by the prior art of record. The closest prior art of record consists of B. Wang et al. (Evolving Deep Neural Networks by Multi-objective Particle Swarm Optimization for Image Classification, published 4/22/2019, arXiv:1904.09035v2, retrieved from https://arxiv.org/abs/1904.09035), hereafter referred to as ‘B. Wang’; M. Wang et al. (DISTRIBUTED FAILOVER FOR MULTI-TENANT SERVER FARMS, published 5/5/2016, US 20160124818 A1), hereafter referred to as ‘M. Wang’; and La Lumondiere et al. (OPTIMIZED ILLUMINATION FOR IMAGING, published 5/1/2014, US 20140118561 A1), hereafter referred to as ‘La Lumondiere’.
B. Wang discloses…
Using particle swarm optimization (PSO) to perform a neural architecture search: “This paper proposes a novel multi-objective optimization method for evolving state-of-the-art deep CNNs in real-life applications, which automatically evolves the non-dominant solutions at the Pareto front. Three major contributions are made: Firstly, a new encoding strategy is designed to encode one of the best state-of-the-art CNNs; With the classification accuracy and the number of floating point operations as the two objectives, a multi-objective particle swarm optimization method is developed to evolve the non-dominant solutions” (B. Wang, page 1, left column, abstract)
… that optimizes models based on a combination accuracy of the alternative combinations: “The overall goal of this paper is to propose a multi-objective particle swarm optimization (MOPSO) method to balance the trade-off between the classification accuracy and the inference latency (time), which is named MOCNN.” (B. Wang, page 2, left column, paragraph 2)
B. Wang does not disclose determining a number of iterations based on a number of tasks and a number of models or using PSO conditionally based on an iteration threshold.
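Examiner’s note: for context, a minimal Python sketch of one canonical (single-objective) particle swarm optimization update step; this is textbook PSO, not code from B. Wang:

```python
import random

def pso_step(positions, velocities, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One PSO update: each particle's velocity is pulled toward its
    personal best (pbest) and the global best (gbest), then its position
    is advanced by the new velocity."""
    for i in range(len(positions)):
        for d in range(len(positions[i])):
            r1, r2 = random.random(), random.random()
            velocities[i][d] = (w * velocities[i][d]
                                + c1 * r1 * (pbest[i][d] - positions[i][d])
                                + c2 * r2 * (gbest[d] - positions[i][d]))
            positions[i][d] += velocities[i][d]

# Example: two particles in two dimensions.
pos = [[0.0, 0.0], [1.0, 1.0]]
vel = [[0.0, 0.0], [0.0, 0.0]]
pso_step(pos, vel, pbest=[[0.5, 0.5], [1.5, 0.5]], gbest=[1.0, 0.5])
print(pos)
```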
M. Wang discloses dynamically executing either an enumerative algorithm or a greedy algorithm to process server-client pairings based on the number of possible server-client combinations:
“In other implementations, additional or alternative algorithms may be used. For example, for a relatively small number of machine and small number of users, an enumeration algorithm may be used instead of the greedy algorithm. In a simplified example, for 5 servers and 3 users on each server, all possible backup combinations may be enumerated, and the one with the minimum failover time may be chosen. The number of combinations can be calculated in this way: for the 3 users in the first machine, each of them will choose a machine as backup, so the number of combinations is 81. The same calculation can be used for users in the other 4 machines. Thus, the total number of combinations is 405.” (M. Wang, [0090])
“For a large number of severs and a large number of users, e.g., 1000 servers and 50 users on each sever, the total combinations would be 1000*(999^50)=9.512*E152, which is a huge number of combinations and is not able to be enumerated in an acceptable time. In this case, the greedy algorithm may be used” (M. Wang, [0091])
“In order to choose between the enumerated algorithm and the greedy algorithm, the equations used to calculate the total number of combinations may be utilized. Specifically, as long as the number of machine and the number of users are given, the total number of combinations can be calculated. If an available computer can enumerate, e.g., one million combinations in several seconds, a threshold may be configured to choose between the enumeration and greedy algorithms, based on an available/allowable time to execute the algorithms.” (M. Wang, [0092])
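Examiner’s note: for illustration only, a minimal Python sketch of the algorithm-selection logic M. Wang describes, following the combination count of the large example quoted above (hypothetical names and threshold; not code from the reference):

```python
def choose_assignment_algorithm(num_servers, users_per_server,
                                enumeration_budget=1_000_000):
    """Pick enumeration when the total number of backup combinations is
    small enough to enumerate in acceptable time; otherwise fall back to
    the greedy algorithm (cf. M. Wang [0090]-[0092]).

    Each user on a server may choose any of the other (num_servers - 1)
    servers as backup, per the 1000*(999^50) form of the large example.
    """
    total = num_servers * (num_servers - 1) ** users_per_server
    return "enumerate" if total <= enumeration_budget else "greedy"

# M. Wang's large example: 1000 servers, 50 users each -> greedy.
print(choose_assignment_algorithm(1000, 50))
```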
M. Wang does not disclose determining a number of iterations based on a number of tasks and a number of network models, using particle swarm optimization, or searching for a combination based on the accuracy of a combination.
La Lumondiere discloses executing particle swarm optimization in response to a number of iterations of a previous genetic algorithm being reached: “in some embodiments, a GSA algorithm may be and/or utilize a hybrid algorithm having aspects of a genetic algorithm and a particle swarm optimizer algorithm. Such hybrid algorithms may retain positive aspects of both genetic algorithms and particle swarm optimizer algorithms and may lead to efficiently determined optimal solutions regardless of problem structure. In one example embodiment, the computer 1208 may run a genetic algorithm for a threshold number of generations (iterations) and/or until a fitness function value or values reaches a predetermined level, at which point a particle swarm optimization algorithm is used until convergence is reached.” (La Lumondiere, [0068])
La Lumondiere does not disclose determining a total number of iterations based on a number of models and a number of tasks or executing PSO based on operation accuracy of alternative combinations.
Therefore, the prior art of record, individually or in combination, does not disclose claim 4 as a whole. Substantially similar claim 16 would be allowable under the same rationale.
Regarding claim 9, the limitation “determining the time consumption of the alternative combination based on the total WCET and the average task processing cycle”, in combination with the other limitations of the claim, is not taught by the prior art of record. The closest prior art of record consists of Bird et al. (ADAPTIVE RESOURCE USAGE LIMITS FOR WORKLOAD MANAGEMENT, published 6/19/2014, US 20140173616 A1), hereafter referred to as ‘Bird’, and Ajmera et al. (EFFICIENT MECHANISM FOR EXECUTING SOFTWARE-BASED SWITCHING PROGRAMS ON HETEROGENOUS MULTICORE PROCESSORS, PCT filed 12/24/2018, US 2022/0012108 A1), hereafter referred to as ‘Ajmera’.
Bird discloses converting application execution time into a proportionate processor utilization metric by dividing processor time by the total processor time available to the system, potentially using historical metrics collected over some number of time intervals: “The set of scheduling tasks performed each interval according to an embodiment of the present invention is illustrated in FIG. 8. At step 810, the scheduler thread collects metrics on the aggregate processor time or processor utilization (e.g., percent of processor capacity) used by all threads doing work on behalf of a particular application or workload since the last collection period. In the case where processor time is the only metric available, the thread will convert the processor time used by each individual workload into a processor utilization metric by dividing it by the total processor time available on the system over that collection interval. The exact calculation will be operating system and/or hardware dependent, and also dependent on whether the system is running under a virtualized environment. Alternatively, the processor utilization may be computed based on metrics collected over the last N intervals, rather than only the most recent interval, as a way to have the system adapt more smoothly to changes in workload. In such cases a moving average may be used to determine the processor utilization compared to the activation threshold discussed below.” (Bird, [0041])
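Examiner’s note: for illustration only, a minimal Python sketch of the conversion and smoothing Bird describes (hypothetical names; not code from the reference):

```python
from collections import deque

def utilization(proc_time_used, total_proc_time_available):
    """Proportionate processor utilization: workload processor time divided
    by the total processor time available over the collection interval."""
    return proc_time_used / total_proc_time_available

class MovingUtilization:
    """Moving average of utilization over the last N intervals, so the
    system adapts more smoothly to changes in workload."""
    def __init__(self, n_intervals):
        self.samples = deque(maxlen=n_intervals)

    def update(self, proc_time_used, total_proc_time_available):
        self.samples.append(utilization(proc_time_used, total_proc_time_available))
        return sum(self.samples) / len(self.samples)

# Example: 3-interval moving average of per-interval utilization.
m = MovingUtilization(3)
for used, avail in [(20, 100), (35, 100), (50, 100), (80, 100)]:
    print(m.update(used, avail))
```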
Bird does not disclose measuring worst-case execution time as a metric for processor time.
Ajmera discloses calculating an average task processing cycle based on an obtained plurality of historical task processing cycles: “The dynamic thread assignment component 150 may then determine a value that is indicative of the number of active processing cycles used by that polling thread 145. The value may be, for example, an average number of active processing cycles per second used by the polling thread 145 over a period of time (T). In one embodiment, the value is a cumulative (historical) average that averages the measurements taken over several intervals.” (Ajmera, [0033]).
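Examiner’s note: for illustration only, a minimal Python sketch of the cumulative (historical) average Ajmera describes (hypothetical names; not code from the reference):

```python
class CumulativeAverage:
    """Running average of active processing cycles per interval."""
    def __init__(self):
        self.count = 0
        self.mean = 0.0

    def update(self, active_cycles):
        # Incremental mean: new_mean = old_mean + (x - old_mean) / n
        self.count += 1
        self.mean += (active_cycles - self.mean) / self.count
        return self.mean

# Example: cumulative average over three measurement intervals.
avg = CumulativeAverage()
for cycles in [1200, 900, 1500]:
    print(avg.update(cycles))
```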
Ajmera does not disclose determining time consumption based on worst case execution time(s) and the average task processing cycle.
Therefore, the prior art of record, individually or in combination, does not disclose claim 9 as a whole. Claim 10 would be allowable at least due to its dependence on claim 9.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Ramezani et al. (Task-Based System Load Balancing in Cloud Computing Using Particle Swarm Optimization, published 2014, Int J Parallel Prog (2014) 42:739–754) discloses a method of using particle swarm optimization to schedule tasks in a load-balancing model.
Wilhelm et al. (The Worst-Case Execution Time Problem - Overview of Methods and Survey of Tools, published 5/8/2008, ACM Transactions on Embedded Computing Systems (TECS), Volume 7, Issue 3, pp. 1-53) discloses that WCETs serve as upper bounds on program performance, which are needed to show that performance requirements of a real-time system are being satisfied.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Aaron P Gormley whose telephone number is (571)272-1372. The examiner can normally be reached Monday - Friday 12:00 PM - 8:00 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michelle T Bechtold can be reached at (571) 431-0762. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AG/
Examiner, Art Unit 2148

/MICHELLE T BECHTOLD/
Supervisory Patent Examiner, Art Unit 2148