Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
1. This initial Office action is based on the application filed on 04/26/2024, in which claims 1-20 are presented for examination.
Status of Claims
2. Claims 1-20 are pending in the application and have been examined below; claims 1, 9, and 17 are presented in independent form.
Information Disclosure Statement
4. No information disclosure statement (IDS) has been filed in this application.
Examiner Notes
5. The examiner cites particular columns and line numbers in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passages as taught by the prior art or discussed by the examiner.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
6. Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The analysis specific to claims 1, 9, and 17 is presented below.
Claims 1, 9 and 17:
Step 1 Analysis:
Claims 1-8 of the instant application are directed to a process.
Claims 9-16 of the instant application are directed to an apparatus.
Claims 17-20 of the instant application are directed to a product.
Thus, the claims fall within statutory categories.
Step 2 Analysis:
Claims 1, 9, and 17 recite:
(a) identifying a list of a plurality of computing components associated with the computing product, wherein the list of the plurality of computing components includes, for each computing component, a plurality of features of the computing component;
(b) retrieving, from a storage device and for each feature of the computing component, a pre-defined weight associated with the feature;
(c) determining, based on the pre-defined weight associated with each feature of the computing component, i) a maximum pre-defined weight of the pre-defined weights associated with respective features for the computing component and ii) a minimum pre-defined weight of the pre-defined weights associated with respective features for the computing component;
(d) determining, based on the maximum pre-defined weight and the minimum pre-defined weight, a plurality of combinations of the features for the computing component;
(e) creating a plurality of configurations of the computing components based on each of the combinations of features of each of the computing components;
(f) determining a total weight of the configuration based on the pre-defined weights of the combination of features for each of the computing components of the configuration;
(g) updating, based on the total weight for the configuration, product specification data associated with the computing product.
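For illustration of the interplay among the recited steps (a)-(g), the process admits a minimal sketch in Python. This is a hypothetical reading only: the numeric weights and the component and feature names are invented (none are drawn from the application or the cited art), and step (d) is read in light of claim 6, i.e., combinations whose summed weight lies between the minimum and maximum pre-defined weights.

```python
# Hypothetical sketch of recited steps (a)-(g); all data and names are invented.
from itertools import chain, combinations, product

# (a) list of components, each with features mapped to (b) pre-defined weights
components = {
    "cpu": {"cores": 4.0, "cache": 2.0, "turbo": 1.0},
    "gpu": {"vram": 3.0, "tensor": 5.0},
}

def feature_combos(weights):
    # (c) maximum and minimum pre-defined weight for the component
    w_max, w_min = max(weights.values()), min(weights.values())
    feats = list(weights)
    all_subsets = chain.from_iterable(
        combinations(feats, r) for r in range(1, len(feats) + 1))
    # (d) combinations whose summed weight lies between min and max (per claim 6)
    return [c for c in all_subsets
            if w_min <= sum(weights[f] for f in c) <= w_max]

# (e) configurations: one feature combination chosen per component
per_component = {name: feature_combos(w) for name, w in components.items()}
configurations = [dict(zip(per_component, combo))
                  for combo in product(*per_component.values())]

# (f) total weight of a configuration from its components' feature weights
def total_weight(config):
    return sum(components[name][f]
               for name, combo in config.items() for f in combo)

# (g) update product specification data based on the total weights
spec = {"best": max(configurations, key=total_weight)}
```

Under these assumed weights the sketch yields eight candidate configurations, of which the highest-weighted pairs the "cores" feature with the "tensor" feature.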
Step 2A -- Prong 1:
Claims 1, 9, and 17 recite the limitations of:
(a) identifying a list of a plurality of computing components associated with the computing product, wherein the list of the plurality of computing components includes, for each computing component, a plurality of features of the computing component;
(c) determining, based on the pre-defined weight associated with each feature of the computing component, i) a maximum pre-defined weight of the pre-defined weights associated with respective features for the computing component and ii) a minimum pre-defined weight of the pre-defined weights associated with respective features for the computing component;
(d) determining, based on the maximum pre-defined weight and the minimum pre-defined weight, a plurality of combinations of the features for the computing component;
(e) creating a plurality of configurations of the computing components based on each of the combinations of features of each of the computing components;
(f) determining a total weight of the configuration based on the pre-defined weights of the combination of features for each of the computing components of the configuration;
Limitations (a) and (c)-(f), as drafted, are processes that, under their broadest reasonable interpretation, cover performance of the limitations in the mind. That is, nothing in the claim elements precludes the steps from practically being performed in the mind or with pen and paper; i.e., "identifying" and "determining" can be performed in the human mind through observation, evaluation, judgment, and opinion with the aid of pen and paper, and "creating" can be performed in the human mind with pen and paper. As such, these limitations fall within the "Mental Processes" grouping of abstract ideas.
Step 2A -- Prong 2:
Claim 1 recites the additional limitations of "computing product", "computing component", "computing components", and "a storage device". These limitations are recited at a high level of generality, i.e., they are merely instructions to implement the abstract idea on a generic computer or merely use a computer as a tool to perform the abstract idea. Claim 9 recites the additional limitations of "an information handling system", "a processor", "memory", "computing components", "computing component", "computing product", and "a storage device", which are likewise recited at a high level of generality. Claim 17 recites the additional limitations of "a non-transitory computer-readable medium", "one or more computers", "computing components", "computing component", "computing product", and "a storage device", which are likewise recited at a high level of generality. Additionally, limitations (b) and (g) constitute well-understood, routine, and conventional activity. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
Step 2B:
As explained with respect to Step 2A Prong Two, the additional elements in the claims are recited at a high level of generality and amount to no more than mere instructions to apply the exception using generic computer components. The same analysis applies here at Step 2B: simply adding extra-solution activity, well-understood, routine, and conventional activity, or generic computer components does not integrate a judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B, since the courts have identified functions such as gathering, displaying, updating, retrieving (transmitting/receiving), and storing data as well-understood, routine, and conventional activity. See MPEP 2106.05(d) and MPEP 2106.05(g). Therefore, the claims are ineligible.
Dependent claims
Additionally, claims 2, 10, and 18 recite "identifying one or more physical constraints of the computing product; and creating the plurality of configurations of the computing components based on i) each of the combinations of features of each of the computing components and ii) the physical constraints of the computing product", which, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. That is, nothing in the claim elements precludes the steps from practically being performed in the mind or with pen and paper; i.e., "identifying" can be performed in the human mind through observation, evaluation, judgment, and opinion with the aid of pen and paper, and "creating" can be performed in the human mind with pen and paper. As such, this limitation falls within the "Mental Processes" grouping of abstract ideas. Accordingly, these limitations do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea or provide an inventive concept, and thus do not amount to significantly more than the abstract idea. As such, these claims fail both Step 2A Prong 2 and Step 2B. Therefore, claims 2, 10, and 18 are ineligible.
Additionally, claims 3, 11, and 19 recite "wherein creating the plurality of configurations of the computing components includes: iteratively, for each configuration: creating the configuration based on the combination of features of each of the computing components; identifying the physical constraints associated with the computing components of the configuration", which, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. That is, nothing in the claim elements precludes the steps from practically being performed in the mind or with pen and paper; i.e., "identifying" can be performed in the human mind through observation, evaluation, judgment, and opinion with the aid of pen and paper, and "creating" can be performed in the human mind with pen and paper. As such, this limitation falls within the "Mental Processes" grouping of abstract ideas. The additional limitation "and updating the configuration based on the physical constraints associated with the computing components of the configuration" constitutes well-understood, routine, and conventional activity. Accordingly, these limitations do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea or provide an inventive concept, and thus do not amount to significantly more than the abstract idea. As such, these claims fail both Step 2A Prong 2 and Step 2B. Therefore, claims 3, 11, and 19 are ineligible.
Additionally, claims 4, 12, and 20 recite "wherein creating the plurality of configurations is performed by a machine learning model", which is merely insignificant extra-solution activity of comparing/judging data. Accordingly, these limitations do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea or provide an inventive concept, and thus do not amount to significantly more than the abstract idea. As such, these claims fail both Step 2A Prong 2 and Step 2B. Therefore, claims 4, 12, and 20 are ineligible.
Additionally, claim 5 recites "wherein the machine learning model is a recurrent neural network model", which is merely insignificant extra-solution activity of defining data. Accordingly, this limitation does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea or provide an inventive concept, and thus does not amount to significantly more than the abstract idea. As such, the claim fails both Step 2A Prong 2 and Step 2B. Therefore, claim 5 is ineligible.
Additionally, claim 6 recites "wherein determining the plurality of combinations of the features for the computing component includes: determining, for each combination of features, a weight of the combination of features; and determining, for each combination of features, that the weight of the combination of features is between the maximum pre-defined weight and the minimum pre-defined weight", which, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. That is, nothing in the claim elements precludes the steps from practically being performed in the mind or with pen and paper; i.e., "determining" can be performed in the human mind through observation, evaluation, judgment, and opinion with the aid of pen and paper. As such, this limitation falls within the "Mental Processes" grouping of abstract ideas. Accordingly, these limitations do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea or provide an inventive concept, and thus do not amount to significantly more than the abstract idea. As such, the claim fails both Step 2A Prong 2 and Step 2B. Therefore, claim 6 is ineligible.
Additionally, claim 7 recites "wherein creating the plurality of configurations of the computing components includes creating all possible configurations of the computing components based on each of the combinations of features of each of the computing components", which, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. That is, nothing in the claim elements precludes the step from practically being performed in the mind or with pen and paper; i.e., "creating" can be performed in the human mind with the aid of pen and paper. As such, this limitation falls within the "Mental Processes" grouping of abstract ideas. Accordingly, this limitation does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea or provide an inventive concept, and thus does not amount to significantly more than the abstract idea. As such, the claim fails both Step 2A Prong 2 and Step 2B. Therefore, claim 7 is ineligible.
Additionally, claim 8 recites "determining, based on the total weight of each of the configurations, a largest weight of the total weights of each configuration of all possible configurations", which, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. That is, nothing in the claim elements precludes the step from practically being performed in the mind or with pen and paper; i.e., "determining" can be performed in the human mind through observation, evaluation, judgment, and opinion with the aid of pen and paper. As such, this limitation falls within the "Mental Processes" grouping of abstract ideas. Accordingly, this limitation does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea or provide an inventive concept, and thus does not amount to significantly more than the abstract idea. As such, the claim fails both Step 2A Prong 2 and Step 2B. Therefore, claim 8 is ineligible.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
7. Claims 1-4, 6-12, and 14-20 are rejected under 35 U.S.C. 103 as being unpatentable over Zejda et al. (US Patent No. 12,093,806 B1, hereinafter Zejda) in view of Wu et al. (US Pub. No. 2022/0374765 A1, hereinafter Wu).
Regarding claim 1.
Zejda discloses
A computer-implemented method of determining a configuration of a computing product (determine automatically which partitioning scheme to apply – See col. 6, lines 42-65; the examiner respectfully notes that a partitioning scheme is interpreted as a configuration), the method including:
identifying a list of a plurality of computing components associated with the computing product (the number of processing units to utilize may be specified in a request, the partitioning scheme to apply to the neural network may be specified in a request – See col. 8, lines 1-5 and Fig. 2 and Fig. 9, list of inference accelerators and processors associated with the computer system), wherein the list of the plurality of computing components includes (a number of available processing units; e.g., 40 MB divided by 8 processing units may result in determining subgraph partitions that use approximately 5 MB of weight to be assigned to each processing unit – See col. 9, lines 14-17),
for each computing component of the plurality of computing components (different processing units – See col. 10, lines 13-19. Each processing unit – See col. 10, lines 45-54):
retrieving, from a storage device (storage device – See col. 14, lines 3-15) and for each feature of the computing component, a pre-defined weight associated with the feature (access or receive the number and/or configuration of tensor processing units – see col. 7, lines 7-9. Different features of the operations in a subgraph may be considered, such as the number of weights used in the operations (as discussed below with regard to FIGS. 5, 7, and 8)… weight or operation number thresholds may be specified as a percentage of dedicated cache in order to allow the assignment of subgraphs to processing units to be altered from one neural network to another – See col. 8, lines 8-26. Tensor processing unit features – col. 7, lines 20-24);
determining, based on the pre-defined weight associated with each feature of the computing component (different features of the operations in a subgraph may be considered, such as the number of weights used in the operations – See col. 4, lines 2-5. Subgraph partitioning may determine features, such as expected or predicted execution time of operations, size or number of weights, number of operations, or other measure of subgraphs for evaluation with respect to partitioning schemes – See col. 6, lines 46-51),[[ i) a maximum pre-defined weight of the pre-defined weights associated with respective features for the computing component and ii) a minimum pre-defined weight of the pre-defined weights associated with respective features for the computing component;
determining, based on the maximum pre-defined weight and the minimum pre-defined weight, a plurality of combinations of the features for the computing component;]]
creating a plurality of configurations of the computing components based on each of the combinations of features of each of the computing components (boundaries between various components and operations are somewhat arbitrary, and particular operations are illustrated in the context of specific illustrative configurations. Other allocations of functionality are envisioned and may fall within the scope of claims that follow. Finally, structures and functionality presented as discrete components in the exemplary configurations may be implemented as a combined structure or component– See col. 13, lines 52-67. Different configuration components may be implemented – See col. 4, lines 31-39);
for each configuration of the plurality of configurations (different numbers of components or configuration of components may be implemented – See col. 4, lines 35-36. Determine from the received neural network (e.g., a framework or request parameter specified by a programmer/client/user) the configuration, capabilities, or number of tensor processing units to consider when partitioning, or may automatically determine the configuration, capabilities, or number of tensor processing units to consider – See col. 6, lines 56-62):
determining a total weight of the configuration based on the pre-defined weights of the combination of features for each of the computing components of the configuration (the weight subgraph threshold may estimate (or determine) other cache storage needs when evaluating the size of weight values (e.g., IFMAPs determined according to a scheduler) so that the total of the size of weight values and other cache storage needs does not exceed the capacity of the cache (e.g., size of weights+IFMAPs>12 MB) – See col. 9, line 67 and col. 10, lines 1-6); and
updating, based on the total weight for the configuration (the weight subgraph threshold may estimate (or determine) other cache storage needs when evaluating the size of weight values (e.g., IFMAPs determined according to a scheduler) so that the total of the size of weight values and other cache storage needs does not exceed the capacity of the cache (e.g., size of weights+IFMAPs>12 MB) – See col. 9, line 67 and col. 10, lines 1-6), product specification data associated with the computing product (subgraph partitioning may determine features, such as expected or predicted execution time of operations, size or number of weights, number of operations, or other measure of subgraphs for evaluation with respect to partitioning schemes. Subgraph partitioning 318 may include the instructions or update parameters or other state for the compilation of a neural network to include instructions that assign the subgraphs to different tensor processing units 270 and thus statically allocate the corresponding weights of the assigned subgraphs – See col. 6, lines 46-62).
Zejda does not disclose
i) a maximum pre-defined weight of the pre-defined weights associated with respective features for the computing component and ii) a minimum pre-defined weight of the pre-defined weights associated with respective features for the computing component;
determining, based on the maximum pre-defined weight and the minimum pre-defined weight, a plurality of combinations of the features for the computing component;
Wu discloses
determining, based on the pre-defined weight associated with each feature of the computing component (determination of a weight associated with each of subset of features based on the dimension model, identification of a predetermined number of features associated with the highest weights, and generation, for each dimension reduction model, of a data structure comprising the predetermined number of features and the weight associated with each of the predetermined number of features – See Abstract), i) a maximum pre-defined weight of the pre-defined weights associated with respective features for the computing component (a predetermined number (i.e., n.sub.top) of most-important (i.e., highest-weighted) features are selected and stored at S750 along with their associated weights – See paragraph [0040]) and ii) a minimum pre-defined weight of the pre-defined weights associated with respective features for the computing component (the features are sorted based on the determined weights at S740, with those features associated with higher percentages being listed higher than features associated with lesser/minimum percentages – See paragraph [0040]);
determining, based on the maximum pre-defined weight and the minimum pre-defined weight, a plurality of combinations of the features for the computing component (FIG. 8 is a tabular example of [n.sub.top×n.sub.repeat] data structure 800 output at S770 according to some embodiments. According to data structure 800, n.sub.top=3 and n.sub.repeat=4. Each column of data structure 800 corresponds to the n.sub.top most-important features determined for iteration I and each row n represents the n-th most-important feature of each iteration I. – See Fig. 3. At S910, for every feature which appears in data structure 800, all weights attributed to that feature are summed. For example, data structure 800 associates feature F.sub.1 with weights 65%, 40% and 58%. Accordingly, a total weight of 163% is determined at S910 for feature F.sub.1 – See paragraphs [0040-0044]. Such logic may be constrained by predefined maximum and/or minimum numbers of selected features – See paragraph [0048]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Wu's teaching into Zejda's invention because doing so would enhance Zejda by indicating features that are selected and constrained by predefined maximum and/or minimum numbers of selected features, as suggested by Wu (paragraph [0048]).
Regarding claim 2, the computer-implemented method of claim 1, further including:
Zejda discloses
identifying one or more physical constraints of the computing product (partitions into subgraphs may be restricted to contiguous subgraphs, whereas in other partitioning schemes – See col. 8, lines 15-18); and
creating the plurality of configurations of the computing components (different partitioning schemes may be applied to identify different subgraphs of the neural network 110, such as subgraphs 116a, 116b, and 116n – See col. 3, line 67 and col. 4, lines 1-2) based on i) each of the combinations of features of each of the computing components (partitioning schemes that combine multiple or subgraphs (including non-contiguous subgraphs) assigned to a processing unit, may also achieve high performance and utilization of processing units (with potentially less performant compilation times) – See col. 9, lines 21-27) and ii) the physical constraints of the computing product (partitions into subgraphs 116 may be restricted to contiguous subgraphs, whereas in other partitioning schemes (as discussed below with regard to FIG. 8) non-contiguous sub-graphs may be used – See col. 4, lines 10-14).
Regarding claim 3, the computer-implemented method of claim 2,
Zejda discloses
wherein creating the plurality of configurations of the computing components includes (more subgraphs than processing units can be created in order to provide many possible, fine-grained combinations of assigning subgraphs to better balance workload (e.g., ops and/or time) with cache utilization for weights. In assignments made according to non-contiguous partitioning schemes, each processing unit can execute multiple programs – See col. 13, lines 13-20):
iteratively, for each configuration (subgraph partitioning scheme to apply or may determine automatically which partitioning scheme to apply – See col. 6, lines 62-65):
creating the configuration based on the combination of features of each of the computing components (partitioning schemes that combine multiple or subgraphs (including non-contiguous subgraphs) assigned to a processing unit, may also achieve high performance and utilization of processing units (with potentially less performant compilation times) – See col. 9, lines 21-25. Different partitioning schemes may be applied in different scenarios for determining and assigning subgraphs to different processing units. Some partitioning schemes, such as weight-based partitioning, may offer fast compilation speeds suitable for Just In Time (JIT) compilation or fast prototyping of a neural network to be performed on a new hardware design – See col. 9, lines 5-27);
identifying the physical constraints associated with the computing components of the configuration (one feature considered as part of a partitioning scheme. In some embodiments, the size of weight values may be determined according to the type of operator. In some embodiments, weight-balanced partitioning may divide a total weight size of a neural network into a number of available processing units (e.g., 40 MB divided by 8 processing units may result in determining subgraph partitions that use approximately 5 MB of weight to be assigned to each processing unit) – See col. 9, lines 5-45); and
updating the configuration based on the physical constraints associated with the computing components of the configuration (the partitioning scheme may assign partitions to processing units with different capabilities (e.g., in terms of processing or cache storage capacity). Thus, the assignment of a large subgraph with large weight size may be assigned to a correspondingly large processing unit capable of performing the assignment – See col. 8, lines 27-42).
Regarding claim 4, the computer-implemented method of claim 3,
Zejda discloses
wherein creating the plurality of configurations is performed by a machine learning model (neural network models for implementing machine learning inference may include a graph that describes the computational dataflow between various operations and weight values (“weights”) that are applied as part of the operations …. a neural network model trained with 16 bit weights may use 40 MB of weight – See col. 3, lines 1-16).
Regarding claim 6, the computer-implemented method of claim 1,
Zejda discloses
wherein determining the plurality of combinations of the features for the computing component includes (partitioning schemes, such as partitioning schemes that combine multiple or subgraphs (including non-contiguous subgraphs) assigned to a processing unit, may also achieve high performance and utilization of processing units (with potentially less performant compilation times), and thus may be suitable for neural networks with unbalanced computation and data workloads – see col. 9, lines 20-27):
determining, for each combination of features, a weight of the combination of features (the size of weight values may be determined according to the type of operator. In some embodiments, weight-balanced partitioning may divide a total weight size of a neural network into a number of available processing units (e.g., 40 MB divided by 8 processing units may result in determining subgraph partitions that use approximately 5 MB of weight to be assigned to each processing unit) – See col. 9, lines 30-37); and
Zejda does not disclose
determining, for each combination of features, that the weight of the combination of features is between the maximum pre-defined weight and the minimum pre-defined weight.
Wu discloses
determining, for each combination of features, that the weight of the combination of features is between the maximum pre-defined weight and the minimum pre-defined weight (M may be a pre-defined number or may be based on the distribution of occurrences/average weights determined for the features. For example, if five features are associated with very high numbers of occurrences/average weights relative to the remaining features, this distribution may indicate that these five features should be selected at S940/S960. Such logic may be constrained by predefined maximum and/or minimum numbers of selected features, or any other suitable rules – See paragraph [0048]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Wu's teaching into Zejda's invention because doing so would enhance Zejda by enabling determination of the features that are associated with high weights relative to the other features, as suggested by Wu (paragraph [0048]).
Regarding claim 7, the computer-implemented method of claim 1,
Zejda discloses
wherein creating the plurality of configurations of the computing components includes creating all possible configurations of the computing components based on each of the combinations of features of each of the computing components (the different subgraphs may be assigned to different processing units according to the partitioning scheme as part of generating instructions to execute the neural network across the processing units – See col. 8, lines 27-30. More subgraphs than processing units can be created in order to provide many possible, fine-grained combinations of assigning subgraphs to better balance workload (e.g., ops and/or time) with cache utilization for weights – See col. 13, lines 13-28).
Regarding claim 8, the computer-implemented method of claim 7, further including:
Zejda discloses
determining, based on the total weight of each of the configurations, a largest weight of the total weights of each configuration of all possible configurations (weight-balanced partitioning may divide a total weight size of a neural network by a number of available processing units (e.g., 40 MB divided by 8 processing units may result in determining subgraph partitions that use approximately 5 MB of weight to be assigned to each processing unit) – see col. 9, lines 28-37. Possible contiguous and non-contiguous groupings of the subgraphs may be evaluated when assigned to different processing units to balance feature(s) of the groups (e.g., weight, operations, and/or expected time) across the processing units. Given the amount of storage used to hold large numbers of weights for application in the neural network, a dedicated cache for a single processing unit, such as an L2 cache for a single CPU core, GPU, or hardware accelerator like inference accelerator 220 discussed below in FIG. 2, may be unable to store all of the weights used to execute a neural network – See col. 3, lines 17-23. The operations of the layers of the neural network may be divided into subgraphs according to a partitioning scheme, in some embodiments. For example, different features of the operations in a subgraph may be considered, such as the number of weights used in the operations (as discussed below with regard to FIGS. 5, 7, and 8), the number of operations in a subgraph (as discussed below with regard to FIGS. 6 and 8), and the expected/predicted execution time for operations in a subgraph (as discussed below with regard to FIGS. 7 and 8), among other partitioning schemes – See col. 8, lines 6-16).
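For the applicant's convenience, the weight-balanced arithmetic Zejda describes at col. 9, lines 28-37 (a 40 MB total divided across 8 processing units targets roughly 5 MB per unit) can be sketched as follows; the greedy largest-first heuristic below is an illustrative balancing strategy only, not Zejda's actual partitioning scheme:

```python
def weight_balanced_target(total_weight_mb, num_units):
    """Divide the total weight size of a neural network evenly across the
    available processing units (e.g., 40 MB / 8 units -> 5 MB per unit)."""
    return total_weight_mb / num_units

def assign_subgraphs(subgraph_weights, num_units):
    """Greedily assign each subgraph to the currently lightest processing
    unit so per-unit weight stays near the balanced target (an illustrative
    heuristic, not the reference's exact scheme)."""
    loads = [0.0] * num_units
    assignment = [[] for _ in range(num_units)]
    # Place heaviest subgraphs first to keep the final loads close together.
    for idx, w in sorted(enumerate(subgraph_weights), key=lambda p: -p[1]):
        u = loads.index(min(loads))
        loads[u] += w
        assignment[u].append(idx)
    return assignment, loads
```

The total weight of each resulting configuration can then be compared to find the largest per-unit weight, as in the claim limitation above.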
Regarding claim 9.
An information handling system comprising a processor having access to memory media storing instructions executable by the processor to perform operations, comprising:
Regarding claim 9, it recites the same limitations as rejected claim 1 above.
Regarding claim 10, it recites the same limitations as rejected claim 2 above.
Regarding claim 11, it recites the same limitations as rejected claim 3 above.
Regarding claim 12, it recites the same limitations as rejected claim 4 above.
Regarding claim 14, it recites the same limitations as rejected claim 6 above.
Regarding claim 15, it recites the same limitations as rejected claim 7 above.
Regarding claim 16, it recites the same limitations as rejected claim 8 above.
Regarding claim 17.
A non-transitory computer-readable medium storing software comprising instructions executable by one or more computers which, upon such execution, cause the one or more computers to perform operations comprising:
Regarding claim 17, it recites the same limitations as rejected claim 1 above.
Regarding claim 18, it recites the same limitations as rejected claim 2 above.
Regarding claim 19, it recites the same limitations as rejected claim 3 above.
Regarding claim 20, it recites the same limitations as rejected claim 4 above.
8. Claims 5 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Zejda and Wu as applied to claims 4 and 12, respectively, above, and further in view of Bikumala et al. (US Pub. No. 2021/0241337 A1 – hereinafter Bikumala).
Regarding claim 5, the computer-implemented method of claim 4,
Bikumala discloses
wherein the machine learning model is a recurrent neural network model (Long short-term memory (LSTM) is an artificial recurrent neural network (RNN) architecture used in the field of deep learning. Unlike standard feedforward neural networks, LSTM has feedback connections and can not only process single data points, but also entire sequences of data – See paragraph [0052]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Bikumala's teaching into Zejda's and Wu's inventions because doing so would enhance Zejda and Wu by enabling them to perform deep learning to predict features, as suggested by Bikumala (paragraph [0047]).
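As a didactic aside on the recurrent architecture Bikumala references in paragraph [0052], the feedback connections that let an LSTM process entire sequences rather than single data points can be sketched with scalar gates; real LSTMs use weight matrices and learned parameters, so the scalar weights below are purely hypothetical:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One LSTM cell step on scalar values. The feedback connection is the
    (h_prev, c_prev) state carried in from the previous time step."""
    i = sigmoid(w['wi'] * x + w['ui'] * h_prev)    # input gate
    f = sigmoid(w['wf'] * x + w['uf'] * h_prev)    # forget gate
    o = sigmoid(w['wo'] * x + w['uo'] * h_prev)    # output gate
    g = math.tanh(w['wg'] * x + w['ug'] * h_prev)  # candidate cell state
    c = f * c_prev + i * g                         # new cell state
    h = o * math.tanh(c)                           # new hidden state (fed back)
    return h, c

def lstm_run(sequence, w):
    """Process an entire sequence of data points, carrying state between
    steps - the property distinguishing an RNN from a feedforward network."""
    h, c = 0.0, 0.0
    for x in sequence:
        h, c = lstm_step(x, h, c, w)
    return h
```

Unlike a standard feedforward network, the output at each step depends on all earlier inputs through the carried state, which is the sequence-processing behavior the cited paragraph describes.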
Regarding claim 13, it recites the same limitations as rejected claim 5 above.
Conclusion
9. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Tang et al. (US Pub. No. 2022/0004959 A1) discloses determining a plurality of feature weight vectors, wherein each of the plurality of feature weight vectors comprises a plurality of feature weights corresponding to a plurality of features for prefiltering rider-driver pairs – See Abstract and specification for more details.
Dande et al. (US Pub. No. 2024/0144076 A1) discloses determining, via a machine learning (ML) subsystem, a centralized governance dataset based upon the first component metadata and the second component metadata. The method further includes generating a representation of the centralized governance dataset that is accessible by one or more of the first distributed computing component or the second distributed computing component – See Abstract and specification for more details.
Sid et al. (US Pub. No. 2023/0385261 A1) discloses training an index filter model to identify signals in the labeled training data, the signals being indicative of a potential performance improvement associated with using an index configuration for a given query; training the index filter model to learn rules over the signals for identifying spurious indexes; and storing the index filter model in a memory – See Abstract and specification for more details.
Schibler et al. (US Pub. No. 2019/0312800 A1) discloses several optimization modes: performance (optimize performance only), achieved by using a fixed-cost cost model with either of the above scoring functions, or by setting the cost weight w2 to 0 using the weighted linear scoring function; performance with maximum cost (optimize for performance within a maximum cost boundary), achieved using a maximum cost application-scoped boundary condition enforced by the environment controller; and cost with minimum performance (optimize for minimum cost within a minimum performance boundary) – See paragraphs [0621-0623].
Smith et al. (US Patent No. 12165026 B1) discloses determine a plurality of characteristic features corresponding to occurrences in the series of non-adjacent occurrences using a feature learning algorithm; generate a plurality of potential projected occurrences as a function of the plurality of characteristic features; weight each potential projected occurrence of the plurality of potential projected occurrences as a function of at least an optimization constraint in the process data; and select at least one projected occurrence as a function of the weighted plurality of potential projected occurrences – See col. 1, lines 32-41.
Tang et al. (US Pub. No. 2022/0004959 A1) discloses training a surrogate model based on the plurality of feature weight vectors in the feature weight matrix and the plurality of scores; constructing an optimization model comprising the surrogate model as an objective function; determining the optimal feature weight vector by solving the optimization model; and prefiltering pending rider-driver pairs in the ride-hailing platform based on the optimal feature weight vector – See Abstract and specification for more details.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MONGBAO NGUYEN whose telephone number is (571)270-7180. The examiner can normally be reached Monday-Friday 8am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Hyung S. Sough can be reached at 571-272-6799. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MONGBAO NGUYEN/ Examiner, Art Unit 2192