Prosecution Insights
Last updated: April 19, 2026
Application No. 17/848,226

METHODS, SYSTEMS, ARTICLES OF MANUFACTURE AND APPARATUS TO IMPROVE NEURAL ARCHITECTURE SEARCHES

Final Rejection: §101, §102, §103, §112
Filed: Jun 23, 2022
Examiner: GORMLEY, AARON PATRICK
Art Unit: 2148
Tech Center: 2100 — Computer Architecture & Software
Assignee: Intel Corporation
OA Round: 2 (Final)

Grant Probability: 60% (Moderate)
OA Rounds: 3-4
To Grant: 4y 4m
With Interview: 0%

Examiner Intelligence

Career Allow Rate: 60% (grants 60% of resolved cases; 3 granted / 5 resolved; +5.0% vs TC avg)
Interview Lift: -60.0% (minimal; based on resolved cases with interview)
Avg Prosecution: 4y 4m (typical timeline)
Total Applications: 35 across all art units (30 currently pending)
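The arithmetic behind the two headline figures above (60% career allow rate, -60.0% interview lift) can be reproduced directly from the counts shown; the helper names below are hypothetical, not the analytics vendor's actual formulas.

```python
# Illustrative arithmetic for the examiner metrics above.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate: share of resolved cases that were granted."""
    return granted / resolved

def interview_lift(rate_with_interview: float, rate_without: float) -> float:
    """Percentage-point change in allow rate when an interview is held."""
    return rate_with_interview - rate_without

career = allow_rate(3, 5)                  # 3 granted / 5 resolved -> 0.60
lift = interview_lift(0.0, career)         # 0% with interview vs 60% baseline
print(f"Career allow rate: {career:.0%}")  # Career allow rate: 60%
print(f"Interview lift: {lift:+.1%}")      # Interview lift: -60.0%
```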

Statute-Specific Performance

§101: 30.2% (-9.8% vs TC avg)
§103: 36.0% (-4.0% vs TC avg)
§102: 8.4% (-31.6% vs TC avg)
§112: 21.5% (-18.5% vs TC avg)

Tech Center averages are estimates; based on career data from 5 resolved cases.
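The per-statute deltas above are just the examiner's rate minus the Tech Center average estimate; a small sketch reproducing them, with the TC-average figures written as literals implied by the deltas shown (each works out to 40.0%).

```python
# Sketch of how the statute-specific deltas can be derived.
# TC-average values are the dashboard's estimates, reproduced as literals.

examiner = {"101": 0.302, "103": 0.360, "102": 0.084, "112": 0.215}
tc_avg   = {"101": 0.400, "103": 0.400, "102": 0.400, "112": 0.400}

for statute, rate in examiner.items():
    delta = rate - tc_avg[statute]
    # e.g. first line printed: §101: 30.2% (-9.8% vs TC avg)
    print(f"§{statute}: {rate:.1%} ({delta:+.1%} vs TC avg)")
```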

Office Action

Rejections under §§ 101, 102, 103, and 112
DETAILED ACTION

This action is in response to the application filed 06/23/2022. Claims 1-7, 10-23, and 28-31 are pending and have been examined.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claim 1 is objected to because of the following informalities: “historical benchmark metrics characteristics associated with the candidate neural networks”, particularly “metrics characteristics”, is improper grammar. Appropriate correction is required.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C.
112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

The claims that invoke 35 U.S.C. 112(f) are:

Claim 1:
Limitation 1: “means for determining candidate neural networks for a neural architecture search corresponding to a combination of target platform characteristics, workload characteristics to be executed by a target platform, and historical benchmark metrics characteristics associated with the candidate neural networks”
Limitation 2: “means for determining a dataset type and a task type associated with the target platform”
Limitation 3: “means for identifying one or more of the candidate neural networks associated with a prior neural architecture search involving a combination of the dataset type and the task type”
Limitation 4: “means for identifying features to: identify features associated with the one or more of the candidate neural networks, the features including one or more performance indicators; and extract patterns from the one or more of the candidate neural networks based on the features using a machine learning model”
Limitation 5: “means for selecting, based on the extracted patterns, a candidate neural network from the one or more of the candidate neural networks, the selected candidate neural network to be executed with the target platform”

Claim 2: “wherein the means for identifying features is to identify (a) first tier values as performance metrics corresponding to an upper threshold and (b) second tier values as performance metrics corresponding to a lower threshold.”
Claim 3: “further including means for performing benchmarking tests to initiate benchmarking tests corresponding to operation information extracted from the candidate neural networks.”
Claim 4: “wherein the means for performing benchmarking
tests is to initiate the benchmarking tests corresponding to the target platform to determine third performance metrics”. Claims 5-7 invoke 112(f) through dependence.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-7 and 28-29 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Limitations reciting the use of a “means” or equivalent generic placeholder that is modified by functional language, and not modified by sufficient structure within the claim, are interpreted as means-plus-function limitations under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (MPEP 2181(I) A.). For limitations interpreted under 35 U.S.C. 112(f) using means-plus-function language, the structure of the “means” or the equivalent generic placeholder substitute must be disclosed in the specification itself in a way that one skilled in the art will understand what structure will perform the recited function (MPEP 2181 (II.) A.). Additionally, for a computer-implemented means-plus-function limitation interpreted under 35 U.S.C. 112(f), the specification must disclose an algorithm for performing the claimed specific computer function (MPEP 2181 (II.) A.).
Failure to adequately disclose either the structure or algorithm in sufficient detail in the specification for a computer-implemented means-plus-function limitation renders the claim indefinite under 35 U.S.C. 112(b). As noted in the claim interpretation section above, the claims invoke 35 U.S.C. 112(f) through their use of ‘means’.

Regarding claim 1:

Regarding “means for determining candidate neural networks for a neural architecture search corresponding to a combination of target platform characteristics, workload characteristics to be executed by a target platform, and historical benchmark metrics characteristics associated with the candidate neural networks”: while the instant specification discloses “an apparatus to identify candidate networks, including at least one memory, machine readable instructions, and processor circuitry to at least one of instantiate or execute the machine readable instructions to determine candidate networks corresponding to a combination of target platform characteristics, workload characteristics to be executed by a target platform, and historical benchmark metrics characteristics associated with the candidate networks” ([0095]), no meaningful structure is defined for this ‘apparatus’ or ‘processor circuitry’ to perform these functions.

Regarding “means for determining a dataset type and a task type associated with the target platform”: while the instant specification discloses similarity verification circuitry obtaining dataset type and task type information from a network knowledge database ([0034]), it does not disclose any meaningful structure for the ‘similarity verification circuitry’ or the ‘network knowledge database’.
Regarding “means for identifying one or more of the candidate neural networks associated with a prior neural architecture search involving a combination of the dataset type and the task type”: while the instant specification discloses using similarity verification circuitry to identify candidate networks based on target platform type, target workload type, and historical benchmarks ([0086]), it does not disclose performing this identification with candidate networks associated with a prior architecture search, as claimed. Thus, the specification does not appear to disclose this function or any structures to perform it.

Regarding “means for identifying features to: identify features associated with the one or more of the candidate neural networks, the features including one or more performance indicators; and extract patterns from the one or more of the candidate neural networks based on the features using a machine learning model”: while the instant specification discloses identifying first and second features associated with first and second networks selected from first and second performance metrics ([0086]), and extracting patterns that identify candidate architectures, it does not disclose identifying features that include performance metrics, nor does it disclose extracting patterns from candidate networks based on the features. Thus, the specification does not appear to disclose this function or any structures to perform it.
Regarding “means for selecting, based on the extracted patterns, a candidate neural network from the one or more of the candidate neural networks, the selected candidate neural network to be executed with the target platform”: while the instant specification discloses feeding the first and second features into a network analyzer which then determines one or more candidate networks to execute on the target platform ([0095]), it does not disclose using patterns extracted from the candidate networks based on features to select a candidate neural network. Thus, the specification does not appear to disclose this function or any structures to perform it.

Regarding claim 2:

Regarding “means for identifying features is to identify (a) first tier values as performance metrics corresponding to an upper threshold and (b) second tier values as performance metrics corresponding to a lower threshold”: while the instant specification discloses using example likelihood verification circuitry to categorize performance metrics into a first upper tier and a second lower tier ([0038]), no meaningful structure is defined for the ‘example likelihood verification circuitry’ that would enable this function.

Regarding claim 3:

Regarding “means for performing benchmarking tests to initiate benchmarking tests corresponding to operation information extracted from the candidate neural networks”: while the instant specification discloses using example benchmark evaluation circuitry to calculate updated performance metrics that can include operation information (latency, accuracy, power, etc.) ([0042]), it does not provide any meaningful structure for the ‘benchmark evaluation circuitry’ that would enable this function.
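Read together, the five 112(f) limitations of claim 1 describe a candidate-selection pipeline. For readers outside patent practice, a minimal sketch of what that claim language covers; every name, the data, and the scoring stand-in below are hypothetical illustrations, not the applicant's disclosed implementation.

```python
# Hypothetical reading of claim 1's recited functions as a selection pipeline.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    dataset_type: str
    task_type: str
    from_prior_search: bool
    metrics: dict  # performance indicators, e.g. {"latency_ms": 8.0, "accuracy": 0.90}

def select_candidate(candidates, dataset_type, task_type):
    # Identify candidates from a prior search matching the dataset/task combination.
    pool = [c for c in candidates
            if c.from_prior_search
            and c.dataset_type == dataset_type
            and c.task_type == task_type]
    # "Identify features ... including one or more performance indicators."
    features = {c.name: (c.metrics["accuracy"], c.metrics["latency_ms"]) for c in pool}
    # "Extract patterns ... using a machine learning model" -- stood in for here by a
    # simple accuracy/latency score; the claim does not limit the model type.
    def score(c):
        acc, lat = features[c.name]
        return acc - 0.01 * lat
    # Select the candidate to be executed with the target platform.
    return max(pool, key=score)

nets = [
    Candidate("resnet-ish", "image", "classification", True,
              {"latency_ms": 20.0, "accuracy": 0.93}),
    Candidate("mobile-ish", "image", "classification", True,
              {"latency_ms": 8.0, "accuracy": 0.90}),
    Candidate("unrelated", "text", "translation", True,
              {"latency_ms": 5.0, "accuracy": 0.80}),
]
print(select_candidate(nets, "image", "classification").name)  # mobile-ish
```

The examiner's point is that the specification must disclose an algorithm of at least this specificity for each "means"; generic processor hardware alone does not supply it.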
Regarding claim 4:

Regarding “means for performing benchmarking tests is to initiate the benchmarking tests corresponding to the target platform [[type]] to determine third performance metrics”: while the specification discloses using benchmark evaluation circuitry to run benchmarking tests corresponding to a target platform type to determine third performance metrics ([0089]), no meaningful structure is defined for the ‘benchmark evaluation circuitry’ that would enable this function.

While paragraph [0046] of the instant specification broadly states that the apparatus and its various included means for performing functions can be implemented with generic processors, generic microprocessors, and/or generic FPGAs, this is extremely general computer hardware, insufficient for one of ordinary skill in the art to understand what specific structures are required to perform the specific recited functions. Thus, claims 1-4 are considered indefinite and are rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph. These deficiencies are inherited by child claims 5-7.

The terms “highest” and “lowest” in claims 28 and 29 are relative terms which render the claims indefinite. These terms are not defined by the claims, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. Paragraph [0059] of the instant specification, upon which these claims seem to rely, only specifies that these are relative measures of performance. It is unclear what ranges of values or percentages define either the “lowest” or “highest” classes of metric values. The ‘highest’ class of metrics is interpreted as metrics with values better (by some correctness criteria) than metrics in the ‘lowest’ class.

The following is a quotation of the first paragraph of 35 U.S.C.
112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-7, 10-23, and 28-31 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Limitations reciting the use of a “means” or equivalent generic placeholder that is modified by functional language, and not modified by sufficient structure within the claim, are interpreted as means-plus-function limitations under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (MPEP 2181(I) A.). For limitations interpreted under 35 U.S.C. 112(f) using means-plus-function language, the written description under 35 U.S.C.
112(a) must adequately link or associate particular structure, material, or acts to perform the function, or it must be clear based on the facts of the application that one skilled in the art would have known what structure, material, or acts disclosed in the specification perform the recited function (MPEP 2163(II) A. (3)).

Claims 1-7 recite computer-implemented means-plus-function limitations. As noted above, these claims are rejected under 35 U.S.C. 112(b) as being indefinite for failing to adequately disclose the corresponding structures or algorithms in sufficient detail in the specification. When a claim containing a computer-implemented 35 U.S.C. 112(f) claim limitation is found to be indefinite under 35 U.S.C. 112(b) for failure to disclose sufficient corresponding structure in the specification that performs the entire claimed function, it will also lack written description under 35 U.S.C. 112(a). See MPEP § 2163.03, subsection VI. Thus, these claims are rejected under 35 U.S.C. 112(a) for lack of written description.

Additionally, claim 1 recites new matter not disclosed in the instant specification.

Regarding “means for identifying one or more of the candidate neural networks associated with a prior neural architecture search involving a combination of the dataset type and the task type”: while the instant specification discloses using similarity verification circuitry to identify candidate networks based on target platform type, target workload type, and historical benchmarks ([0086]), it does not disclose performing this identification with candidate networks associated with a prior architecture search, as claimed.
Regarding “means for identifying features to: identify features associated with the one or more of the candidate neural networks, the features including one or more performance indicators; and extract patterns from the one or more of the candidate neural networks based on the features using a machine learning model”: while the instant specification discloses identifying first and second features associated with first and second networks selected from first and second performance metrics ([0086]), and extracting patterns that identify candidate architectures, it does not disclose identifying features that include performance metrics, nor does it disclose extracting patterns from candidate networks based on the features.

Regarding “means for selecting, based on the extracted patterns, a candidate neural network from the one or more of the candidate neural networks, the selected candidate neural network to be executed with the target platform”: while the instant specification discloses feeding the first and second features into a network analyzer which then determines one or more candidate networks to execute on the target platform ([0095]), it does not disclose using patterns extracted from the candidate networks based on features to select a candidate neural network.

Thus, the specification does not appear to disclose claim 1 in its entirety. Substantially similar independent claims 10 and 19 are rejected under this same rationale. This deficiency is inherited by all dependent claims.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-7, 10-23, and 28-31 are rejected under 35 U.S.C. 101 because the claimed inventions are directed to non-statutory subject matter without significantly more.
Claim 1

Step 1: The claim recites “An apparatus”, and is therefore directed to the statutory category of article of manufacture.

Step 2A Prong 1: The claim recites the following judicial exception(s):

means for determining candidate neural networks for a neural architecture search corresponding to a combination of target platform characteristics, workload characteristics to be executed by a target platform, and historical benchmark metrics characteristics associated with the candidate neural networks: This can be performed as a mental process. One can merely observe the performance metrics for a set of neural networks executed on a target platform, selecting a subset within a particular score range.

means for determining a dataset type and a task type associated with the target platform: This can be performed as a mental process. One can merely think of what task and types of data might be most suitable for a given target platform.

means for identifying one or more of the candidate neural networks associated with a prior neural architecture search involving a combination of the dataset type and the task type: This can be performed as a mental process. One can merely observe a candidate neural network that was devised from a previously conducted neural architecture search.

means for identifying features to: identify features associated with the one or more of the candidate neural networks, the features including one or more performance indicators: This can be performed as a mental process. One can merely gauge the performance of each candidate neural network based on observed readings.

extract patterns from the one or more of the candidate neural networks based on the features using a machine learning model: This can be performed as a mental process. One can merely observe patterns within the features.
means for selecting, based on the extracted patterns, a candidate neural network from the one or more of the candidate neural networks, the selected candidate neural network to be executed with the target platform: This can be performed as a mental process. One can merely choose a candidate neural network they wish to be executed on the target platform, based on the features/patterns.

Step 2A Prong 2: The judicial exception(s) are not integrated into a practical application through the following additional element(s):

means for determining candidate neural networks for a neural architecture search corresponding to a combination of target platform characteristics, workload characteristics to be executed by a target platform, and historical benchmark metrics characteristics associated with the candidate neural networks: As noted in the rejections under 35 U.S.C. 112, the specification fails to disclose any meaningful structure for any of its ‘means’ limitations that invoke 35 U.S.C. 112(f). The only structure given is generic computer hardware. Thus, this and similar limitations amount to mere instruction to apply a judicial exception with generic computer hardware (MPEP 2106.05(f)).

means for determining a dataset type and a task type associated with the target platform: This is mere instruction to execute a judicial exception with generic computer hardware (MPEP 2106.05(f)).

means for identifying one or more of the candidate neural networks associated with a prior neural architecture search involving a combination of the dataset type and the task type: This is mere instruction to execute a judicial exception with generic computer hardware (MPEP 2106.05(f)).

means for identifying features to: identify features associated with the one or more of the candidate neural networks, the features including one or more performance indicators: This is mere instruction to execute a judicial exception with generic computer hardware (MPEP 2106.05(f)).
extract patterns from the one or more of the candidate neural networks based on the features using a machine learning model: This is mere instruction to execute a judicial exception with generic computer hardware (means) and a generic data structure (machine learning model) (MPEP 2106.05(f)).

means for selecting, based on the extracted patterns, a candidate neural network from the one or more of the candidate neural networks, the selected candidate neural network to be executed with the target platform: This is mere instruction to execute a judicial exception with generic computer hardware (MPEP 2106.05(f)).

Step 2B: The following additional element(s) of the claim, taken alone or in combination, do not amount to significantly more than the recited judicial exception(s):

means for determining candidate neural networks for a neural architecture search corresponding to a combination of target platform characteristics, workload characteristics to be executed by a target platform, and historical benchmark metrics characteristics associated with the candidate neural networks: As noted in the rejections under 35 U.S.C. 112, the specification fails to disclose any meaningful structure for any of its ‘means’ limitations that invoke 35 U.S.C. 112(f). The only structure given is generic computer hardware. Thus, this and similar limitations amount to mere instruction to apply a judicial exception with generic computer hardware (MPEP 2106.05(f)).

means for determining a dataset type and a task type associated with the target platform: This is mere instruction to execute a judicial exception with generic computer hardware (MPEP 2106.05(f)).

means for identifying one or more of the candidate neural networks associated with a prior neural architecture search involving a combination of the dataset type and the task type: This is mere instruction to execute a judicial exception with generic computer hardware (MPEP 2106.05(f)).
means for identifying features to: identify features associated with the one or more of the candidate neural networks, the features including one or more performance indicators: This is mere instruction to execute a judicial exception with generic computer hardware (MPEP 2106.05(f)).

extract patterns from the one or more of the candidate neural networks based on the features using a machine learning model: This is mere instruction to execute a judicial exception with generic computer hardware (means) and a generic data structure (machine learning model) (MPEP 2106.05(f)).

means for selecting, based on the extracted patterns, a candidate neural network from the one or more of the candidate neural networks, the selected candidate neural network to be executed with the target platform: This is mere instruction to execute a judicial exception with generic computer hardware (MPEP 2106.05(f)).

Claim 2

Step 1: The claim recites an article of manufacture, as in claim 1.

Step 2A Prong 1: The claim recites the following further judicial exception(s):

identify (a) first tier values as performance metrics corresponding to an upper threshold and (b) second tier values as performance metrics corresponding to a lower threshold: This can be performed as a mental process. One can merely identify networks in the first group with high performance metric values and networks in the second group with low performance metric values.

Step 2A Prong 2: The judicial exception(s) are not integrated into a practical application through the further additional element(s):

the likelihood verification circuitry is to identify (a) first tier values as performance metrics corresponding to an upper threshold and (b) second tier values as performance metrics corresponding to a lower threshold: This is mere instruction to execute the judicial exceptions with generic computer hardware (MPEP 2106.05(f)).
Step 2B: The further additional element(s) of the claim, taken alone or in combination, do not amount to significantly more than the recited judicial exception(s):

the likelihood verification circuitry is to identify (a) first tier values as performance metrics corresponding to an upper threshold and (b) second tier values as performance metrics corresponding to a lower threshold: This is mere instruction to execute the judicial exceptions with generic computer hardware (MPEP 2106.05(f)).

Claim 3

Step 1: The claim recites an article of manufacture, as in claim 1.

Step 2A Prong 1: The claim recites the following further judicial exception(s):

means for performing benchmarking tests to initiate benchmarking tests corresponding to operation information extracted from the candidate neural networks: This can be performed as a mental process. One can merely count how long it takes for a candidate network to produce output.

Step 2A Prong 2: The judicial exception(s) are not integrated into a practical application through the further additional element(s):

means for performing benchmarking tests to initiate benchmarking tests corresponding to operation information extracted from the candidate neural networks: This is mere instruction to execute the judicial exceptions with generic computer hardware (MPEP 2106.05(f)).

Step 2B: The further additional element(s) of the claim, taken alone or in combination, do not amount to significantly more than the recited judicial exception(s):

means for performing benchmarking tests to initiate benchmarking tests corresponding to operation information extracted from the candidate neural networks: This is mere instruction to execute the judicial exceptions with generic computer hardware (MPEP 2106.05(f)).
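The two-tier split recited in claim 2 and the latency benchmarking underlying claims 3-5 can be made concrete; the thresholds, metric values, and the trivial workload below are hypothetical illustrations, not the disclosed circuitry.

```python
# Hypothetical sketch of claim 2's tier split and a latency benchmark
# (latency being one of the recited "third performance metrics").
import time

def tier_metrics(values, upper, lower):
    """Split metric values into first-tier (>= upper threshold)
    and second-tier (<= lower threshold) groups."""
    first = [v for v in values if v >= upper]
    second = [v for v in values if v <= lower]
    return first, second

def benchmark_latency(fn, *args, repeats=5):
    """Average wall-clock latency of a workload over several runs."""
    start = time.perf_counter()
    for _ in range(repeats):
        fn(*args)
    return (time.perf_counter() - start) / repeats

first, second = tier_metrics([0.95, 0.91, 0.62, 0.55], upper=0.90, lower=0.60)
print(first, second)  # [0.95, 0.91] [0.55]
latency_s = benchmark_latency(sum, range(10_000))
```

Note that 0.62 lands in neither tier under these thresholds, which is consistent with the examiner's indefiniteness concern about undefined ranges between "highest" and "lowest" classes.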
Claim 4

Step 1: The claim recites an article of manufacture, as in claim 3.

Step 2A Prong 1: The claim recites the following further judicial exception(s):

the means for performing benchmarking tests is to initiate the benchmarking tests corresponding to the target platform to determine third performance metrics: This can be performed as a mental process. One can merely count how long it takes for a candidate network to produce output while executing on the target platform.

Step 2A Prong 2: The judicial exception(s) are not integrated into a practical application through the further additional element(s):

the means for performing benchmarking tests is to initiate the benchmarking tests corresponding to the target platform to determine third performance metrics: This is mere instruction to execute the judicial exceptions with generic computer hardware (MPEP 2106.05(f)).

Step 2B: The further additional element(s) of the claim, taken alone or in combination, do not amount to significantly more than the recited judicial exception(s):

the means for performing benchmarking tests is to initiate the benchmarking tests corresponding to the target platform to determine third performance metrics: This is mere instruction to execute the judicial exceptions with generic computer hardware (MPEP 2106.05(f)).

Claim 5

Step 1: The claim recites an article of manufacture, as in claim 4.

Step 2A Prong 1: The claim recites the following further judicial exception(s):

the third performance metrics include at least one of latency, accuracy, power consumption or memory bandwidth: Initiating benchmarking tests can still be performed mentally. One can merely count how long it takes for a candidate network to produce output (latency) while executing on the target platform type.
Step 2A Prong 2: The judicial exception(s) are not integrated into a practical application through the additional element(s).

Step 2B: The additional element(s) of the claim, taken alone or in combination, do not amount to significantly more than the recited judicial exception(s).

Claim 6

Step 1: The claim recites an article of manufacture, as in claim 3.

Step 2A Prong 1: The claim recites the following further judicial exception(s):

wherein the operation information corresponds to at least one of an operation type, a kernel size or an input size: Initiating benchmarking tests can still be performed as a mental process. One can merely count how long it takes for a candidate network to produce output for some particular type of input-output operation.

Step 2A Prong 2: The judicial exception(s) are not integrated into a practical application through the additional element(s).

Step 2B: The additional element(s) of the claim, taken alone or in combination, do not amount to significantly more than the recited judicial exception(s).

Claim 7

Step 1: The claim recites an article of manufacture, as in claim 1.

Step 2A Prong 1: The claim recites the following further judicial exception(s):

wherein the features include at least one of network adjacency features, layer connection information, or network graph information: Identifying features can still be performed as a mental process. One need only observe a network to derive network adjacency, layer connection, and network graph info features.
Step 2A Prong 2: The judicial exception(s) are not integrated into a practical application through the additional element(s).

Step 2B: The additional element(s) of the claim, taken alone or in combination, do not amount to significantly more than the recited judicial exception(s).

Claim 10

Step 1: The claim recites “An apparatus”, and is therefore directed to the statutory category of article of manufacture.

Step 2A Prong 1: The claim recites the following judicial exception(s):

determine candidate neural networks for a neural architecture search corresponding to a combination of target platform characteristics, workload characteristics to be executed by a target platform, and historical benchmark metrics characteristics associated with the candidate neural networks: This can be performed as a mental process. One can merely observe the performance metrics for a set of neural networks executed on a target platform, selecting a subset within a particular score range.

determine a dataset type and a task type associated with the target platform: This can be performed as a mental process. One can merely think of what task and types of data might be most suitable for a given target platform.

identify one or more of the candidate neural networks associated with a prior neural architecture search involving a combination of the dataset type and the task type: This can be performed as a mental process. One can merely observe a candidate neural network that was devised from a previously conducted neural architecture search.

identify features associated with the one or more of the candidate neural networks, the features including one or more performance indicators: This can be performed as a mental process. One can merely gauge the performance of each candidate neural network based on observed readings.

extract patterns from the one or more of the candidate neural networks based on the features using a machine learning model: This can be performed as a mental process.
One can merely observe patterns within the features.

select, based on the extracted patterns, a candidate neural network from the one or more of the candidate neural networks, the selected candidate neural network to be executed with the target platform: This can be performed as a mental process. One can merely choose a candidate neural network they wish to be executed on the target platform, based on the features / patterns.

Step 2A Prong 2: The judicial exception(s) are not integrated into a practical application through the following additional element(s):

interface circuitry; machine readable instructions; and at least one processor circuit to be programmed by the machine readable instructions: This is mere instruction to apply the judicial exceptions with generic computer hardware (MPEP 2106.05(f)).

Step 2B: The following additional element(s) of the claim, taken alone or in combination, do not amount to significantly more than the recited judicial exception(s):

interface circuitry; machine readable instructions; and at least one processor circuit to be programmed by the machine readable instructions: This is mere instruction to apply the judicial exceptions with generic computer hardware (MPEP 2106.05(f)).

Claim 11

Step 1: The claim recites an article of manufacture, as in claim 10.

Step 2A Prong 1: The claim recites the following further judicial exception(s):

identification of (a) performance metrics corresponding to an upper threshold as the first values, and (b) performance metrics corresponding to a lower threshold as the second values: This can be performed as a mental process. One can merely identify networks in the first group with high performance metric values and networks in the second group with low performance metric values.
Step 2A Prong 2: The judicial exception(s) are not integrated into a practical application through the further additional element(s):

one or more of the at least one processor circuit is to cause identification of (a) performance metrics corresponding to an upper threshold as the first values, and (b) performance metrics corresponding to a lower threshold as the second values: This is mere instruction to execute the judicial exceptions with generic computer hardware (MPEP 2106.05(f)).

Step 2B: The further additional element(s) of the claim, taken alone or in combination, do not amount to significantly more than the recited judicial exception(s):

one or more of the at least one processor circuit is to cause identification of (a) performance metrics corresponding to an upper threshold as the first values, and (b) performance metrics corresponding to a lower threshold as the second values: This is mere instruction to execute the judicial exceptions with generic computer hardware (MPEP 2106.05(f)).

Claims 12-16

Step 1: Claims 12-16 recite an article of manufacture, as in claim 10.

Step 2A Prong 1: Claims 12-16 recite the same judicial exception(s) as claims 3-7, respectively.

Step 2A Prong 2: The judicial exception(s) are not integrated into a practical application through any additional elements. The analysis of claims 12-16 at this step mirrors that of claims 3-7, respectively, with the exception that claims 12-16 are directed to “at least one memory; machine readable instructions”, said instructions containing the methods of claims 3-7. This is a mere instruction to apply the exceptions using generic computer equipment (MPEP 2106.05(f)).

Step 2B: The additional element(s) of the claim, taken alone or in combination, do not amount to significantly more than the recited judicial exception(s).
The analysis of claims 12-16 at this step mirrors that of claims 3-7, with the exception that claims 12-16 are directed to “at least one memory; machine readable instructions”, said instructions containing the methods of claims 3-7. This is mere instruction to apply the exceptions using generic computer equipment (MPEP 2106.05(f)).

Claim 17

Step 1: The claim recites an article of manufacture, as in claim 10.

Step 2A Prong 1: The claim recites no further judicial exception(s).

Step 2A Prong 2: The judicial exception(s) are not integrated into a practical application through the further additional element(s):

one or more of the at least one processor circuit is to query a network knowledge database for prior modification information corresponding to the candidate networks: This is mere data gathering and amounts to insignificant extra-solution activity (MPEP 2106.05(g)).

Step 2B: The further additional element(s) of the claim, taken alone or in combination, do not amount to significantly more than the recited judicial exception(s):

one or more of the at least one processor circuit is to query a network knowledge database for prior modification information corresponding to the candidate networks: This is an instance of retrieving information from memory, a limitation known to be well-understood, routine, and conventional (MPEP 2106.05(d) II. iv.).

Claim 18

Step 1: The claim recites an article of manufacture, as in claim 17.

Step 2A Prong 1: The claim recites the following further judicial exception(s):

one or more of the at least one processor circuit is to initiate a starting search point by applying changes to the candidate networks: This can be performed as a mental process. One can merely imagine a modified version of one of the candidate networks as a starting point for further mutations in a network search.
Step 2A Prong 2: The judicial exception(s) are not integrated into a practical application through the further additional element(s):

one or more of the at least one processor circuit is to initiate a starting search point by applying changes to the candidate networks: This is mere instruction to apply a judicial exception with generic computer hardware (MPEP 2106.05(f)).

Step 2B: The further additional element(s) of the claim, taken alone or in combination, do not amount to significantly more than the recited judicial exception(s):

one or more of the at least one processor circuit is to initiate a starting search point by applying changes to the candidate networks: This is mere instruction to apply a judicial exception with generic computer hardware (MPEP 2106.05(f)).

Claim 19

Step 1: The claim recites “A non-transitory machine readable storage medium”, and is therefore directed to the statutory category of article of manufacture.

Step 2A Prong 1: The claim recites the following judicial exception(s):

determine candidate neural networks for a neural architecture search corresponding to a combination of target platform characteristics, workload characteristics to be executed by a target platform, and historical benchmark metrics characteristics associated with the candidate neural networks: This can be performed as a mental process. One can merely observe the performance metrics for a set of neural networks executed on a target platform, selecting a subset within a particular score range.

determine a dataset type and a task type associated with the target platform: This can be performed as a mental process. One can merely think of what task and types of data might be most suitable for a given target platform.

identify one or more of the candidate neural networks associated with a prior neural architecture search involving a combination of the dataset type and the task type: This can be performed as a mental process.
One can merely observe a candidate neural network that was devised from a previously conducted neural architecture search.

identify features associated with the one or more of the candidate neural networks, the features including one or more performance indicators: This can be performed as a mental process. One can merely gauge the performance of each candidate neural network based on observed readings.

extract patterns from the one or more of the candidate neural networks based on the features using a machine learning model: This can be performed as a mental process. One can merely observe patterns within the features.

select, based on the extracted patterns, a candidate neural network from the one or more of the candidate neural networks, the selected candidate neural network to be executed with the target platform: This can be performed as a mental process. One can merely choose a candidate neural network they wish to be executed on the target platform, based on the features / patterns.

Step 2A Prong 2: The judicial exception(s) are not integrated into a practical application through the following additional element(s):

A non-transitory machine readable storage medium: This is mere instruction to apply the judicial exceptions with generic computer hardware (MPEP 2106.05(f)).

Step 2B: The following additional element(s) of the claim, taken alone or in combination, do not amount to significantly more than the recited judicial exception(s):

A non-transitory machine readable storage medium: This is mere instruction to apply the judicial exceptions with generic computer hardware (MPEP 2106.05(f)).

Claim 20

Step 1: The claim recites an article of manufacture, as in claim 19.

Step 2A Prong 1: The claim recites the following further judicial exception(s):

identification of (a) performance metrics corresponding to an upper threshold as first tier values, and (b) performance metrics corresponding to a lower threshold as second tier values: This can be performed as a mental process.
One can merely identify networks in the first group with high performance metric values and networks in the second group with low performance metric values.

Step 2A Prong 2: The judicial exception(s) are not integrated into a practical application through the further additional element(s):

the instructions, when executed, cause the processor circuitry to identify (a) performance metrics corresponding to an upper threshold as first tier values, and (b) performance metrics corresponding to a lower threshold as the second tier values: This is mere instruction to execute the judicial exceptions with generic computer hardware (MPEP 2106.05(f)).

Step 2B: The further additional element(s) of the claim, taken alone or in combination, do not amount to significantly more than the recited judicial exception(s):

the instructions, when executed, cause the processor circuitry to identify (a) performance metrics corresponding to an upper threshold as first tier values, and (b) performance metrics corresponding to a lower threshold as the second tier values: This is mere instruction to execute the judicial exceptions with generic computer hardware (MPEP 2106.05(f)).

Claims 21-23

Step 1: Claims 21-23 recite an article of manufacture, as in claim 19.

Step 2A Prong 1: Claims 21-23 recite the same judicial exception(s) as claims 3-5, respectively.

Step 2A Prong 2: The judicial exception(s) are not integrated into a practical application through any additional elements. The analysis of claims 21-23 at this step mirrors that of claims 3-5, respectively, with the exception that claims 21-23 are directed to “non-transitory machine readable storage medium comprising instructions that, when executed, cause processor circuitry to at least”, said instructions containing the methods of claims 3-5. This is a mere instruction to apply the exceptions using generic computer equipment (MPEP 2106.05(f)).
Step 2B: The additional element(s) of the claim, taken alone or in combination, do not amount to significantly more than the recited judicial exception(s). The analysis of claims 21-23 at this step mirrors that of claims 3-5, with the exception that claims 21-23 are directed to “non-transitory machine readable storage medium comprising instructions that, when executed, cause processor circuitry to at least”, said instructions containing the methods of claims 3-5. This is mere instruction to apply the exceptions using generic computer equipment (MPEP 2106.05(f)).

Claim 28

Step 1: The claim recites an article of manufacture, as in claim 10.

Step 2A Prong 1: The claim recites the following further judicial exception(s):

wherein one or more of the at least one processor circuit is to label candidate neural networks associated with highest and lowest pareto metrics to facilitate convergence of the machine learning model in subsequent network evaluations: This can be performed as a mental process. One can mentally label observed networks associated with highest and lowest pareto metrics.

Step 2A Prong 2: The judicial exception(s) are not integrated into a practical application through the further additional element(s):

wherein one or more of the at least one processor circuit is to label candidate neural networks associated with highest and lowest pareto metrics to facilitate convergence of the machine learning model in subsequent network evaluations: This is mere instruction to execute a judicial exception with generic computer hardware (processor circuit(s)), to facilitate convergence in a highly generic manner (MPEP 2106.05(f)).
Step 2B: The further additional element(s) of the claim, taken alone or in combination, do not amount to significantly more than the recited judicial exception(s):

wherein one or more of the at least one processor circuit is to label candidate neural networks associated with highest and lowest pareto metrics to facilitate convergence of the machine learning model in subsequent network evaluations: This is mere instruction to execute a judicial exception with generic computer hardware (processor circuit(s)), to facilitate convergence in a highly generic manner (MPEP 2106.05(f)).

Claim 29

Step 1: The claim recites an article of manufacture, as in claim 28.

Step 2A Prong 1: The claim recites the following further judicial exception(s):

wherein the highest and lowest pareto metrics identify relative values of architecture characteristics, the architecture characteristics corresponding to at least one of an accuracy or a latency: Labeling the candidate networks associated with the highest and lowest pareto metrics can still be performed as a mental process.

Step 2A Prong 2: The judicial exception(s) are not integrated into a practical application through the additional element(s).

Step 2B: The additional element(s) of the claim, taken alone or in combination, do not amount to significantly more than the recited judicial exception(s).

Claim 30

Step 1: The claim recites an article of manufacture, as in claim 10.

Step 2A Prong 1: The claim recites the following further judicial exception(s):

wherein the task type is at least one of a facial recognition task or a vehicle identification task: Determining a task type associated with a target platform and identifying candidate networks associated with a prior search involving the task type can still be performed as mental processes.
Step 2A Prong 2: The judicial exception(s) are not integrated into a practical application through the additional element(s).

Step 2B: The additional element(s) of the claim, taken alone or in combination, do not amount to significantly more than the recited judicial exception(s).

Claim 31

Step 1: The claim recites an article of manufacture, as in claim 10.

Step 2A Prong 1: The claim recites the following further judicial exception(s):

wherein the dataset type is at least one of a CIFAR-10 dataset or an ImageNet dataset: Determining a dataset type associated with a target platform and identifying candidate networks associated with a prior search involving the dataset type can still be performed as mental processes.

Step 2A Prong 2: The judicial exception(s) are not integrated into a practical application through the additional element(s).

Step 2B: The additional element(s) of the claim, taken alone or in combination, do not amount to significantly more than the recited judicial exception(s).

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-7, 10-16, 19-23, and 30 are rejected under 35 U.S.C. 102 as being anticipated by Husain (ADJUSTING AUTOMATED NEURAL NETWORK GENERATION BASED ON EVALUATION OF CANDIDATE NEURAL NETWORKS, published 4/25/2019, US 2019/0122119 A1).

Regarding claim 1, Husain teaches [a]n apparatus, comprising:

means for determining candidate neural networks for a neural architecture search corresponding to a combination of target platform characteristics, workload characteristics to be executed by a target platform:

“FIG. 2 illustrates a particular example of a system 200 (target platform) that is operable to generate a trained classifier” (Husain, [0061])

“the trained classifier 260 may provide a user (or a system) with a level of confidence in reliability (workload characteristic) of a new or unknown neural network, such as a candidate neural network generated by the genetic algorithm 110. Further, the trained classifier 260 may enable the user (or system) to discard, cease using, or further train a new or unknown neural network based on a classification result indicating that the neural network is not expected to be reliable or is not sufficiently trained” (Husain, [0071]).

A workload characteristic to be executed by the target platform is a target platform characteristic.

and historical benchmark metrics characteristics associated with the candidate neural networks:

“each epoch of the genetic algorithm may produce a particular number of candidate neural networks based on crossover and mutation operations that are performed on the candidate neural networks of a preceding epoch” (Husain, [0005])

“To illustrate, the genetic algorithm may store normalized vector representations of such neural networks.
If a first neural network (candidate network) is "similar" to a second neural network that has previously been determined to be unreliable or low-performing (historical benchmark metrics), then the first neural network may also be classified as unreliable or low-performing without executing the classifier on the first neural network … Such similarity metrics may be used as an input filter to the classifier. In this example, if a candidate neural network is not "different enough" from a known "bad" (e.g., unreliable and/or low-performing) neural network, then the classifier is not executed. Instead, the candidate neural network is classified (e.g., labeled) as "bad" based on the similarity metric (e.g., without evaluating the candidate neural network using the classifier)” (Husain, [0006]).

means for determining a dataset type and a task type associated with the target platform:

“Accordingly, the trained classifier 260 can be configured to generate classification results classifying an unknown (e.g., unlabeled) neural network in any of these categories 201-203.” (Husain, [0069]). The trained classifier labels previously unlabeled neural networks.

“For example, the trained classifier 260 may be configured to generate a classification result indicating whether an unknown neural network is expected to be reliable, whether the unknown neural network is sufficiently trained, a type of data (dataset type) with which the unknown neural network is associated, a type of analysis (task type) performed by the unknown neural network (e.g., classification, regression, reinforcement learning, etc.), expected performance of the unknown neural network, or a combination thereof.” (Husain, [0070]). Note the classification result may be a combination of dataset type and task type.
means for identifying one or more of the candidate neural networks associated with a prior neural architecture search involving a combination of the dataset type and the task type:

“Supervised training data 241 including vectors 242 and the category labels 212 (a combination of the dataset type and the task type) is provided to classifier generation and training instructions 250 … The classifier generation and training instructions 250 generate a trained classifier 260 (e.g., the trained classifier 101) based on the supervised training data 241” (Husain, [0068])

“A classifier trained using such supervised training data may be configured to distinguish (identify) neural networks that are expected to provide reliable (and/or high-performing) results from neural networks that are not expected to provide reliable (and/or high-performing) results” (Husain, [0004])

means for identifying features to: identify features associated with the one or more of the candidate neural networks, the features including one or more performance indicators; and extract patterns from the one or more of the candidate neural networks based on the features using a machine learning model:

“In one example, a classifier is trained using supervised training data descriptive of neural networks that have known reliability or other characteristics. For example, the supervised training data may include feature vectors or other data representing a first set of neural networks that are known (e.g., labeled) to have historically provided reliable (and/or high-performing) results (performance metrics), and the supervised training data may include feature vectors or other data representing a second set of neural networks that are known (e.g., labeled) to have historically provided unreliable (and/or low-performing) results (performance metrics).
A classifier (machine learning model) trained using such supervised training data may be configured to distinguish (extract patterns) neural networks that are expected to provide reliable (and/or high-performing) results from neural networks that are not expected to provide reliable (and/or high-performing) results” (Husain, [0004])

“In a particular implementation, the processors 206 are configured to execute vector generation (feature identific[ation]) instructions 220 to generate vector representations of the data structures 210 (neural networks)” (Husain, [0063])

means for selecting, based on the extracted patterns, a candidate neural network from the one or more of the candidate neural networks, the selected candidate neural network to be executed with the target platform:

“A classifier trained using such supervised training data may be configured to distinguish (extract patterns) neural networks that are expected to provide reliable (and/or high-performing) results from neural networks that are not expected to provide reliable (and/or high-performing) results” (Husain, [0004])

“FIG. 2 illustrates a particular example of a system 200 (target platform) that is operable to generate a trained classifier (e.g., the trained classifier 101 of FIG. 1) based on supervised training data associated with a set of neural networks” (Husain, [0061]).

Regarding claim 2, the rejection of claim 1 in view of Husain is incorporated. Husain further teaches an apparatus, wherein the means for identifying features is to identify (a) first tier values as performance metrics corresponding to an upper threshold and (b) second tier values as performance metrics corresponding to a lower threshold:

“In one example, a classifier is trained using supervised training data descriptive of neural networks that have known reliability or other characteristics.
For example, the supervised training data may include feature vectors or other data representing a first set of neural networks that are known (e.g., labeled) to have historically provided reliable (and/or high-performing) (first tier) results, and the supervised training data may include feature vectors or other data representing a second set of neural networks that are known (e.g., labeled) to have historically provided unreliable (and/or low-performing) (second tier) results” (Husain, [0004]).

Regarding claim 3, the rejection of claim 1 in view of Husain is incorporated. Husain further teaches an apparatus, further including means for performing benchmarking tests to initiate benchmarking tests corresponding to operation information extracted from the candidate neural networks:

“The present disclosure provides systems and methods to predict the reliability and performance of a neural network” (Husain, [0002])

“Normalized vector representations of the neural networks in the input set 120 may be generated. The normalized vectors may be input to the trained classifier 101, which may output data indicating the expected reliability or performance of each of the neural networks” (Husain, [0035]);

“In various examples, performance may be measured in terms a number of layers of the neural network (operation information), processing time of the neural network, capability of the neural network to be parallelized, and so forth” (Husain, [0003]). As noted by paragraph [0091] of the instant specification, input size is one type of operation information.

“The genetic algorithm may be adapted in response to the classifier determining that a particular candidate neural network is predicted to be unreliable or have low (i.e., poor) performance” (Husain, [0005]).

Regarding claim 4, the rejection of claim 3 in view of Husain is incorporated.
Husain further discloses an apparatus, wherein the means for performing benchmarking tests is to initiate the benchmarking tests corresponding to the target platform to determine third performance metrics:

“The trained classifier 101 may receive normalized vectors corresponding to one, some, or all models of a given epoch and may provide data indicating each neural network's expected reliability and/or performance (third performance metrics)” (Husain, [0018])

[Image: media_image1.png (greyscale)]

“FIG. 2 illustrates a particular example of a system 200 (target platform) that is operable to generate a trained classifier (e.g., the trained classifier 101 of FIG. 1) based on supervised training data associated with a set of neural networks” (Husain, [0061])

Regarding claim 5, the rejection of claim 4 in view of Husain is incorporated. Husain further teaches an apparatus, wherein the third performance metrics include at least one of latency, accuracy, power consumption or memory bandwidth:

“performance (third performance metrics) may be measured in terms a number of layers of the neural network, processing time (latency) of the neural network, capability of the neural network to be parallelized, and so forth. Performance may also encompass the concept of "correctness" (accuracy) of the results. As used herein, correctness refers to formal correctness of behavior of the neural network” (Husain, [0003]).

Regarding claim 6, the rejection of claim 3 in view of Husain is incorporated. Husain further teaches an apparatus, wherein the operation information corresponds to at least one of an operation type, a kernel size or an input size:

“in this context, “reliability” refers generally to the ability of a neural network to generate accurate results.
For example, reliability may be measured in terms of robustness of the neural network to a range of input values, ability of the neural network to generate a result that has a relatively small difference (e.g., less than a threshold) from an expected or known value, ability of the neural network to generate a confidence score or value that aligns with (e.g., are within a threshold of) an expected confidence value (operation type), and so forth” (Husain, [0003])

“Normalized vector representations of the neural networks in the input set 120 may be generated. The normalized vectors may be input to the trained classifier 101, which may output data indicating the expected reliability or performance of each of the neural networks” (Husain, [0035]);

“In various examples, performance may be measured in terms a number of layers of the neural network (input size), processing time of the neural network, capability of the neural network to be parallelized, and so forth” (Husain, [0003])

Regarding claim 7, the rejection of claim 1 in view of Husain is incorporated. Husain further teaches an apparatus, wherein the features include at least one of network adjacency features, layer connection information, or network graph information:

“Each data structure 210 includes information describing the topology of a neural network as well as other characteristics of the neural network, such as link weight, bias values, activation functions, and so forth” (Husain, [0062])

“In a particular implementation, the processors 206 are configured to execute vector generation instructions 220 to generate vector representations (features) of the data structures 210 … For example, in FIG. 2, a particular example of a first vector representation 221 corresponding to the first data structure 211 is shown.
The first vector representation 221 includes a plurality of fields which may include values representing particular features of the first data structure 211” (Husain, [0063]) “the first vector representation 221 includes other fields, such as … a third field 228 representing the first link (Link_1) (layer connection information / network adjacency features), a fourth field 229 representing an Nth link (Link_N), and so forth. Additionally, the first vector representation 221 may include a header field 225 providing information descriptive of a vector encoding scheme used to generate the first vector representation 221 based on the first data structure 211. For example, the header field 225 may include information indicating a number of nodes present in the first data structure 211 (network graph information) or a number of nodes represented in the first vector representation 221” (Husain, [0064]). Regarding claim 10, Husain teaches [a]n apparatus to identify candidate networks, comprising: interface circuitry: “in particular implementations, the user can configure aspects of the genetic algorithm 110, such as via input to graphical user interfaces (GUis).” (Husain, [0029]) machine readable instructions: “In conjunction with the described aspects, a computer-readable storage device stores instructions that, when executed, cause a computer to perform operations” (Husain, [0096]) at least one processor circuit to be programmed by the machine readable instructions to: “It is to be understood that operations described herein as being performed by the genetic algorithm 110 or the trained classifier 101 may be performed by a device executing the genetic algorithm 110 or the trained classifier 101. 
In some embodiments, the genetic algorithm 110 is executed on a different device, processor (e.g., central processor unit (CPU), graphics processing unit (GPU) or other type of processor), processor core, and/or thread ( e.g., hardware or software thread) than the trained classifier 101” (Husain, [0013]) “Computer program instructions may be loaded onto a computer or other programmable data processing apparatus to produce a machine, such that the instructions that execute on the computer or other programmable data processing apparatus create means for implementing the functions specified in the flowchart block or blocks” (Husain, [0100])

determine candidate neural networks for a neural architecture search corresponding to a combination of target platform characteristics, workload characteristics to be executed by a target platform, “The genetic algorithm 110 may automatically generate a neural network model of a particular data set, such as an illustrative input data set 102, based on a recursive neuroevolutionary search process.” (Husain, [0014]) “FIG. 2 illustrates a particular example of a system 200 (target platform) that is operable to generate a trained classifier” (Husain, [0061]) “the trained classifier 260 may provide a user (or a system) with a level of confidence in reliability (workload characteristic) of a new or unknown neural network, such as a candidate neural network generated by the genetic algorithm 110. Further, the trained classifier 260 may enable the user (or system) to discard, cease using, or further train a new or unknown neural network based on a classification result indicating that the neural network is not expected to be reliable or is not sufficiently trained” (Husain, [0071]). A workload characteristic to be executed by the target platform is a target platform characteristic.

and historical benchmark metrics characteristics associated with the candidate neural networks: “each epoch of the genetic algorithm may produce a particular number of candidate neural networks based on crossover and mutation operations that are performed on the candidate neural networks of a preceding epoch” (Husain, [0005]) “To illustrate, the genetic algorithm may store normalized vector representations of such neural networks. If a first neural network (candidate network) is "similar" to a second neural network that has previously been determined to be unreliable or low-performing (historical benchmark metrics), then the first neural network may also be classified as unreliable or low-performing without executing the classifier on the first neural network … Such similarity metrics may be used as an input filter to the classifier. In this example, if a candidate neural network is not "different enough" from a known "bad" (e.g., unreliable and/or low-performing) neural network, then the classifier is not executed. Instead, the candidate neural network is classified (e.g., labeled) as "bad" based on the similarity metric (e.g., without evaluating the candidate neural network using the classifier)” (Husain, [0006]).

determine a dataset type and a task type associated with the target platform: “Accordingly, the trained classifier 260 can be configured to generate classification results classifying an unknown (e.g., unlabeled) neural network in any of these categories 201-203.” (Husain, [0069]). The trained classifier labels previously unlabeled neural networks. “For example, the trained classifier 260 may be configured to generate a classification result indicating whether an unknown neural network is expected to be reliable, whether the unknown neural network is sufficiently trained, a type of data (dataset type) with which the unknown neural network is associated, a type of analysis (task type) performed by the unknown neural network (e.g., classification, regression, reinforcement learning, etc.), expected performance of the unknown neural network, or a combination thereof.” (Husain, [0070]). Note the classification result may be a combination of dataset type and task type.

identify one or more of the candidate neural networks associated with a prior neural architecture search involving a combination of the dataset type and the task type: “Supervised training data 241 including vectors 242 and the category labels 212 (a combination of the dataset type and the task type) is provided to classifier generation and training instructions 250 … The classifier generation and training instructions 250 generate a trained classifier 260 (e.g., the trained classifier 101) based on the supervised training data 241” (Husain, [0068]) “A classifier trained using such supervised training data may be configured to distinguish (identify) neural networks that are expected to provide reliable (and/or high-performing) results from neural networks that are not expected to provide reliable (and or high-performing) results” (Husain, [0004])

identify features associated with the one or more of the candidate neural networks, the features including one or more performance indicators; extract patterns from the one or more of the candidate neural networks based on the features using a machine learning model: “In one example, a classifier is trained using supervised training data descriptive of neural networks that have known reliability or other characteristics. For example, the supervised training data may include feature vectors or other data representing a first set of neural networks that are known (e.g., labeled) to have historically provided reliable (and/or high-performing) results (performance metrics), and the supervised training data may include feature vectors or other data representing a second set of neural networks that are known (e.g., labeled) to have historically provided unreliable (and/or low-performing) results (performance metrics). A classifier (machine learning model) trained using such supervised training data may be configured to distinguish (extract patterns) neural networks that are expected to provide reliable (and/or high-performing) results from neural networks that are not expected to provide reliable (and or high-performing) results” (Husain, [0004]) “In a particular implementation, the processors 206 are configured to execute vector generation (feature identific[ation]) instructions 220 to generate vector representations of the data structures 210 (neural networks)” (Husain, [0063])

select, based on the extracted patterns, a candidate neural network from the one or more of the candidate neural networks, the selected candidate neural network to be executed with the target platform: “A classifier trained using such supervised training data may be configured to distinguish (extract patterns) neural networks that are expected to provide reliable (and/or high-performing) results from neural networks that are not expected to provide reliable (and or high-performing) results” (Husain, [0004]) “FIG. 2 illustrates a particular example of a system 200 (target platform) that is operable to generate a trained classifier (e.g., the trained classifier 101 of FIG. 1) based on supervised training data associated with a set of neural networks” (Husain, [0061]).

Regarding claim 11, the rejection of claim 10 in view of Husain is incorporated.
Husain further teaches an apparatus, wherein one or more of the at least one processor circuit is to cause identification of (a) performance metrics corresponding to an upper threshold, and (b) performance metrics corresponding to a lower threshold: “In one example, a classifier is trained using supervised training data descriptive of neural networks that have known reliability or other characteristics. For example, the supervised training data may include feature vectors or other data representing a first set of neural networks that are known (e.g., labeled) to have historically provided reliable (and/or high-performing) results, and the supervised training data may include feature vectors or other data representing a second set of neural networks that are known (e.g., labeled) to have historically provided unreliable (and/or low-performing) results” (Husain, [0004]).

Regarding claim 12, the rejection of claim 10 in view of Husain is incorporated. Husain further teaches an apparatus, wherein one or more of the at least one processing circuit is to initiate benchmarking tests corresponding to operation information extracted from the candidate networks: “The present disclosure provides systems and methods to predict the reliability and performance of a neural network” (Husain, [0002]) “Normalized vector representations of the neural networks in the input set 120 may be generated. The normalized vectors may be input to the trained classifier 101, which may output data indicating the expected reliability or performance of each of the neural networks” (Husain, [0035]); “In various examples, performance may be measured in terms a number of layers of the neural network (operation information), processing time of the neural network, capability of the neural network to be parallelized, and so forth” (Husain, [0003]). As noted by paragraph [0091] of the instant specification, input size is one type of operation information. “The genetic algorithm may be adapted in response to the classifier determining that a particular candidate neural network is predicted to be unreliable or have low (i.e., poor) performance” (Husain, [0005]).

Regarding claim 13, the rejection of claim 12 in view of Husain is incorporated. Husain further discloses an apparatus, wherein one or more of the at least one processor circuit is to initiate the benchmarking tests corresponding to the target platform to determine third performance metrics: “The trained classifier 101 may receive normalized vectors corresponding to one, some, or all models of a given epoch and may provide data indicating each neural network's expected reliability and/or performance (third performance metrics)” (Husain, [0018])

[Figure: media_image1.png, 1304×850, greyscale]

“FIG. 2 illustrates a particular example of a system 200 (target platform) that is operable to generate a trained classifier (e.g., the trained classifier 101 of FIG. 1) based on supervised training data associated with a set of neural networks” (Husain, [0061])

Regarding claim 14, the rejection of claim 3 in view of Husain is incorporated. Husain further teaches an apparatus, wherein one or more of the at least one processor circuit is to identify the third performance metrics as at least one of latency, accuracy, power consumption or memory bandwidth: “performance (third performance metrics) may be measured in terms a number of layers of the neural network, processing time (latency) of the neural network, capability of the neural network to be parallelized, and so forth. Performance may also encompass the concept of "correctness" (accuracy) of the results. As used herein, correctness refers to formal correctness of behavior of the neural network” (Husain, [0003]).

Regarding claim 15, the rejection of claim 12 in view of Husain is incorporated.
Husain further teaches an apparatus, wherein one or more of the at least one processor circuit is to identify operation information as at least one of an operation type, a kernel size or an input size: “in this context, “reliability” refers generally to the ability of a neural network to generate accurate results. For example, reliability may be measured in terms of robustness of the neural network to a range of input values, ability of the neural network to generate a result that has a relatively small difference (e.g., less than a threshold) from an expected or known value, ability of the neural network to generate a confidence score or value that aligns with (e.g., are within a threshold of) an expected confidence value (operation type), and so forth” (Husain, [0003]) “Normalized vector representations of the neural networks in the input set 120 may be generated. The normalized vectors may be input to the trained classifier 101, which may output data indicating the expected reliability or performance of each of the neural networks” (Husain, [0035]); “In various examples, performance may be measured in terms a number of layers of the neural network (input size), processing time of the neural network, capability of the neural network to be parallelized, and so forth” (Husain, [0003])

Regarding claim 16, the rejection of claim 10 in view of Husain is incorporated. Husain further teaches an apparatus, wherein the features include at least one of network adjacency features, layer connection information, or network graph information: “Each data structure 210 includes information describing the topology of a neural network as well as other characteristics of the neural network, such as link weight, bias values, activation functions, and so forth” (Husain, [0062]) “In a particular implementation, the processors 206 are configured to execute vector generation instructions 220 to generate vector representations (features) of the data structures 210 … For example, in FIG. 2, a particular example of a first vector representation 221 corresponding to the first data structure 211 is shown. The first vector representation 221 includes a plurality of fields which may include values representing particular features of the first data structure 211” (Husain, [0063]) “the first vector representation 221 includes other fields, such as … a third field 228 representing the first link (Link_1) (layer connection information / network adjacency features), a fourth field 229 representing an Nth link (Link_N), and so forth. Additionally, the first vector representation 221 may include a header field 225 providing information descriptive of a vector encoding scheme used to generate the first vector representation 221 based on the first data structure 211. For example, the header field 225 may include information indicating a number of nodes present in the first data structure 211 (network graph information) or a number of nodes represented in the first vector representation 221” (Husain, [0064]).
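For illustration only (this sketch is not part of Husain or the claimed invention), the field-based vector encoding described in the quoted passages — a header field carrying the node count (network graph information), node fields, and link fields (layer connection / adjacency information) — can be modeled in a few lines. All names (`encode_network`, `activation_id`) are hypothetical:

```python
# Illustrative sketch of a field-based vector encoding of a network data
# structure, loosely following Husain [0063]-[0064]. Not actual source code
# from the reference; names and field layout are hypothetical.

def encode_network(nodes, links):
    """Flatten a network into one vector: a header field with the node
    count, one field per node, and one field per weighted link."""
    header = [float(len(nodes))]                      # header: number of nodes
    node_fields = [float(n["activation_id"]) for n in nodes]
    link_fields = [float(w) for (_src, _dst, w) in links]
    return header + node_fields + link_fields

# Toy network: two nodes joined by one weighted link.
nodes = [{"activation_id": 1}, {"activation_id": 2}]
links = [(0, 1, 0.5)]
vec = encode_network(nodes, links)  # → [2.0, 1.0, 2.0, 0.5]
```

A classifier operating on such vectors, as the rejection maps, would consume `vec` directly as its feature input.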
Regarding claim 19, Husain teaches an apparatus, comprising:

A non-transitory machine readable storage medium comprising instructions that, when executed, cause processor circuitry to at least: “In conjunction with the described aspects, a computer-readable storage device stores instructions that, when executed, cause a computer to perform operations” (Husain, [0096]) “As used herein, a "computer-readable storage medium" or "computer-readable storage device" is not a signal (non-transitory)” (Husain, [0098]) “Computer program instructions may be loaded onto a computer or other programmable data processing apparatus to produce a machine, such that the instructions that execute on the computer or other programmable data processing apparatus create means for implementing the functions specified in the flowchart block or blocks” (Husain, [0100])

determine candidate neural networks for a neural architecture search corresponding to a combination of target platform characteristics, workload characteristics to be executed by a target platform, “The genetic algorithm 110 may automatically generate a neural network model of a particular data set, such as an illustrative input data set 102, based on a recursive neuroevolutionary search process.” (Husain, [0014]) “FIG. 2 illustrates a particular example of a system 200 (target platform) that is operable to generate a trained classifier” (Husain, [0061]) “the trained classifier 260 may provide a user (or a system) with a level of confidence in reliability (workload characteristic) of a new or unknown neural network, such as a candidate neural network generated by the genetic algorithm 110. Further, the trained classifier 260 may enable the user (or system) to discard, cease using, or further train a new or unknown neural network based on a classification result indicating that the neural network is not expected to be reliable or is not sufficiently trained” (Husain, [0071]). A workload characteristic to be executed by the target platform is a target platform characteristic.

and historical benchmark metrics characteristics associated with the candidate neural networks: “each epoch of the genetic algorithm may produce a particular number of candidate neural networks based on crossover and mutation operations that are performed on the candidate neural networks of a preceding epoch” (Husain, [0005]) “To illustrate, the genetic algorithm may store normalized vector representations of such neural networks. If a first neural network (candidate network) is "similar" to a second neural network that has previously been determined to be unreliable or low-performing (historical benchmark metrics), then the first neural network may also be classified as unreliable or low-performing without executing the classifier on the first neural network … Such similarity metrics may be used as an input filter to the classifier. In this example, if a candidate neural network is not "different enough" from a known "bad" (e.g., unreliable and/or low-performing) neural network, then the classifier is not executed. Instead, the candidate neural network is classified (e.g., labeled) as "bad" based on the similarity metric (e.g., without evaluating the candidate neural network using the classifier)” (Husain, [0006]).

determine a dataset type and a task type associated with the target platform: “Accordingly, the trained classifier 260 can be configured to generate classification results classifying an unknown (e.g., unlabeled) neural network in any of these categories 201-203.” (Husain, [0069]). The trained classifier labels previously unlabeled neural networks. “For example, the trained classifier 260 may be configured to generate a classification result indicating whether an unknown neural network is expected to be reliable, whether the unknown neural network is sufficiently trained, a type of data (dataset type) with which the unknown neural network is associated, a type of analysis (task type) performed by the unknown neural network (e.g., classification, regression, reinforcement learning, etc.), expected performance of the unknown neural network, or a combination thereof.” (Husain, [0070]). Note the classification result may be a combination of dataset type and task type.

identify one or more of the candidate neural networks associated with a prior neural architecture search involving a combination of the dataset type and the task type: “Supervised training data 241 including vectors 242 and the category labels 212 (a combination of the dataset type and the task type) is provided to classifier generation and training instructions 250 … The classifier generation and training instructions 250 generate a trained classifier 260 (e.g., the trained classifier 101) based on the supervised training data 241” (Husain, [0068]) “A classifier trained using such supervised training data may be configured to distinguish (identify) neural networks that are expected to provide reliable (and/or high-performing) results from neural networks that are not expected to provide reliable (and or high-performing) results” (Husain, [0004])

identify features associated with the one or more of the candidate neural networks, the features including one or more performance indicators; extract patterns from the one or more of the candidate neural networks based on the features using a machine learning model: “In one example, a classifier is trained using supervised training data descriptive of neural networks that have known reliability or other characteristics. For example, the supervised training data may include feature vectors or other data representing a first set of neural networks that are known (e.g., labeled) to have historically provided reliable (and/or high-performing) results (performance metrics), and the supervised training data may include feature vectors or other data representing a second set of neural networks that are known (e.g., labeled) to have historically provided unreliable (and/or low-performing) results (performance metrics). A classifier (machine learning model) trained using such supervised training data may be configured to distinguish (extract patterns) neural networks that are expected to provide reliable (and/or high-performing) results from neural networks that are not expected to provide reliable (and or high-performing) results” (Husain, [0004]) “In a particular implementation, the processors 206 are configured to execute vector generation (feature identific[ation]) instructions 220 to generate vector representations of the data structures 210 (neural networks)” (Husain, [0063])

select, based on the extracted patterns, a candidate neural network from the one or more of the candidate neural networks, the selected candidate neural network to be executed with the target platform: “A classifier trained using such supervised training data may be configured to distinguish (extract patterns) neural networks that are expected to provide reliable (and/or high-performing) results from neural networks that are not expected to provide reliable (and or high-performing) results” (Husain, [0004]) “FIG. 2 illustrates a particular example of a system 200 (target platform) that is operable to generate a trained classifier (e.g., the trained classifier 101 of FIG. 1) based on supervised training data associated with a set of neural networks” (Husain, [0061]).

Regarding claim 20, the rejection of claim 19 in view of Husain is incorporated.
Husain further teaches an apparatus, wherein the instructions, when executed, cause the processor circuitry to identify (a) performance metrics corresponding to an upper threshold as first tier values, and (b) performance metrics corresponding to a lower threshold as second tier values: “In one example, a classifier is trained using supervised training data descriptive of neural networks that have known reliability or other characteristics. For example, the supervised training data may include feature vectors or other data representing a first set of neural networks that are known (e.g., labeled) to have historically provided reliable (and/or high-performing) results (first tier values), and the supervised training data may include feature vectors or other data representing a second set of neural networks that are known (e.g., labeled) to have historically provided unreliable (and/or low-performing) results (second tier values)” (Husain, [0004]).

Regarding claim 21, the rejection of claim 19 in view of Husain is incorporated. Husain further teaches an apparatus, wherein the instructions, when executed, cause the processor circuitry to initiate benchmarking tests corresponding to operation information extracted from the candidate neural networks: “The present disclosure provides systems and methods to predict the reliability and performance of a neural network” (Husain, [0002]) “Normalized vector representations of the neural networks in the input set 120 may be generated. The normalized vectors may be input to the trained classifier 101, which may output data indicating the expected reliability or performance of each of the neural networks” (Husain, [0035]); “In various examples, performance may be measured in terms a number of layers of the neural network (operation information), processing time of the neural network, capability of the neural network to be parallelized, and so forth” (Husain, [0003]). As noted by paragraph [0091] of the instant specification, input size is one type of operation information. “The genetic algorithm may be adapted in response to the classifier determining that a particular candidate neural network is predicted to be unreliable or have low (i.e., poor) performance” (Husain, [0005]).

Regarding claim 22, the rejection of claim 21 in view of Husain is incorporated. Husain further discloses an apparatus, wherein the instructions, when executed, cause the processor circuitry to initiate the benchmarking tests corresponding to the target platform to determine third performance metrics: “The trained classifier 101 may receive normalized vectors corresponding to one, some, or all models of a given epoch and may provide data indicating each neural network's expected reliability and/or performance (third performance metrics)” (Husain, [0018])

[Figure: media_image1.png, 1304×850, greyscale]

“FIG. 2 illustrates a particular example of a system 200 (target platform) that is operable to generate a trained classifier (e.g., the trained classifier 101 of FIG. 1) based on supervised training data associated with a set of neural networks” (Husain, [0061])

Regarding claim 23, the rejection of claim 22 in view of Husain is incorporated. Husain further teaches an apparatus, wherein the instructions, when executed, cause the processor circuitry to identify the third performance metrics as at least one of latency, accuracy, power consumption or memory bandwidth: “performance (third performance metrics) may be measured in terms a number of layers of the neural network, processing time (latency) of the neural network, capability of the neural network to be parallelized, and so forth. Performance may also encompass the concept of "correctness" (accuracy) of the results. As used herein, correctness refers to formal correctness of behavior of the neural network” (Husain, [0003]).
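For illustration only (not part of the reference or the claims), one of the "third performance metrics" recurring in these rejections — latency, mapped to Husain's processing-time measure — is commonly obtained with a simple timing harness. The helper name `measure_latency` and the toy callable are hypothetical:

```python
# Illustrative benchmarking sketch: timing a candidate model's forward pass
# to obtain an average per-inference latency. `model` is any callable;
# nothing here is taken from Husain's actual implementation.
import time

def measure_latency(model, sample, runs=100):
    """Return mean seconds per call of model(sample) over `runs` calls."""
    start = time.perf_counter()
    for _ in range(runs):
        model(sample)
    return (time.perf_counter() - start) / runs

# Toy "model": doubles every input value.
latency = measure_latency(lambda x: [v * 2 for v in x], [1, 2, 3])
```

Analogous harnesses (hardware counters, power meters) would yield the other metrics listed in the claims, such as power consumption or memory bandwidth.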
Regarding claim 30, the rejection of claim 10 in view of Husain is incorporated. Husain further discloses an apparatus, wherein the task type is at least one of a facial recognition task or a vehicle identification task: “Advances in machine learning have enabled computing devices to solve complex problems in many fields. For example, image analysis ( e.g., face recognition), natural language processing, and many other fields have benefitted from the use of machine learning techniques. For certain types of problems, advanced computing techniques, such as genetic algorithms or backpropagation, may be available to develop a neural network. In one example, a genetic algorithm may apply neuroevolutionary techniques over multiple epochs to evolve candidate neural networks to model a training data set.” (Husain, [0001])

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Husain (ADJUSTING AUTOMATED NEURAL NETWORK GENERATION BASED ON EVALUATION OF CANDIDATE NEURAL NETWORKS, published 4/25/2019, US 2019/0122119 A1) in view of Wilson et al. (SYSTEMS AND METHODS FOR CONSTRUCTING AND APPLYING SYNAPTIC NETWORKS, published 12/1/2016, US 2016/0350834 A1).

Regarding claim 17, the rejection of claim 10 in view of Husain is incorporated. Husain further teaches an apparatus, wherein one or more of the at least one processor circuit is to query a network knowledge database for prior modification information corresponding to the candidate networks: “During a configuration stage of operation, a user may specify the input data set 102 or data sources from which the input data set 102 is determined” (Husain, [0028]) “For the initial epoch of the genetic algorithm 110, the topologies of the models in the input set 120 may be randomly or pseudo-randomly generated within constraints specified by any previously input configuration settings (prior modification information)” (Husain, [0033]). The initial configuration of a network is a form of prior modification information. “The genetic algorithm 110 may automatically assign an activation function, an aggregation function, a bias, connection weights, etc.
to each model of the input set 120 for the initial epoch … The single activation function may be selected based on configuration data (prior modification information)” (Husain, [0034]) While Husain fails to disclose the further limitations of the claim, Wilson teaches an apparatus, wherein one or more of the at least one processor circuit is to query a network knowledge database for prior modification information corresponding to the candidate networks: “At step S802, the system 100 constructs a synaptic network that includes defining item nodes 300, attribute nodes 302, and person nodes 304 from one or more primary data sources, such as webpages, review sites, social media pages, and the like” (Wilson, [0090]). The initial network is constructed from primary data sources. “Search engines may output lists of hyperlinks for web pages that include information of interest. Some search engines base the determination of corresponding hyperlinks on a search query entered by the user. The goal of the search engine is to return links for high quality, relevant sites based on the search query. Most commonly, search engines accomplish this by matching the terms in the search query to a database (knowledge network database) of stored web pages or web page content. Web pages that include the terms in the search query are considered "hits" and are included in the list of hyperlinks presented to the user” (Wilson, [0002]). “For each item in the directory, the system 100 runs a series of search queries in various search engines, each query restricted to results for the content site of interest, such as dine.com. The search results are parsed and the URLs for the relevant cached pages are retrieved. The cached pages are then retrieved and in a repository, after which they are parsed based on the name, city, phone number, and other data fields associated with a venue of interest. 
In this manner the cached review page for the venue of interest may be identified” (Wilson, [0067]) Wilson relates to circuitry for the construction and evaluation of neural networks and is analogous to the claimed invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Husain to obtain prior modification network data from a database, as disclosed by Wilson. Database information can be matched to user queries to provide high-quality, relevant information. See Wilson, [0003]. Regarding claim 18, the rejection of claim 17 in view of Husain and Wilson is incorporated. Husain further discloses an apparatus, wherein one or more of the at least one processor circuit is to initiate a starting search point by applying changes to the candidate networks: “For the initial epoch (starting search point) of the genetic algorithm 110, the topologies of the models in the input set 120 may be randomly or pseudo-randomly generated within constraints specified by any previously input configuration settings (changes)” (Husain, [0033]). The initial configuration of a network is a form of prior modification information. “The genetic algorithm 110 may automatically assign an activation function, an aggregation function, a bias, connection weights, etc. to each model of the input set 120 for the initial epoch … The single activation function may be selected based on configuration data (changes)” (Husain, [0034]) Claims 28-29 and 31 are rejected under 35 U.S.C. 103 as being unpatentable over Husain (ADJUSTING AUTOMATED NEURAL NETWORK GENERATION BASED ON EVALUATION OF CANDIDATE NEURAL NETWORKS, published 4/25/2019, US 2019/0122119 A1) in view of Tan et al. (NEURAL ARCHITECTURE SEARCH WITH FACTORIZED HIERARCHICAL SEARCH SPACE, published 5/7/2020, US 2020/0143227 A1), hereafter referred to as Tan. Regarding claim 28, the rejection of claim 10 in view of Husain is incorporated. 
Husain further discloses an apparatus, wherein one or more of the at least one processor circuit is to label candidate neural networks associated with highest and lowest pareto metrics to facilitate convergence of the machine learning model in subsequent network evaluations: “A classifier trained using such supervised training data may be configured to distinguish neural networks that are expected to provide reliable (and/or high-performing) results (highest metrics) from neural networks that are not expected to provide reliable (and/or high-performing) results (lowest metrics)” (Husain, [0004]) “it may be possible for the "traits" of an unreliable or low-performing neural network to survive for several epochs of the genetic algorithm 110, which may delay convergence of the genetic algorithm 110 on a reliable and high-performing neural network that models the input data set 102. In accordance with the present disclosure, to "help" the genetic algorithm 110 arrive at a solution faster, during each epoch (evaluation) one or more models in the genetic algorithm 110 may be evaluated using the trained classifier 101.” (Husain, [0017]) “The trained classifier 101 may process the normalized vector 103 and output data indicating an expected reliability or performance 105 of the first neural network. The "expected reliability or performance" of a neural network may, in some examples, be represented using integer, floating point, Boolean, enumerated, or other values. Uninitialized and untrained neural networks have low expected reliability and/or are expected to perform poorly.
The trained classifier 101 may provide the data indicating the expected reliability or performance 105 of the first neural network to the genetic algorithm 110.” (Husain, [0018]) While Husain fails to disclose the further limitations of the claim, Tan discloses an apparatus, wherein one or more of the at least one processor circuit is to label candidate neural networks associated with highest and lowest pareto metrics to facilitate convergence of the machine learning model in subsequent network evaluations: “a model is called Pareto optimal if either it has the highest accuracy (pareto metric) without increasing latency (pareto metric) or it has the lowest latency (pareto metric) without decreasing accuracy (pareto metric). Given the computational cost of performing architecture search, example implementations of the present disclosure focus more on finding multiple Pareto-optimal solutions in a single architecture search” (Tan, [0045]) (Tan, Figure 5A) (Tan, Figure 5B) “FIGS. 5A and 5B show the multi-objective search results for typical α and β. FIG. 5A shows the Pareto curve (dashed line) for the 1000 sampled models (dots) (candidate neural networks)” (Tan, [0079]) “The architecture search computing system 150 includes one or more processors 152 and a memory 154.” (Tan, [0104]) Tan relates to neural architecture searches and is analogous to the claimed invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Husain to use Pareto metrics to evaluate candidate networks, as disclosed by Tan. By finding Pareto-optimal models over accuracy and latency, both measures of performance are jointly optimized in a high-ranking model.
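For context, the Pareto-dominance criterion quoted from Tan [0045] can be sketched briefly. The following is an illustrative sketch only, not the method of Tan or of the instant claims; the candidate names, accuracy values, and latency values are hypothetical. A candidate is retained on the Pareto front if no other candidate is at least as good on both objectives (accuracy: higher is better; latency: lower is better) and strictly better on at least one.

```python
def pareto_front(candidates):
    """Return candidates not dominated on (accuracy, latency).

    A candidate 'b' is dominated when some other candidate 'a' is no worse
    on both objectives and strictly better on at least one.
    """
    def dominates(a, b):
        return (a["accuracy"] >= b["accuracy"] and a["latency"] <= b["latency"]
                and (a["accuracy"] > b["accuracy"] or a["latency"] < b["latency"]))
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates if other is not c)]

# Hypothetical sampled models (accuracy fraction, latency in ms)
models = [
    {"name": "A", "accuracy": 0.74, "latency": 80.0},  # highest accuracy
    {"name": "B", "accuracy": 0.70, "latency": 50.0},  # balanced
    {"name": "C", "accuracy": 0.68, "latency": 60.0},  # dominated by B
    {"name": "D", "accuracy": 0.65, "latency": 40.0},  # lowest latency
]
front = pareto_front(models)  # C is excluded; A, B, and D remain
```

Under this criterion, model C is dropped because B is both more accurate and faster, which mirrors Tan's formulation that a Pareto-optimal model either has the highest accuracy without increasing latency or the lowest latency without decreasing accuracy.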
Additionally, Pareto solutions can be easily and effectively approximated with known methods, and multiple Pareto-optimal models can be selected in a single round of searching. See Tan, paragraphs [0045-0046]. Regarding claim 29, the rejection of claim 28 in view of Husain and Tan is incorporated. Tan, in combination with Husain, further discloses an apparatus, wherein the highest and lowest pareto metrics identify relative values of architecture characteristics, the architecture characteristics corresponding to at least one of an accuracy or a latency: (Tan) “a model is called Pareto optimal if either it has the highest accuracy without increasing latency or it has the lowest latency without decreasing accuracy. Given the computational cost of performing architecture search, example implementations of the present disclosure focus more on finding multiple Pareto-optimal solutions in a single architecture search” (Tan, [0045]). Tan relates to neural architecture searches and is analogous to the claimed invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Husain to use Pareto metrics to evaluate candidate networks, as disclosed by Tan. By finding Pareto-optimal models over accuracy and latency, both measures of performance are jointly optimized in a high-ranking model. Additionally, Pareto solutions can be easily and effectively approximated with known methods, and multiple Pareto-optimal models can be selected in a single round of searching. See Tan, paragraphs [0045-0046]. Regarding claim 31, the rejection of claim 10 in view of Husain is incorporated. 
While Husain fails to disclose the further limitations of the claim, Tan discloses an apparatus, wherein the dataset type is at least one of a CIFAR-10 dataset or an ImageNet dataset: “The present disclosure also includes example experimental results which show that example implementations of the present disclosure generate new network architectures that are able to consistently outperform state-of-the-art mobile CNN models across multiple vision tasks. As one example, on the ImageNet classification task, an example model generated using the search techniques described herein achieves 74.0% top-1 accuracy” (Tan, [0038]) “Directly searching for CNN models on large tasks like ImageNet or COCO is prohibitively expensive, as each model takes days to converge. Thus, the example architecture search experiments were conducted on a smaller proxy task, and then the top-performing models discovered during architecture search were transferred to the target full tasks. However, finding a good proxy task for both accuracy and latency is non-trivial: one has to consider task type, dataset type, input image size and type. Initial experiments on CIFAR-10 and the Stanford Dogs Dataset (Khosla et al. 2018) showed that these datasets are not good proxy tasks for ImageNet when model latency is taken into account.” (Tan, [0068]) Tan relates to neural architecture searches and is analogous to the claimed invention. Husain teaches an apparatus that performs a neural architecture search for models executing on some data type(s). Tan teaches an apparatus that performs a neural architecture search for models using the ImageNet or CIFAR-10 datasets. It would have been obvious to one of ordinary skill in the art to combine Husain and Tan by searching for models trained on ImageNet or CIFAR-10 using Husain’s method. 
This would achieve the predictable result of finding optimal model(s) for performing image-related tasks, with Husain’s method and Tan’s image datasets performing the same together as they did separately. (MPEP 2143 I. (A) Combining prior art elements according to known methods to yield predictable results). Response to Arguments The following responses address arguments and remarks made in the instant remarks dated 12/01/2025. Objections The Examiner notes that objections to the claims have been made in light of the instant amendments. 101 Rejections On pages 9-11 of the instant remarks, the Applicant argues that limitations of claim 1 cannot be performed mentally: “Step 2A - Claim 1 is Not Directed to an Abstract Idea Independent claim 1 is not directed to an abstract idea and, thus, satisfies step 2A of the test enunciated in Alice Corp. v. CLS Bank International, 573 U.S. 208 (2014). To identify a judicial exception under the first prong of Step 2A, the 2019 Revised Patent Subject Matter Eligibility Guidance (hereinafter "2019 Revised Guidance") sets forth three groupings of activities in which an abstract idea can be found. The three groupings include: mathematical concepts, certain methods of organizing human activity, and mental processes. These groupings do not apply to any of the claims of the instant application. The Office Action also alleges that claim 1 recites a mental process practically performed in the human mind. For example, the Office Action alleges that claim 1 is directed to a "Mental Process" (see Office Action, p. 3). However, claim 1 does not recite a mental process because the steps are not practically performed in the mind. The apparatus of claim 1 is used for "establishing search efforts using starting conditions that have a relatively higher probability of being relevant" for the NAS such that there is a "particular performance improvement (e.g., improved accuracy, improved speed, improved (e.g., lower) power consumption, etc.)
that satisfies a threshold change from a prior NN architecture configuration" (see Originally Filed Specification at paras. [0021], [0023], emphasis added). As part of determining a possible prior occurrence of particular environmental conditions, querying of a "network knowledge database 118 [that] includes, but is not limited to latency information 202, hardware representation information 204, network weight information 206, extracted patterns and/or information from a network analyzer (e.g., to facilitate similar NAS searches in the future for expedited search results) and probability distribution information 232." (Id at para. [0028]). Claim 1 includes means for determining candidate neural networks for a neural architecture search corresponding to a combination of target platform characteristics, workload characteristics to be executed by a target platform, and historical benchmark metrics characteristics associated with the candidate neural networks. Such assessment of starting conditions for a NAS based on a large amount of data including historical and operational information cannot be practically performed in the mind. Furthermore, the apparatus disclosed herein also allows for the training of a machine learning model using "obtained feedback and an updated training data set, hyperparameters, etc., to generate an updated, deployed model." (Id at para. [0020], emphasis added). For example, the apparatus disclosed herein "extracts any number of patterns using one or more of classical machine learning algorithms, deep neural network trained algorithms (e.g., in a semi-supervised or un-supervised manner), rule-based algorithms, etc. (block 328)" (Id at para. [0061], emphasis added), as recited in connection with claim 1.
The training of a machine learning model includes modifications of data structures associated with memory that cannot be practically performed in the human mind as the respective formats and quantities of the data would be impossible for a human mind to process, even with the help of pencil and paper. For example, like the patent eligible claim 2 of Example 37 (Relocation of Icons on a Graphical User Interface) provided by the USPTO, claim 1 does not recite a mental process that can be practically performed in the human mind. For example, the "determining step" of claim 2 of Example 37 requires action by a processor that cannot be practically applied in the mind. In particular, claim 2 includes "determining the amount of use of each icon using a processor that tracks how much memory has been allocated to each application associated with each icon over a predetermined period of time," and is found to not be practically performed in the human mind because this determining step requires a processor accessing computer memory. Similarly, claim 1 disclosed herein includes the training of a machine learning model that requires the accessing of computer memory, given that training machine learning models is memory-intensive (e.g., since the model's parameters and the data used for training need to be stored in memory). Therefore, claim 1 does not recite a mental process because it does not contain limitations that can practically be performed in the human mind and/or the human mind is not equipped to perform the claim limitations. 
As such, improving NAS as disclosed in connection with claim 1 by, inter alia, means for determining candidate neural networks for a neural architecture search corresponding to a combination of target platform characteristics, workload characteristics to be executed by a target platform, and historical benchmark metrics characteristics associated with the candidate neural networks, means for identifying one or more of the candidate neural networks associated with a prior neural architecture search involving a combination of a dataset type and a task type, means for identifying features to identify features associated with the one or more of the candidate neural networks, the features including one or more performance indicators and extract patterns from the one or more of the candidate neural networks based on the features using a machine learning model is not simply associated with "mental processes", as alleged in the Office Action (p. 3). Therefore, claim 1 does not recite a mental process because it does not contain limitations that can practically be performed in the human mind and/or the human mind is not equipped to perform the claim limitations.” In response to applicant's arguments above, it is noted that the starting conditions, querying of environmental conditions, and training based on feedback upon which the Applicant relies (instant specification paragraphs [0020-0023], [0027-0028]) are not recited in amended claim 1. Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). The Examiner respectfully disagrees that claim 1, as amended, recites no mental processes. 
As stated in MPEP 2106.04(a)(2)(III), The courts do not distinguish between mental processes that are performed entirely in the human mind and mental processes that require a human to use a physical aid (e.g., pen and paper or a slide rule) to perform the claim limitation. See, e.g., Benson, 409 U.S. at 67, 65, 175 USPQ at 674-75, 674 … Nor do the courts distinguish between claims that recite mental processes performed by humans and claims that recite mental processes performed on a computer. As the Federal Circuit has explained, "[c]ourts have examined claims that required the use of a computer and still found that the underlying, patent-ineligible invention could be performed via pen and paper or in a person’s mind." Versata Dev. Group v. SAP Am., Inc., 793 F.3d 1306, 1335, 115 USPQ2d 1681, 1702 (Fed. Cir. 2015). See also Intellectual Ventures I LLC v. Symantec Corp., 838 F.3d 1307, 1318, 120 USPQ2d 1353, 1360 (Fed. Cir. 2016) ("[W]ith the exception of generic computer-implemented steps, there is nothing in the claims themselves that foreclose them from being performed by a human, mentally or with pen and paper."); Mortgage Grader, Inc. v. First Choice Loan Servs. Inc., 811 F.3d 1314, 1324, 117 USPQ2d 1693, 1699 (Fed. Cir. 2016) (holding that computer-implemented method for "anonymous loan shopping" was an abstract idea because it could be "performed by humans without a computer"). Claim 1 recites limitations amounting to mental processes performed on generic data structures. Generic data structures amount to generic computer components and are insufficient to render a mentally performable task non-abstract. For example, claim 1 recites the limitation “means for determining a dataset type and a task type associated with the target platform”, reciting a mental process of “determining a dataset type and a task type associated with the target platform” performed by “means” of an apparatus, a generic machine, to render the limitation non-abstract.
Claim 2 of Example 37 (Relocation of Icons on a Graphical User Interface) differs substantially from the claimed invention of the instant application. Claim 2 of Example 37 was found not to recite a judicial exception, not simply because a generic computer processor was executing its second limitation, but because a computer processor was required to execute elements of the limitation that could not be performed as a mental process: “In particular, the claimed step of determining the amount of use of each icon by tracking how much memory has been allocated to each application associated with each icon over a predetermined period of time is not practically performed in the human mind, at least because it requires a processor accessing computer memory indicative of application usage.” (USPTO Subject Matter Eligibility Examples 37 to 42, published 1/7/2019, Example 37, Claim 2, Step 2A Analysis). The Examiner asserts that claim 1, as amended, recites mental processes, and maintains its rejection on the basis of the Alice/Mayo tests performed (See 101 rejections section for more detail). No rejections under 35 U.S.C. 101 are withdrawn on this basis. On pages 11-12 of the instant remarks, the Applicant argues that the claimed invention is integrated into a practical application through improvements to NAS searches: “Lastly, claim 1 is grounded in a practical application of improving neural network (NN) architecture searches associated with a target platform by establishing starting search criteria and identifying "particular NN architecture parameters that are known to be ineffective and/or otherwise cause poor NN performance ... [allowing] such parameters [to be] labeled to aid in machine learning analysis by a network analyzer" (see Originally Filed Specification, para. [0023]). Claim 1 applies the claimed elements in a meaningful way, which makes the claim, as a whole, more than a mere drafting effort that monopolizes a judicial exception. 
For example, claim 1 improves the neural architecture search based on, inter alia, means for determining candidate neural networks for a neural architecture search corresponding to a combination of target platform characteristics, workload characteristics to be executed by a target platform, and historical benchmark metrics characteristics associated with the candidate neural networks, means for identifying one or more of the candidate neural networks associated with a prior neural architecture search involving a combination of the dataset type and the task type, extracting patterns from the one or more of the candidate neural networks based on the features using a machine learning model and means for selecting, based on the extracted patterns, a candidate neural network from the one or more of the candidate neural networks, the selected candidate neural network to be executed with the target platform. Accordingly, because claim 1 is patent eligible under Step 2A of the Alice test, there is no need to proceed to Step 2B. Withdrawal of the 35 U.S.C. § 101 rejections of independent claim 1 and all claims depending therefrom is respectfully requested.” In response to applicant's arguments above, it is noted that the identification of ineffective or poorly performing parameters upon which the Applicant relies (instant specification paragraph [0023]) is not recited in amended claim 1. Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). The improvement of a claimed invention must be sufficiently detailed, as noted in MPEP 2106.05(a): “If it is asserted that the invention improves upon conventional functioning of a computer, or upon conventional technology or technological processes, a technical explanation as to how to implement the invention should be present in the specification. 
That is, the disclosure must provide sufficient details such that one of ordinary skill in the art would recognize the claimed invention as providing an improvement. The specification need not explicitly set forth the improvement, but it must describe the invention such that the improvement would be apparent to one of ordinary skill in the art … After the examiner has consulted the specification and determined that the disclosed invention improves technology, the claim must be evaluated to ensure the claim itself reflects the disclosed improvement in technology. Intellectual Ventures I LLC v. Symantec Corp., 838 F.3d 1307, 1316, 120 USPQ2d 1353, 1359 (Fed. Cir. 2016) (patent owner argued that the claimed email filtering system improved technology by shrinking the protection gap and mooting the volume problem, but the court disagreed because the claims themselves did not have any limitations that addressed these issues). That is, the claim must include the components or steps of the invention that provide the improvement described in the specification.” While the Applicant has pointed toward known technical problems and alleged solutions described by the instant specification, these solutions are not represented in the claims. Thus, the claimed invention is not found to be integrated into a practical application on the basis of technological improvement, and no rejections are withdrawn on these grounds. 
On pages 12-14 of the instant remarks, the Applicant argues that the claimed invention improves on the field of NAS searches, and thus amounts to significantly more than any recited abstract ideas: “Step 2B - Claim 1 Amounts to Significantly More than an Abstract Idea Even assuming an abstract idea is present in claim 1 and claim 1 is not directed to a practical application, which are points not conceded by the Applicant, the combination of elements of claim 1 "[a]dds a specific limitation or combination of limitations that are not well-understood, routine, conventional activity in the field." (2019 Revised Guidance, p. 56). For example, claim 1 of the instant application sets forth an apparatus including means for determining candidate neural networks for a neural architecture search corresponding to a combination of target platform characteristics, workload characteristics to be executed by a target platform, and historical benchmark metrics characteristics associated with the candidate neural networks, means for determining a dataset type and a task type associated with the target platform, means for identifying one or more of the candidate neural networks associated with a prior neural architecture search involving a combination of the dataset type and the task type, means for identifying features to identify features associated with the one or more of the candidate neural networks, the features including one or more performance indicators and extract patterns from the one or more of the candidate neural networks based on the features using a machine learning model, and means for selecting, based on the extracted patterns, a candidate neural network from the one or more of the candidate neural networks, the selected candidate neural network to be executed with the target platform. 
The particular arrangement of elements in claim 1 of the instant application results in a technical improvement in the field of neural architecture searches and, in particular, with respect to identification of neural networks that can be executed on a target platform. In fact, the instant application discloses multiple examples of how the claimed subject matter provides improvements over known methods for neural architecture searches. For example, "[e]fforts to perform a NAS to identify optimized and/or otherwise candidate NN architectures (or particular combinations of NN architecture characteristics) are typically focused on a specified hardware platform (e.g., a target platform type) for a given task ... [but] such efforts do not consider particular NN architecture parameters and/or characteristics that could be relevant prior to beginning the search effort" (see Originally Filed Specification at para. [0022], emphasis added). For example, "[w]hile typical NAS techniques do not consider the granularity of particular features and/or combinations of features as inputs to a network analyzer, examples disclosed herein extract such granularity from historical information related to both past success and past failures with regard to performance metrics" (Id at para. [0038], emphasis added). As such, methods and apparatus disclosed herein allow for "selecting architectures that have a relatively higher probability of being relevant in the search effort will ultimately reduce the search duration and improve the accuracy thereof" (Id at para. [0056], emphasis added). Using methods and apparatus disclosed herein, "candidate architectures to be considered [go] ... through a degree of vetting or further consideration in view of information (e.g., clues) that traditional NAS efforts do not consider prior to instantiating computationally intense and lengthy search efforts" (Id at para. [0059], emphasis added).
For example, the apparatus disclosed herein "labels all candidate architecture characteristic combinations based on relative performance ... such as their relative performance in view of performing a task in view of accuracy, latency, power consumption, etc. [and] labeling is applied for both the best performing (e.g., architectures that exhibit the relative highest performance metrics) and the worst performing (e.g., architectures that exhibit the relative lowest performance metrics) so that future machine learning modeling can more quickly converge on relevant solutions" (Id, emphasis added). These technological improvements are expressly incorporated into claim 1, which sets forth, inter alia, means for identifying features to identify features associated with the one or more of the candidate neural networks, the features including one or more performance indicators and extract patterns from the one or more of the candidate neural networks based on the features using a machine learning model, and means for selecting, based on the extracted patterns, a candidate neural network from the one or more of the candidate neural networks, the selected candidate neural network to be executed with the target platform. As such, claim 1 reduces the NAS duration and improves the accuracy of search results by extracting patterns from candidate neural networks and identifying networks most suitable for execution on a target platform of interest. Accordingly, claim 1 provides a distinct improvement over known methods for NAS across multiple platforms. Accordingly, independent claim 1 and all claims depending therefrom are directed to statutory subject matter in compliance with 35 U.S.C. 
§ 101 under step 2A and 2B of the Alice test.” In response to applicant's arguments above, it is noted that the identification of particular model parameters associated with performance of a prior search, the selection of architectures with a higher probability of being relevant in the search effort, and the labeling of highest and lowest performing models, upon which the Applicant relies (instant specification paragraphs [0022], [0038], [0056], [0059]), are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). While the Applicant has pointed toward known technical problems and alleged solutions described by the instant specification, these solutions are not represented in the claims. Thus, the claimed invention is not found to be integrated into a practical application on the basis of technological improvement, and no rejections are withdrawn on these grounds. On page 14 of the instant remarks, the Applicant argues that other independent claims and all dependents should be allowed under 35 U.S.C. 101: “Likewise, independent claims 10 and 19, and all claims depending respectively therefrom, set forth patent eligible subject matter under the 2019 Revised Guidance. Therefore, withdrawal of the § 101 rejections of independent claims 1, 10, and 19, and all claims depending respectively therefrom, is requested.” As argued previously, rejections of claim 1 under 35 U.S.C. 101 have not been withdrawn. 101 rejections for substantially similar independent claims 10 and 19 are maintained under similar reasoning. Thus, no dependent claim rejections are withdrawn on these grounds.
102 / 103 Rejections On pages 14-15 of the instant remarks, the Applicant argues that Husain and Wilson fail to disclose identifying one or more of the candidate neural networks associated with a prior neural architecture search involving a combination of the dataset type and the task type, as set forth in claim 1: “Independent Claim 1 Claim 1 sets forth an apparatus including means for identifying one or more of the candidate neural networks associated with a prior neural architecture search involving a combination of the dataset type and the task type. The alleged Husain/Wilson combination fails to teach or suggest such a computer-implemented method. Husain mentions adjusting automated neural network generation based on evaluation of candidate neural networks (see Abstract). Husain also mentions that "each epoch of the genetic algorithm may produce a particular number of candidate neural networks based on crossover and mutation operations that are performed on the candidate neural networks of a preceding epoch" (see para. [0005]). However, Husain does not teach or suggest an apparatus including means for identifying one or more of the candidate neural networks associated with a prior neural architecture search involving a combination of the dataset type and the task type, as set forth in claim 1. Wilson does not supply the elements missing from Husain. Wilson mentions systems and methods for constructing and applying synaptic networks (see Abstract). Wilson also mentions "generating recommendations for users based on learned relationships between nodes of a synaptic network where the nodes represent users, items, and attributes that describe the users and items" (see para. [0008]). However, Wilson does not teach or suggest an apparatus including means for identifying one or more of the candidate neural networks associated with a prior neural architecture search involving a combination of the dataset type and the task type, as set forth in claim 1.
Because each of Husain and Wilson is missing the same elements of claim 1, the alleged Husain/Wilson combination is missing those same elements of claim 1. Therefore, the Husain/Wilson combination fails to establish a prima facie case of obviousness of the apparatus of claim 1. Withdrawal of the § 103 rejections of claim 1 and all claims dependent thereon is respectfully requested.

Independent Claim 10

Claim 10 sets forth an apparatus to identify candidate networks, including at least one processor circuit to be programmed by the machine readable instructions to identify one or more of the candidate neural networks associated with a prior neural architecture search involving a combination of the dataset type and the task type. As described in connection with claim 1, Husain does not teach or suggest the apparatus of claim 10. Thus, withdrawal of the § 102 rejections of claim 10 and all claims dependent thereon is respectfully requested.

Independent Claim 19

Claim 19 sets forth a non-transitory machine readable storage medium comprising instructions that, when executed, cause processor circuitry to at least identify one or more of the candidate neural networks associated with a prior neural architecture search involving a combination of the dataset type and the task type. As described in connection with claim 1, the Husain combination does not teach or suggest the instructions of claim 19. Thus, withdrawal of the § 103 rejections of claim 19 and all claims dependent thereon is respectfully requested.”

Regarding the Applicant’s arguments above, the Examiner respectfully disagrees. Husain discloses a classifier which labels candidate neural networks during the architecture search (Husain, [0069]). These labels can encompass many types of information, including, among many other types of labels, a type of data used by the network, a type of (task) analysis to be performed by the network, or a combination of these different types of information (Husain, [0070]).
This classifier is trained using these labels to produce labels of the same information type(s) during inference (Husain, [0068]). In other words, the classifier can be trained based on a combination of task type and dataset type information. During the architecture search, the trained classifier is used to determine a set of candidate networks which are expected to provide reliable and/or high-performing results (Husain, [0004]). The Examiner contends that Husain fully discloses all limitations of amended claim 1, as detailed further in the § 102 rejections section. Similar reasoning applies to independent claims 10 and 19. Thus, no rejections are withdrawn on this basis.

Conclusion

The prior art made of record and not relied upon is considered pertinent to the applicant's disclosure:

Tan et al. (Compound model scaling for neural networks, published 7/23/2020, US 2020/0234132 A1) discloses a method of modifying a baseline architecture obtained from remote data sources to identify a series of candidate network architectures through benchmark performance tests.

Jia et al. (MANAGING MACHINE LEARNING FEATURES, published 4/1/2021, US 2021/0097329 A1) teaches a method of gauging feature importance for machine learning models and categorizing data by feature performance ranges.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Aaron P. Gormley, whose telephone number is (571) 272-1372. The examiner can normally be reached Monday - Friday, 12:00 PM - 8:00 PM EST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michelle T. Bechtold, can be reached at (571) 431-0762. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AG/
Examiner, Art Unit 2148

/MICHELLE T BECHTOLD/
Supervisory Patent Examiner, Art Unit 2148

Prosecution Timeline

Jun 23, 2022
Application Filed
Aug 01, 2022
Response after Non-Final Action
Aug 27, 2025
Non-Final Rejection — §101, §102, §103
Nov 14, 2025
Applicant Interview (Telephonic)
Nov 14, 2025
Examiner Interview Summary
Dec 01, 2025
Response Filed
Jan 13, 2026
Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585955
Minimal Trust Data Sharing
2y 5m to grant Granted Mar 24, 2026
Patent 12579440
Training Artificial Neural Networks Using Context-Dependent Gating with Weight Stabilization
2y 5m to grant Granted Mar 17, 2026
Based on the 2 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
60%
Grant Probability
0%
With Interview (-60.0%)
4y 4m
Median Time to Grant
Moderate
PTA Risk
Based on 5 resolved cases by this examiner. Grant probability derived from career allow rate.
