Prosecution Insights
Last updated: April 19, 2026
Application No. 17/649,277

NEURAL NETWORK SYNTHESIZER

Non-Final OA: §101, §103, §112
Filed: Jan 28, 2022
Examiner: FACCENDA, GISEL GABRIELA
Art Unit: 2127
Tech Center: 2100 — Computer Architecture & Software
Assignee: Arm Limited
OA Round: 3 (Non-Final)
Grant Probability: 56% (Moderate)
OA Rounds: 3-4
To Grant: 3y 11m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 56% (9 granted / 16 resolved; +1.3% vs TC avg)
Interview Lift: +49.2% (strong; allowance among resolved cases with interview vs without)
Avg Prosecution: 3y 11m (typical timeline; 24 applications currently pending)
Total Applications: 40 (career history, across all art units)

Statute-Specific Performance

§101: 34.9% (-5.1% vs TC avg)
§103: 35.4% (-4.6% vs TC avg)
§102: 8.4% (-31.6% vs TC avg)
§112: 21.3% (-18.7% vs TC avg)
Tech Center averages are estimates • Based on career data from 16 resolved cases

Office Action

Rejections: §101, §103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Response to Amendment

This office action is responsive to the amendment filed on 05/13/2025. As directed by the amendments, claims 1-20 are pending and stand rejected. Claims 1, 4, 5, 13, and 16 are amended. Claims 3, 19, and 20 are canceled. Claims 21 and 22 are new.

Regarding Objection to the Drawings: Applicant's amendment to the specification, see pp. 2-3, filed 05/13/2025, with respect to the objection to the drawings because they include reference characters 502, 504, 506, 508, 512, and 516 not mentioned in the description, has been fully considered and is persuasive. The objection to the drawings has been withdrawn.

Regarding Claim Objection: Applicant's amendment to the claims, see pg. 7, filed 05/13/2025, with respect to claim 16 being objected to due to minor informalities, has been fully considered and is persuasive. The objection to claim 16 has been withdrawn.

Response to Arguments

Regarding Claim Rejections Under 35 U.S.C. § 112: Applicant's arguments, see pg. 11, filed 05/13/2025, with respect to claims 13 and 19 rejected under 35 U.S.C. § 112, have been fully considered and are persuasive. The rejection of claims 13 and 19 under 35 U.S.C. § 112 has been withdrawn.

Regarding Rejections Under 35 U.S.C. § 101: Applicant's arguments, see pp. 11-13, filed on 05/13/2025, have been fully considered but are not persuasive.

APPLICANT ARGUMENT: Applicant argues, "currently amended claim 1 provides a solution in which functionally equivalent network components from a set of trained neural networks are identified and evaluated, then composed in dependence on their respective performance to synthesize a neural network in accordance with a given set of constraints. In this way, a novel neural network may be obtained for performing a given task using specific hardware, without needing to train the novel neural network from scratch... In view of the above amendments, it is respectfully submitted that currently amended claim 1 should be found patent eligible for at least the following reasons: (i) the claim recites feature(s) that are not merely mathematical, mental or observational aspects, e.g. synthesizing a neural network that is constrained to run on the specific hardware of the given device; therefore, the claim does not recite abstract ideas; (ii) the claim integrates any alleged abstract ideas into a practical application, e.g., enabling the given device to perform the task and/or improving the functioning of the given device to perform the task; (iii) the claim recites significantly more than any alleged abstract idea judicial exception. It is further submitted that dependent claims 4-18, and new independent claims 21 and 22, should be found patent eligible for at least the same reasons as claim 1."

EXAMINER RESPONSE: The examiner respectfully disagrees; Applicant's argument is not persuasive. Amended independent claim 1 is rejected under 35 U.S.C. § 101 because the claim recites the limitations of: inspecting the trained neural networks in the set to identify a plurality of functional blocks, each identified functional block being common to at least some of the trained neural networks in the set, wherein inspecting the trained neural networks in the set comprises: comparing activations of network layers between the trained neural networks when processing a common test data item; and identifying contiguous groups of layers within at least some of the trained neural networks having consistently alike input activations and output activations to one another; for each identified functional block: extracting a respective network component for implementing the identified functional block within each of at least some of the trained neural networks; and for each extracted network component: evaluating performance of the network component when processing the test data; and composing a plurality of network components in accordance with the stored configuration data and in dependence on the performance data and the given set of constraints, thereby to synthesize a neural network for performing said task with the given device. These limitations recite a mental process of evaluation that can be performed entirely in the human mind with the aid of pen and paper. For example, a human can inspect a set of neural networks to identify a plurality of functional blocks common to a plurality of neural networks, compare outputs of layers, identify contiguous groups of layers, identify functional blocks, extract network components for implementing the functional blocks, evaluate performance of the network components, and compose network components by selecting elements or parameters for a machine learning model.

Therefore, the additional elements of: a computer-implemented method comprising: obtaining a set of trained neural networks for performing a common task and test data for evaluating the performance of the trained neural networks in the set when performing said task; storing performance data indicating said performance of the network component when processing the test data; storing configuration data indicating a configuration of the identified plurality of common functional blocks within said plurality of the trained neural networks; and receiving a request to synthesize a neural network for performing said task subject to a given set of constraints, wherein the given set of constraints is based at least in part on specific hardware of a given device, as disclosed above, alone or in combination with the abstract idea, are not sufficient to amount to significantly more than the judicial exception, as they are mere insignificant extra-solution activity combined with generic computer functions implemented with generic computer elements at a high level of generality to perform the abstract idea identified above. Although the applicant asserts that the "claims recites feature(s) that are not merely mathematical, mental or observational aspects", when viewed as a whole, amended claim 1 recites the abstract ideas of inspecting a model, comparing activation layers of a model, identifying contiguous groups of layers, identifying functional blocks, extracting network components, and composing network components to synthesize a neural network.
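For readers less familiar with the subject matter, the activation-comparison step at the heart of the claim can be made concrete with a minimal sketch. This is only an illustrative assumption of how such a comparison could be coded (PyTorch, toy two-layer models, cosine similarity as the "alikeness" measure); none of it comes from the application or the cited references.

```python
# Illustrative sketch only: comparing activations of network layers between two
# trained neural networks when processing a common test data item.
# The helper names, toy models, and cosine-similarity measure are assumptions.
import torch
import torch.nn as nn

def capture_activations(model: nn.Sequential, x: torch.Tensor) -> list[torch.Tensor]:
    """Run one test item through the model, recording each layer's output."""
    acts = []
    with torch.no_grad():
        h = x
        for layer in model:
            h = layer(h)
            acts.append(h.flatten())
    return acts

def activation_similarity(a: torch.Tensor, b: torch.Tensor) -> float:
    """Cosine similarity of two activation vectors, truncated to the shorter
    length for simplicity (a real system would need a principled alignment)."""
    n = min(a.numel(), b.numel())
    return torch.nn.functional.cosine_similarity(a[:n], b[:n], dim=0).item()

# Two independently trained networks for the same task (toy stand-ins).
net_a = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
net_b = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))

test_item = torch.randn(8)  # the common test data item
acts_a = capture_activations(net_a, test_item)
acts_b = capture_activations(net_b, test_item)

for i, (a, b) in enumerate(zip(acts_a, acts_b)):
    print(f"layer {i}: cross-network similarity {activation_similarity(a, b):+.3f}")
```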
Further, amended claim 1 as a whole does not integrate the exception into a practical application under the second prong of the two-prong analysis, since the claimed invention does not improve the functioning of a computer or improve another technology or technical field. Rather, the claim recites the additional elements of: a computer-implemented method comprising: obtaining a set of trained neural networks for performing a common task and test data for evaluating the performance of the trained neural networks in the set when performing said task; storing performance data indicating said performance of the network component when processing the test data; storing configuration data indicating a configuration of the identified plurality of common functional blocks within said plurality of the trained neural networks; and receiving a request to synthesize a neural network for performing said task subject to a given set of constraints, wherein the given set of constraints is based at least in part on specific hardware of a given device. These elements merely recite the words "apply it" (or an equivalent), as discussed in MPEP § 2106.05(f), and add insignificant extra-solution activity to the judicial exception, as discussed in MPEP § 2106.05(g); the courts have likewise identified such limitations as not integrating a judicial exception into a practical application (see MPEP 2106.04(d)(1)). Independent claims 21 and 22 recite features similar to those of claim 1, so the same rationale applies. Further, dependent claims 4-18 are also rejected under 35 USC § 101 (see the detailed Claim Rejections - 35 USC § 101 section below).

Regarding Rejections Under 35 U.S.C. § 103: Applicant's arguments, see pp. 13-17, filed on 05/13/2025, have been fully considered but are not persuasive.

APPLICANT ARGUMENT #1: Applicant argues, pp. 13-15, "The search strategy proposed by YU is completely different from inspecting trained neural networks to identify common functional blocks, which are portions of the neural networks that perform a common function, or in other words portions that produce substantially consistent outputs for a given input. Such functional blocks may be identified by analyzing the activations of network layers when processing test data. To more clearly reflect this distinction, claim 1 is currently amended to recite: 'wherein inspecting the trained neural networks in the set comprises: comparing activations of network layers between the trained neural networks when processing a common test data item; and identifying contiguous groups of layers within at least some of the trained neural networks having consistently alike input activations and output activations to one another.' A functional block that is identified as common to a subset of the neural networks may be implemented by different network components within the various neural networks in the subset. In other words, different neural networks may use different components to implement a given function. In YU, the subsets of the shared set of parameters are not selected on the basis of performing a common function but are merely selected from a common search space. Accordingly, selecting from the shared set of parameters cannot be equated to 'extracting a respective network component for implementing the identified functional block within each of at least some of the trained neural networks' as recited in currently amended claim 1."
"In summary, YU does not disclose at least the following limitations of currently amended claim 1: (i) inspecting the trained neural networks in the set to identify a plurality of functional blocks, each identified functional block being common to at least some of the trained neural networks in the set; (ii) wherein inspecting the trained neural networks in the set comprises: comparing activations of network layers between the trained neural networks when processing a common test data item; and identifying contiguous groups of layers within at least some of the trained neural networks having consistently alike input activations and output activations to one another; (iii) [for each identified functional block] extracting a respective network component for implementing the identified functional block within each of at least some of the trained neural networks. None of the cited documents teaches or suggests the above distinguishing features of currently amended claim 1 over YU. In particular, paragraph 0058 of KIM discusses identifying shared layer groups that have similar structures 'by considering types and the number of respective layers of the layer groups, a kernel size, the number of input samples, the number of output samples, and the like'. However, determining shared layer groups based on static features of neural networks cannot be equated to identifying functional blocks, and indeed it is not possible to determine whether layer groups correspond to a common functional block by considering static features. This contrasts with currently amended claim 1, which identifies common functional blocks by analyzing activations when the neural networks are used to process test data. Accordingly, KIM does not disclose at least limitations (i) and (ii) above. It is therefore submitted that currently amended claim 1 is non-obvious in view of the cited documents. It is further submitted that new claims 21 and 22 are non-obvious at least for the same reasons as claim 1."

EXAMINER RESPONSE #1: The examiner respectfully disagrees. Amended claim 1 does not disclose how or what the functional block is. Amended claim 1 as presented does not specify what constitutes the functional block; rather, it discloses identification of the functional block by inspecting, and discloses how the inspection is performed. Further, amended claim 1 lacks any positive or clear recitation of what identified information or inspection result is the functional block. Thus, the claims are given their broadest reasonable interpretation, with the shared parameters being the functional block, since the shared parameters modify the structure of the neural network (MPEP 2111.01(III)). For the reasons stated above, YU [0024] teaches limitation (i) by disclosing that the "parameters" are selected from the shared set according to the neural networks' architecture; that is, the parameters are selected on the basis of performing a common function, since they depend on the architecture of the neural networks. Further, Yu teaches limitation (iii) in [0052], where each set of parameters (functional blocks) is a subset of a shared set of parameters (network component), and in [0056] by stating "each of the plurality of neural networks has parameters that are a subset of the shared set and each of the plurality of neural networks has a respective architecture selected from a search space of different architectures that is defined by a respective set of possible values for each of a plurality of architectural dimensions".
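The applicant's notion of a functional block (a contiguous span of layers whose boundary activations agree across networks, whatever happens inside the span) can likewise be sketched. The boundary heuristic and the 0.9 threshold below are illustrative assumptions only, not drawn from the claim or the references:

```python
# Hypothetical sketch: turning per-layer cross-network activation similarities
# into contiguous candidate "functional blocks".
def find_functional_blocks(sims: list[float], threshold: float = 0.9) -> list[tuple[int, int]]:
    """sims[i] is the cross-network similarity of layer i's output activation
    (e.g., from the previous sketch). Layers whose outputs match across the
    networks act as block boundaries; each candidate functional block is the
    span of layers between consecutive boundaries."""
    boundaries = [-1] + [i for i, s in enumerate(sims) if s >= threshold]
    blocks = []
    for lo, hi in zip(boundaries, boundaries[1:]):
        blocks.append((lo + 1, hi))  # layers lo+1 .. hi form one candidate block
    return blocks

# Example: layers 1 and 3 produce alike activations in both networks, so layers
# 0-1 and 2-3 are candidate blocks that may be implemented differently inside.
print(find_functional_blocks([0.42, 0.97, 0.51, 0.95]))
# -> [(0, 1), (2, 3)]
```

Note how this differs from a static-feature comparison (layer types, kernel sizes): the grouping here depends entirely on observed behavior on a common test item, which is the distinction the applicant presses.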
Although the applicant asserts that "determining shared layer groups based on static features of neural networks cannot be equated to identifying functional blocks, and indeed it is not possible to determine whether layer groups correspond to a common functional block by considering static features", the examiner would like to emphasize that the Kim reference was not applied to teach or suggest limitation (i). Rather, in the office action dated 02/18/2025, Kim was applied to teach limitation (ii), for which Kim paragraphs [0058] and [0052] teach comparing activations of network layers between the trained neural networks when processing a common test data item, and identifying contiguous groups of layers within at least some of the trained neural networks having consistently alike input activations and output activations to one another. Therefore, Yu and Kim in combination do disclose the limitations of amended claim 1. Amended claim 1 is obvious in view of the cited references. Claims 21 and 22 are similar to claim 1 and hence are obvious at least for the same reasons.

APPLICANT ARGUMENT #2: Applicant argues, pp. 15-17, that dependent claims 5, 6-9-12, and 17 depend from and therefore include the limitations of amended independent claim 1: "Accordingly, it is respectfully submitted that claim 5 is also patentable over the art of record for at least the reasons set forth above with respect to independent claim 1". Further, "Claim 4 was rejected under 35 U.S.C. 103 as being unpatentable over Yu, Kim in further view of Xue et al. US 20220198260 Al (hereinafter Xue). Claim 4 is cancelled".

EXAMINER RESPONSE #2: The examiner respectfully disagrees. Dependent claims 5, 6-9-12, and 17 depend from claim 1; therefore, the rejection of claim 1 is incorporated. Claims 5, 6-9-12, and 17 are not patentable over the art of record for at least the reasons set forth above with respect to independent claim 1. Furthermore, it appears there is a typographic error: claim 4 was not cancelled and is presented to the examiner for examination. Nonetheless, dependent claim 4 is not patentable over the art of record for at least the reasons set forth above with respect to independent claim 1.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-19 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more, and claim 20 is rejected under 35 U.S.C. 101 because the claimed invention is directed to a signal per se.

Step 1: Claims 1 and 4-18 are method-type claims. Claim 21 is a non-transitory storage media type claim. Claim 22 is a system-type claim. Therefore, claims 1, 4-18, and 21-22 are directed to either a process, machine, manufacture, or composition of matter.

Regarding claim 1:

2A Prong 1:

inspecting the trained neural networks in the set to identify a plurality of functional blocks, each identified functional block being common to at least some of the trained neural networks in the set... (mental process – inspecting a set of neural networks to identify a plurality of functional blocks common to a plurality of neural networks can be performed by the human mind with the aid of pen and paper (e.g., evaluation)).
comparing activations of network layers between the trained neural networks when processing a common test data item; and (mental process – comparing outputs of layers can be performed by the human mind with the aid of pen and paper (e.g., evaluation)).

identifying contiguous groups of layers within at least some of the trained neural networks having consistently alike input activations and output activations to one another; (mental process – identifying contiguous groups of layers can be performed by the human mind with the aid of pen and paper (e.g., evaluation)).

for each identified functional block: (mental process – identifying functional blocks can be performed by the human mind with the aid of pen and paper (e.g., evaluation)).

extracting a respective network component for implementing the identified functional block within each of at least some of the trained neural networks; and for each extracted network component: (mental process – extracting network components can be performed by the human mind with the aid of pen and paper (e.g., evaluation)).

evaluating performance of the network component when processing the test data; and (mental process – evaluating performance of the network component can be performed by the human mind with the aid of pen and paper (e.g., evaluation)).

composing a plurality of network components in accordance with the stored configuration data and in dependence on the performance data and the given set of constraints, thereby to synthesize a neural network for performing said task with the given device (mental process – composing network components by selecting elements or parameters for a machine learning model can be performed by the human mind with the aid of pen and paper (e.g., evaluation)).

2A Prong 2: This judicial exception is not integrated into a practical application. Additional elements:

A computer-implemented method comprising: (This is directed to using computers or other machinery merely as a tool to perform an existing process. See MPEP 2106.05(f)).

obtaining a set of trained neural networks for performing a common task and test data for evaluating the performance of the trained neural networks in the set when performing said task; (This is understood to be insignificant extra-solution activity to the judicial exception. See MPEP 2106.05(g)).

...wherein inspecting the trained neural networks in the set comprises: (This is directed to using computers or other machinery merely as a tool to perform an existing process. See MPEP 2106.05(f)).

storing performance data indicating said performance of the network component when processing the test data; (This is understood to be insignificant extra-solution activity to the judicial exception. See MPEP 2106.05(g)).

storing configuration data indicating a configuration of the identified plurality of common functional blocks within said plurality of the trained neural networks; (This is understood to be insignificant extra-solution activity to the judicial exception. See MPEP 2106.05(g)).

receiving a request to synthesize a neural network for performing said task subject to a given set of constraints, wherein the given set of constraints is based at least in part on specific hardware of a given device; and (This is understood to be insignificant extra-solution activity to the judicial exception. See MPEP 2106.05(g)).
The additional elements disclosed above, alone or in combination, do not integrate the judicial exception into a practical application, as they are mere insignificant extra-solution activity combined with generic computer functions implemented with generic computer elements at a high level of generality to perform the abstract idea identified above.

2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements:

A computer-implemented method comprising: (This is directed to using computers or other machinery merely as a tool to perform an existing process. See MPEP 2106.05(f)).

obtaining a set of trained neural networks for performing a common task and test data for evaluating the performance of the trained neural networks in the set when performing said task; (This is directed to the well-understood, routine activity of storing and retrieving information in memory. See MPEP 2106.05(d)(II)).

...wherein inspecting the trained neural networks in the set comprises: (This is directed to using computers or other machinery merely as a tool to perform an existing process. See MPEP 2106.05(f)).

storing performance data indicating said performance of the network component when processing the test data; (This is directed to the well-understood, routine activity of storing and retrieving information in memory. See MPEP 2106.05(d)(II)).

storing configuration data indicating a configuration of the identified plurality of common functional blocks within said plurality of the trained neural networks; (This is directed to the well-understood, routine activity of storing and retrieving information in memory. See MPEP 2106.05(d)(II)).

receiving a request to synthesize a neural network for performing said task subject to a given set of constraints, wherein the given set of constraints is based at least in part on specific hardware of a given device; and (This is directed to the well-understood, routine activity of receiving or transmitting data over a network. See MPEP 2106.05(d)(II)).

The additional elements disclosed above, in combination with the abstract idea, are not sufficient to amount to significantly more than the judicial exception, as they are mere insignificant extra-solution activity combined with generic computer functions implemented with generic computer elements at a high level of generality to perform the abstract idea identified above.

Regarding claim 21: Claim 21 is rejected under the same rationale as claim 1. Claim 21 only recites the additional element of "One or more non-transitory storage media comprising computer-readable instructions which, when executed by one or more processors, cause the one or more processors to carry out a method comprising...", which is directed to using computers or other machinery merely as a tool to perform an existing process. See MPEP 2106.05(f).

Regarding claim 22: Claim 22 is rejected under the same rationale as claim 1. Claim 22 only recites the additional element of "A system comprising: at least one processor; and at least one non-transitory storage media comprising computer-readable instructions which, when executed by the at least one processor, cause the at least one processor to carry out operations comprising...", which is directed to using computers or other machinery merely as a tool to perform an existing process. See MPEP 2106.05(f).

Regarding claim 4:

2A Prong 1: None.
2A Prong 2 and 2B: wherein comparing the activations of the network layers between the trained neural networks is performed on the basis of a random search or a grid search (This is directed to using computers or other machinery merely as a tool to perform an existing process. See MPEP 2106.05(f)).

Regarding claim 5:

2A Prong 1: None.

2A Prong 2 and 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements: wherein comparing the activations of the network layers between the trained neural networks uses meta-learning (This is directed to using computers or other machinery merely as a tool to perform an existing process. See MPEP 2106.05(f)).

Regarding claim 6:

2A Prong 1: None.

2A Prong 2: This judicial exception is not integrated into a practical application. Additional elements: further comprising, for at least one identified functional block, processing the extracted network components for implementing the functional block, using machine learning, to generate one or more further network components for implementing the functional block, wherein the composed plurality of network components includes at least one of the generated further network components (This is directed to using computers or other machinery merely as a tool to perform an existing process. See MPEP 2106.05(f)).

Regarding claim 7:

2A Prong 1: further comprising, for said at least one functional block: evaluating performance of the one or more further network components when processing the test data; and (mental process – evaluating performance of the network component can be performed by the human mind with the aid of pen and paper (e.g., evaluation)).

2A Prong 2 and 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements:

storing further performance data indicating said performance of the one or more further network components when processing the test data, (This is understood to be insignificant extra-solution activity to the judicial exception, see MPEP 2106.05(g), and is directed to the well-understood, routine activity of storing and retrieving information in memory, see MPEP 2106.05(d)(II)).

wherein the composing of the plurality of network components is further in dependence on the stored further performance data (This is understood to be insignificant extra-solution activity to the judicial exception, see MPEP 2106.05(g), and is further directed to the well-understood, routine activity of storing and retrieving information in memory, see MPEP 2106.05(d)(II)).

The additional elements disclosed above, alone or in combination with the abstract idea, are not sufficient to amount to significantly more than the judicial exception, as they are mere insignificant extra-solution activity combined with generic computer functions implemented with generic computer elements at a high level of generality to perform the abstract idea identified above.

Regarding claim 8:

2A Prong 1: None.

2A Prong 2 and 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements: wherein generating the one or more further network components is performed in response to receiving the request for the neural network (This is directed to using computers or other machinery merely as a tool to perform an existing process. See MPEP 2106.05(f)).

Regarding claim 9:

2A Prong 1: None.
2A Prong 2 and 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements: wherein the processing of the extracted network components using machine learning uses neural architecture search (This is directed to restricting the abstract idea to a particular technological environment. See MPEP 2106.05(h)).

Regarding claim 10:

2A Prong 1: None.

2A Prong 2 and 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements: wherein the processing of the extracted network components using machine learning comprises training a generative model to generate the further network components (This is directed to using computers or other machinery merely as a tool to perform an existing process. See MPEP 2106.05(f)).

Regarding claim 11:

2A Prong 1: None.

2A Prong 2 and 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements: wherein said training comprises adversarial training (This is directed to restricting the abstract idea to a particular technological environment. See MPEP 2106.05(h)).

Regarding claim 12:

2A Prong 1: None.

2A Prong 2 and 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements: wherein said processing of the extracted network components using machine learning uses knowledge distillation or model compression (This is directed to restricting the abstract idea to a particular technological environment. See MPEP 2106.05(h)).

Regarding claim 13:

2A Prong 1: None.

2A Prong 2 and 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements: wherein composing the plurality of network components comprises selecting a plurality of the extracted network components, using the stored performance data, for compliance with the given set of constraints (This is understood to be insignificant extra-solution activity to the judicial exception, see MPEP 2106.05(g), and is further directed to the well-understood, routine activity of storing and retrieving information in memory, see MPEP 2106.05(d)(II)). The additional elements disclosed above, alone or in combination with the abstract idea, are not sufficient to amount to significantly more than the judicial exception, as they are mere insignificant extra-solution activity combined with generic computer functions implemented with generic computer elements at a high level of generality to perform the abstract idea identified above.

Regarding claim 14:

2A Prong 1: None.

2A Prong 2 and 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements: wherein the given set of constraints includes at least one of an accuracy constraint, a memory constraint, a processing operation constraint, an execution time constraint, a latency constraint, and an energy consumption constraint (The specification of data to be stored is understood to be a field-of-use limitation. See MPEP 2106.05(h)).
Regarding claim 15:

2A Prong 1: wherein the request indicates an order of priority for the given set of constraints; and (mental process – indicating the order of priority for a request can be performed by the human mind with the aid of pen and paper (e.g., judgment)). the composing of the plurality of network components is dependent on the indicated order of priority for the given set of constraints (mental process – composing the network components based on the indicated order of priority can be performed by the human mind with the aid of pen and paper (e.g., evaluation)).

2A Prong 2 and 2B: None.

Regarding claim 16:

2A Prong 1: inspecting the trained neural networks in the one or more further sets to identify that at least some of the plurality of functional blocks are common to at least some of the trained neural networks in the first set and the one or more further sets, (mental process – inspecting a set of neural networks to identify a plurality of functional blocks common to a plurality of neural networks can be performed by the human mind with the aid of pen and paper (e.g., evaluation)).

2A Prong 2 and 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements:

wherein the set of trained neural networks is a first set, the method further comprising: (This is directed to restricting the abstract idea to a particular technological environment. See MPEP 2106.05(h)).

obtaining one or more further sets of trained neural networks, the trained neural networks in each further set configured to perform a respective further common task; and (This is understood to be insignificant extra-solution activity to the judicial exception, see MPEP 2106.05(g), and is further directed to the well-understood, routine activity of storing and retrieving information in memory, see MPEP 2106.05(d)(II)).

wherein the composed plurality of network components includes at least one network component derived from the trained neural networks in the one or more further sets (This is understood to be insignificant extra-solution activity to the judicial exception, see MPEP 2106.05(g), and is further directed to the well-understood, routine activity of storing and retrieving information in memory, see MPEP 2106.05(d)(II)).

The additional elements disclosed above, alone or in combination with the abstract idea, are not sufficient to amount to significantly more than the judicial exception, as they are mere insignificant extra-solution activity combined with generic computer functions implemented with generic computer elements at a high level of generality to perform the abstract idea identified above.

Regarding claim 17:

2A Prong 1: wherein the request for a neural network is a first request, the method further comprising: (mental process – stating that the request is a first request can be performed by the human mind with the aid of pen and paper (e.g., opinion)). composing a further plurality of network components, thereby to synthesize a further neural network in accordance with the received request (mental process – composing a further plurality of network components can be performed by the human mind with the aid of pen and paper (e.g., evaluation)).
2A Prong 2 and 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements: receiving a further request for a neural network for performing a further task subject to a further given set of constraints, wherein the trained neural networks in the one or more further sets are configured to perform said further task; and (This is understood to be insignificant extra-solution activity to the judicial exception, see MPEP 2106.05(g), and is directed to the well-understood, routine activity of receiving or transmitting data over a network, see MPEP 2106.05(d)(II)). The additional elements disclosed above, alone or in combination with the abstract idea, are not sufficient to amount to significantly more than the judicial exception, as they are mere insignificant extra-solution activity combined with generic computer functions implemented with generic computer elements at a high level of generality to perform the abstract idea identified above.

Regarding claim 18:

2A Prong 1: None.

2A Prong 2 and 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements: further comprising training the synthesized neural network using machine learning (This is directed to using computers or other machinery merely as a tool to perform an existing process. See MPEP 2106.05(f)).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 13-16, 18, and 21-22 are rejected under 35 U.S.C. 103 as being unpatentable over YU et al. US 2022/0405579 A1 (hereinafter Yu) in further view of KIM et al. US 2021/0264237 A1 (hereinafter Kim).

Regarding claim 1: Yu teaches A computer-implemented method comprising: (Yu [0018] teaches a computer-implemented method). obtaining a set of trained neural networks for performing a common task and test data for evaluating the performance of the trained neural networks in the set when performing said task; (Yu [0021] teaches obtaining training data (test data) that includes a "set of neural network inputs and, for each network input, a respective target output that should be generated by the neural network to perform the particular task". Furthermore, Yu [0024] teaches a plurality of neural networks (trained neural networks), and [0026] teaches a system that can determine a neural network that performs the machine learning task within a specified set of resource constraints).
inspecting the trained neural networks in the set to identify a plurality of functional blocks, each identified functional block being common to at least some of the trained neural networks in the set, wherein inspecting the trained neural networks in the set comprises: (Yu [0024] teaches a plurality of neural networks that each have architectures and teaches "each of the neural networks has parameters (functional blocks) that are a subset of the shared set, with different neural networks having different subsets of the shared set").

for each identified functional block: extracting a respective network component for implementing the identified functional block within each of at least some of the trained neural networks; and (Yu [0052] teaches that each set of parameters (functional blocks) is a subset of a shared set of parameters (network component), and [0056] teaches "each of the plurality of neural networks has parameters that are a subset of the shared set and each of the plurality of neural networks has a respective architecture selected from a search space of different architectures that is defined by a respective set of possible values for each of a plurality of architectural dimensions").

for each extracted network component: evaluating performance of the network component when processing the test data; and (Yu [0070] teaches that in order to "determine the respective performance benchmark for a neural network, the system can determine the accuracy or other appropriate performance measure for the machine learning task on a data set").

receiving a request to synthesize a neural network for performing said task subject to a given set of constraints, wherein the given set of constraints is based at least in part on specific hardware of a given device; and (To clarify, Yu [0026] teaches "That is, the resource constraints 130 specify constraints on how many computational resources are consumed by the neural network when performing the task when deployed on a target set of hardware devices", and Yu [0027] teaches that the system can receive an input (request) from a user of the system that specifies the set of resource constraints, or can automatically determine the set of resource constraints based on computational resources that are available to the system. Further, [0032] teaches that the system can select a neural network that satisfies the constraints as the neural network to be used for performing the task).

composing a plurality of network components (Yu [0032] teaches selecting a neural network that satisfies the constraints as the neural network to be used for performing the task. Further, Yu [0059] teaches "the system can select a proper subset of the possible values for each of the architectural dimensions and then determines a performance benchmark for each combination of the proper subsets that yields a neural network that satisfies the constraints", thereby able to "identify a single architecture or a range of architectures that can be deployed effectively on edge devices for any given machine learning task" ([0008])).
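For intuition about the constraint-driven selection the examiner reads onto Yu [0026], [0032] and [0070], here is a minimal sketch: benchmark each candidate component, discard any that exceed the target device's resource limits, and keep the best performer. The Candidate record and its fields are hypothetical, not taken from Yu:

```python
# Hedged, illustrative sketch of benchmark-then-select under hardware
# constraints. All values and field names are invented for illustration.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    accuracy: float      # measured on the test data
    latency_ms: float    # measured on (or modeled for) the target hardware
    memory_mb: float

def select(candidates: list[Candidate],
           max_latency_ms: float, max_memory_mb: float) -> Candidate:
    """Keep only candidates within the device's resource limits,
    then return the most accurate of the feasible ones."""
    feasible = [c for c in candidates
                if c.latency_ms <= max_latency_ms and c.memory_mb <= max_memory_mb]
    if not feasible:
        raise ValueError("no candidate satisfies the hardware constraints")
    return max(feasible, key=lambda c: c.accuracy)

pool = [Candidate("block-A", 0.91, 12.0, 48.0),
        Candidate("block-B", 0.94, 35.0, 96.0),
        Candidate("block-C", 0.89, 8.0, 32.0)]
print(select(pool, max_latency_ms=20.0, max_memory_mb=64.0))  # -> block-A
```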
Yu does not disclose: comparing activations of network layers between the trained neural networks when processing a common test data item; and identifying contiguous groups of layers within at least some of the trained neural networks having consistently alike input activations and output activations to one another; storing performance data indicating said performance of the network component when processing the test data; storing configuration data indicating a configuration of the identified plurality of common functional blocks within said plurality of the trained neural networks; composing a plurality of network components in accordance with the stored configuration data and in dependence on the performance data and the given set of constraints, thereby to synthesize a neural network for performing said task with the given device. However, Kim teaches the following:

comparing activations of network layers between the trained neural networks when processing a common test data item; and

[Kim FIG. 5 image]

(Kim [0058] teaches "the neural network analysis module may determine similar layer groups by directly comparing the layer groups", such that it can identify layers having similar structures "by considering types and the number of respective layers of the layer groups, a kernel size, the number of input samples, the number of output samples, and the like". Furthermore, to clarify, [0061] teaches "the neural network analysis module 143 of FIG. 3 analyzes the first to third neural networks NN1 to NN3", and FIG. 5 shows the neural networks (NN1 to NN3) processing an input sample (common test data item). See FIG. 5 above, with emphasis on "input sample").

identifying contiguous groups of layers within at least some of the trained neural networks having consistently alike input activations and output activations to one another; (To clarify, Kim teaches neural networks having input activations and output activations; specifically, [0023] teaches the artificial neural network (ANN) having a structure in which artificial neurons are connected to process received signals and transmit the signals to other neurons, where the output of a neuron is referred to as an activation. Thus, a person skilled in the relevant art will recognize that the neural network described consists of both input and output activations, since the output of the neurons, the "activation", is a result of processing input activations through the layers of the ANN. Furthermore, Kim [0062] and FIG. 5 teach that the "neural network analysis module" (element 143) can group layers having structures that are "similar (alike) to those of reference neural networks from among the layers of the first neural network NN1, and thus may determine layer groups of the first neural network NN1").

storing performance data indicating said performance of the network component when processing the test data; (Kim [0039] teaches storing the sharing layer group in random access memory (RAM) or in the memory, and [0060] teaches "the neural network analysis module" can determine the sharing layer group having excellent performance, such as "a layer group capable of performing an operation with a smaller number of nodes or layers". Since the layer groups are stored in memory, the performance data determined could also be stored in memory).
storing configuration data indicating a configuration of the identified plurality of common functional blocks within said plurality of the trained neural networks; (Kim [0039] teaches storing the sharing layer group in RAM or in the memory. This implies that the configuration data associated with the layer group will also be stored in RAM).

composing a plurality of network components in accordance with the stored configuration data and in dependence on the performance data and the given set of constraints, thereby to synthesize a neural network for performing said task with the given device (Kim [0039] teaches how some layer groups are stored in RAM or in memory. This implies that their performance data and configuration data will also be stored within. Furthermore, Kim [0051] teaches that when the neural network device receives an operation request, it identifies the sharing layer groups forming the neural network required for the operation).

Kim is also in the same field of endeavor as Yu (neural network architectures). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the functionality of neural network components, as disclosed and taught by Kim, in the system taught by Yu, to yield the predictable result of improving the efficiency of the storage space of an electronic device (see Kim [0080]).

Regarding claim 21: Claim 21 is rejected under the same rationale as claim 1. Claim 21 only recites the additional element of "One or more non-transitory storage media comprising computer-readable instructions which, when executed by one or more processors, cause the one or more processors to carry out a method comprising...", for which Yu [0093] teaches "one or more computer programs, i.e., one or more modules of computer program instructions encoded on a tangible non-transitory storage medium for execution by, or to control the operation of, data processing apparatus".

Regarding claim 22: Claim 22 is rejected under the same rationale as claim 1. Claim 22 only recites the additional element of "A system comprising: at least one processor; and at least one non-transitory storage media comprising computer-readable instructions which, when executed by the at least one processor, cause the at least one processor to carry out operations comprising...", for which Kim FIG. 4 teaches a system (element 300) with a processor (element 310), and Yu [0093] teaches "one or more computer programs, i.e., one or more modules of computer program instructions encoded on a tangible non-transitory storage medium for execution by, or to control the operation of, data processing apparatus".

Regarding claim 13: Yu and Kim teach the computer-implemented method of claim 1. Yu specifically teaches wherein composing the plurality of network components comprises selecting a plurality of the extracted network components, using the stored performance data, for compliance with the given set of constraints (Yu [0032] teaches that the system selects a neural network that has a proper subset of the shared set of parameters (network components) and that satisfies the constraints as the neural network to be used for performing the task).
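The composing step as characterized in this mapping (stored configuration data fixes the block order, stored performance data scores each extracted component, and the constraints bound the selection) might look like the following sketch. All component names, scores, and the cost budget are invented for illustration, and the greedy policy is just one possible composition strategy:

```python
# Hypothetical sketch of composing a network from stored configuration and
# performance data, subject to a budget-style constraint. Invented data only.

# configuration data: the functional blocks in execution order
config = ["feature_extractor", "encoder", "classifier_head"]

# performance data: (accuracy, cost) per extracted component, per block
perf = {
    "feature_extractor": {"fx_from_net1": (0.92, 5.0), "fx_from_net3": (0.95, 9.0)},
    "encoder":           {"enc_from_net2": (0.90, 4.0), "enc_from_net3": (0.93, 7.0)},
    "classifier_head":   {"head_from_net1": (0.96, 2.0)},
}

def synthesize(config, perf, cost_budget):
    """Greedily pick the most accurate component per block that keeps the
    running cost within the budget (one simple composition policy of many)."""
    network, spent = [], 0.0
    for block in config:
        ranked = sorted(perf[block].items(), key=lambda kv: -kv[1][0])
        for name, (acc, cost) in ranked:
            if spent + cost <= cost_budget:
                network.append(name)
                spent += cost
                break
        else:
            raise ValueError(f"no affordable component for block {block!r}")
    return network, spent

print(synthesize(config, perf, cost_budget=15.0))
# -> (['fx_from_net3', 'enc_from_net2', 'head_from_net1'], 15.0)
```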
Regarding claim 14: Yu and Kim teach the computer-implemented method of claim 1. Yu specifically teaches wherein the given set of constraints includes at least one of an accuracy constraint, a memory constraint, a processing operation constraint, an execution time constraint, a latency constraint, and an energy consumption constraint (Yu [0026] teaches that the given constraints can include "resource constraints" that specify how many computational resources will be consumed by the neural network when performing the task when deployed on a target set of hardware devices).

Regarding claim 15: Yu and Kim teach the computer-implemented method of claim 1. Yu specifically teaches wherein the request indicates an order of priority for the given set of constraints; and (Yu [0026] teaches that the given constraints can include "resource constraints" that "specify how many computational resources will be consumed by the neural network"; by specifying the computational resources, it is possible to indicate an order of priority). the composing of the plurality of network components is dependent on the indicated order of priority for the given set of constraints (Yu [0032] teaches that the system can select the neural network that has a proper subset of the shared set of parameters (network components) and that satisfies the constraints as the neural network to be used for performing the task).

Regarding claim 16: Yu and Kim teach the computer-implemented method of claim 1. Kim specifically teaches wherein the set of trained neural networks is a first set, the method further comprising: (Kim [0042] teaches a set of neural networks). obtaining one or more further sets of trained neural networks, the trained neural networks in each further set configured to perform a respective further common task; and (Kim [0023] teaches artificial neural networks that can learn to perform tasks according to predefined conditions, and [0042] teaches obtaining sets of trained neural networks NN1, NN2, ..., and NNn). inspecting the trained neural networks in the one or more further sets to identify that at least some of the plurality of functional blocks are common to at least some of the trained neural networks in the first set and the one or more further sets, (Kim [0036] teaches "network reconstruction module may receive and analyze the neural networks and thus determine a layer group...that the neural networks may commonly use"). wherein the composed plurality of network components includes at least one network component derived from the trained neural networks in the one or more further sets (Kim [0051] teaches that the relearning model can store the neural networks and the layer groups such that, when it receives an operation request, it can identify the layer groups forming the neural networks required for the operation).

Regarding claim 18: Yu and Kim teach the computer-implemented method of claim 1. Yu specifically teaches further comprising training the synthesized neural network using machine learning (Yu [0052] teaches training a neural network that satisfies a specific set of constraints such that it can be selected and deployed).

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Yu, Kim in further view of Cao et al. WO 202202793 A1 (hereinafter Cao). Regarding claim 5: Yu and Kim teach the computer-implemented method of claim 3. Yu and Kim do not disclose wherein comparing the activations o…

Prosecution Timeline

Jan 28, 2022
Application Filed
Feb 12, 2025
Non-Final Rejection — §101, §103, §112
May 13, 2025
Response Filed
Jun 02, 2025
Final Rejection — §101, §103, §112
Oct 28, 2025
Request for Continued Examination
Nov 01, 2025
Response after Non-Final Action
Dec 19, 2025
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12511538: HYBRID GRAPH-BASED PREDICTION MACHINE LEARNING FRAMEWORKS (granted Dec 30, 2025; 2y 5m to grant)
Patent 12450489: METHOD, SYSTEM AND APPARATUS FOR FEDERATED LEARNING (granted Oct 21, 2025; 2y 5m to grant)
Patent 12393863: Distributed Training Method and System, Device and Storage Medium (granted Aug 19, 2025; 2y 5m to grant)
Patent 12314852: METHOD FOR RECOMMENDING OBJECT, COMPUTING DEVICE AND COMPUTER-READABLE STORAGE MEDIUM (granted May 27, 2025; 2y 5m to grant)
Patent 12242970: Incremental cluster validity index-based offline clustering for machine learning (granted Mar 04, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 56%
With Interview: 99% (+49.2%)
Median Time to Grant: 3y 11m
PTA Risk: High
Based on 16 resolved cases by this examiner. Grant probability derived from career allow rate.
