Prosecution Insights
Last updated: April 18, 2026
Application No. 18/393,518

EDGE DEVICE DEVELOPMENT SUPPORT APPARATUS AND METHOD

Non-Final Office Action — §101, §102, §103

Filed: Dec 21, 2023
Examiner: BACA, MATTHEW WALTER
Art Unit: 2857
Tech Center: 2800 — Semiconductors & Electrical Systems
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
OA Round: 1 (Non-Final)
Grant Probability: 74% (Favorable)
Expected OA Rounds: 1-2
Avg. Time to Grant: 2y 11m
Grant Probability With Interview: 75%

Examiner Intelligence

Career Allow Rate: 74% — above average (83 granted / 113 resolved; +5.5% vs TC avg)
Interview Lift: +1.9% across resolved cases with interview (minimal, roughly a 2% lift)
Avg Prosecution: 2y 11m typical timeline (38 applications currently pending)
Total Applications: 151 across all art units

Statute-Specific Performance

§101: 20.6% (-19.4% vs TC avg)
§103: 43.6% (+3.6% vs TC avg)
§102: 13.1% (-26.9% vs TC avg)
§112: 22.1% (-17.9% vs TC avg)
Tech Center averages are estimates; based on career data from 113 resolved cases.
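The per-statute deltas listed above are mutually consistent: each rate/delta pair implies the same Tech Center baseline. A quick check, using only the figures shown on this dashboard (no external data):

```python
# Derive the implied Tech Center average from each statute's figures above.
# Rates and deltas are copied from the dashboard; the baseline is computed.
examiner_rate = {"§101": 20.6, "§102": 13.1, "§103": 43.6, "§112": 22.1}
delta_vs_tc = {"§101": -19.4, "§102": -26.9, "§103": +3.6, "§112": -17.9}

implied_tc_avg = {s: round(examiner_rate[s] - delta_vs_tc[s], 1)
                  for s in examiner_rate}
print(implied_tc_avg)  # every statute implies the same 40.0% baseline
```

That every statute backs out to the same ~40% baseline suggests the four deltas were computed against a single Tech Center average rather than per-statute cohorts.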

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Information Disclosure Statement
The information disclosure statement (IDS) submitted on 12/21/2023 was in compliance with the provisions of 37 CFR 1.97. Accordingly, the IDS is being considered by the examiner.

Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention in each of these claims is directed to the abstract idea judicial exception without significantly more. Claim 1, substantially representative also of independent claim 11, recites: “[a]n edge device development support apparatus comprising: a user interface unit configured to provide a user interface; and a processor configured to execute an artificial intelligence model on hardware to be used in an edge device, estimate performance of the hardware, calculate a cost of the hardware that is incurred by utilizing the hardware, then select hardware according to the performance and the cost, and output the selected hardware through the user interface.” The claim limitations considered to fall within the abstract idea are highlighted in bold font above, and the remaining features are “additional elements.” Step 1 of the subject matter eligibility analysis entails determining whether the claimed subject matter falls within one of the four statutory categories of patentable subject matter identified by 35 U.S.C. 101: process, machine, manufacture, or composition of matter.
Claim 1 recites an apparatus and claim 11 recites a method, and each therefore falls within a statutory category. Step 2A, Prong One of the analysis entails determining whether the claim recites a judicial exception such as an abstract idea. Under a broadest reasonable interpretation, the highlighted portions of claim 1 fall within the abstract idea judicial exception. Specifically, under the 2019 Revised Patent Subject Matter Eligibility Guidance, the highlighted subject matter falls within the mental processes category (including an observation, evaluation, judgment, or opinion). MPEP § 2106.04(a)(2). The recited functions, “estimate performance of the hardware, calculate a cost of the hardware that is incurred by utilizing the hardware, then select hardware according to the performance and the cost,” may be performed as mental processes. Estimating hardware performance may be performed via mental processes (e.g., evaluation of data relating to execution of the model and judgment). Calculating a cost of the hardware that is incurred by utilizing the hardware may be performed via mental processes (e.g., evaluation of cost factors such as price, or cost-related performance factors such as energy consumption, to form a judgment). Selecting hardware according to the performance and the cost may also be performed via mental processes (e.g., evaluation of performance and cost to form a judgment). Step 2A, Prong Two of the analysis entails determining whether the claim includes additional elements that integrate the recited judicial exception into a practical application. “A claim that integrates a judicial exception into a practical application will apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the judicial exception” (MPEP § 2106.04(d)).
MPEP § 2106.04(d) sets forth considerations to be applied in Step 2A, Prong Two for determining whether or not a claim integrates a judicial exception into a practical application. Based on the individual and collective limitations of claim 1 and applying a broadest reasonable interpretation, the most applicable of such considerations appear to include: improvements to the functioning of a computer, or to any other technology or technical field (MPEP 2106.05(a)); applying the judicial exception with, or by use of, a particular machine (MPEP 2106.05(b)); and effecting a transformation or reduction of a particular article to a different state or thing (MPEP 2106.05(c)). Regarding improvements to the functioning of a computer or other technology, none of the “additional elements,” including “a user interface unit configured to provide a user interface” and “a processor” configured to implement the method including “execute an artificial intelligence model on hardware to be used in an edge device” and “output the selected hardware through the user interface,” in any combination appear to integrate the abstract idea in a manner that technologically improves any aspect of a device or system that may be used to implement the highlighted steps, such as a signal processing device or a generic computer. Instead, the user interface represents computer system functionality (input/output) having no particularized relation to the processing steps, and the processor represents computer/program-instruction implementation of the steps falling within the judicial exception; therefore, these elements individually and/or in combination constitute insignificant extra-solution activity that neither integrates the judicial exception into a practical application nor results in the claim as a whole amounting to significantly more than the judicial exception.
In the context of claim 1, the step of executing an artificial intelligence model on hardware to be used in an edge device represents computer processing having no clear functional relation to the steps falling within the judicial exception. The execution of the AI model may potentially relate to the performance estimation, in which case it would constitute high-level data collection for the subsequent steps falling within the judicial exception, or may serve other purposes. Either way, the execution of the AI model on hardware to be used in an edge device constitutes extra-solution activity. The step of outputting the selected hardware represents computer functions having no particularized functional relation to the steps involved in selecting such hardware and therefore also represents extra-solution activity. Regarding application of the judicial exception with, or by use of, a particular machine, the additional elements are not configured or otherwise implemented in a particularized manner of implementing hardware performance/cost evaluation and hardware selection. Regarding a transformation or reduction of a particular article to a different state or thing, claim 1 does not include any such transformation or reduction. Instead, claim 1 as a whole entails generating input information applicable to hardware performance/cost and applying standard, processor-implemented processing techniques to the information to determine and output hardware selection information, with the additional elements failing to provide a meaningful integration of the abstract idea (estimating performance, calculating cost, and selecting hardware based on performance and cost) in an application that transforms an article to a different state. Instead, the additional elements represent extra-solution activity that does not integrate the judicial exception into a practical application.
In view of the various considerations encompassed by the Step 2A, Prong Two analysis, claim 1 does not include additional elements that integrate the recited abstract idea into a practical application. Therefore, claim 1 is directed to a judicial exception and requires further analysis under Step 2B. Regarding Step 2B, and as explained in the Step 2A, Prong Two analysis, the additional elements in claim 1 constitute extra-solution activity and therefore fail to result in the claim as a whole amounting to significantly more than the judicial exception, as well as failing to integrate the judicial exception into a practical application. Furthermore, the additional elements appear to be generic and well understood, as evidenced by the disclosures of Yazdanbakhsh (US 2023/0376664 A1) and Zhang (US 2024/0370693 A1), each of which teaches a substantially similar data processing apparatus for implementing edge device development. As explained in the grounds for rejecting claim 1 under § 102, Yazdanbakhsh teaches “a user interface unit configured to provide a user interface” and “a processor” configured to implement the method including “execute an artificial intelligence model on hardware to be used in an edge device” and “output the selected hardware through the user interface,” as does Zhang ([0118] hardware selection for edge devices; [0079] search system 100 includes user interface; [0157]-[0158] computer system for implementing method includes processor; [0012] machine learning models executed by hardware accelerator; FIG. 1 depicting outputting of accelerator architecture 150 from computer-implemented search system that per [0079] includes user interface). Therefore, the additional elements are insufficient to amount to significantly more than the judicial exception. Claim 1 is therefore not patent eligible under § 101.
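For orientation only, the sequence the rejection treats as the abstract idea — estimate performance, calculate a cost, then select hardware on both — reduces to a short filter-and-rank loop. The sketch below is illustrative, not the applicant's disclosed implementation; every candidate name, number, and weight is hypothetical:

```python
# Hypothetical sketch of the claim 1 sequence analyzed above: estimate
# performance, calculate cost, then select hardware on both criteria.
# Candidate names and all figures are invented for illustration.
candidates = {
    "board_a": {"latency_ms": 12.0, "unit_price": 35.0},
    "board_b": {"latency_ms": 30.0, "unit_price": 9.0},
    "board_c": {"latency_ms": 18.0, "unit_price": 15.0},
}

def select_hardware(candidates, max_latency_ms=25.0, w_perf=0.5, w_cost=0.5):
    """Filter by a preset performance requirement, then rank the survivors
    by a weighted performance/cost score (lower score is better)."""
    eligible = {n: c for n, c in candidates.items()
                if c["latency_ms"] <= max_latency_ms}
    return min(eligible,
               key=lambda n: w_perf * eligible[n]["latency_ms"]
                             + w_cost * eligible[n]["unit_price"])

print(select_hardware(candidates))  # board_c
```

Board b is dropped at the requirement gate (30 ms > 25 ms), and board c wins the weighted ranking over board a (16.5 vs 23.5) — the same filter-then-rank shape the Office Action characterizes as evaluation and judgment.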
Independent claim 11 includes substantially the same elements falling within the judicial exception as claim 1 and includes no significant additional elements that either integrate the judicial exception into a practical application or result in the claim as a whole amounting to significantly more than the judicial exception. Claim 11 therefore is also not patent eligible under § 101. Claims 2-10, depending from claim 1, and claims 12-20, depending from claim 11, provide additional features/steps that are part of an expanded algorithm that includes the abstract idea of claim 1 (Step 2A, Prong One). None of dependent claims 2-10 and 12-20 recites additional elements that integrate the abstract idea into a practical application (Step 2A, Prong Two), and all fail the “significantly more” test under Step 2B for substantially similar reasons as discussed with regard to claim 1. For example, claim 2, substantially representative also of claim 12, recites that the processor receives the artificial intelligence model through the user interface, which represents conventional computer processing structure and functionality and therefore constitutes extra-solution activity that neither integrates the judicial exception into a practical application nor results in the claim as a whole amounting to significantly more than the judicial exception. Claim 3, substantially representative also of claim 13, recites “wherein the AutoML searches for a structure of an artificial intelligence model that takes a hardware structure of the edge device into consideration,” which may be performed via mental processes (e.g., evaluation of AI model data and judgment) and therefore falls within the mental processes exception.
Claim 3 further recites “lightens or optimizes the artificial intelligence model,” which represents preparation of computer instructions for implementing the steps falling within the judicial exception and therefore constitutes extra-solution activity, with “in consideration of performance of the edge device” also falling within the mental processes exception (e.g., evaluation of edge device performance and judgment in determining the manner of adjusting the AI model). Claim 4, substantially representative also of claim 14, recites the function “determines whether the performance satisfies a preset performance requirement,” which may be performed via mental processes (e.g., evaluation and judgment) and therefore falls within the mental processes exception. The additional element “by reflecting an error in the performance estimation” represents formulation of output data merely reflecting a result of the processing step and having no particularized functional relation to the steps falling within the judicial exception, and therefore constitutes extra-solution activity that neither integrates the judicial exception into a practical application nor results in the claim as a whole amounting to significantly more than the judicial exception. Claim 5, substantially representative also of claim 15, recites “estimates the performance using a hardware structure and artificial intelligence model analysis” or “estimates the performance using a lookup table for performance estimation that is stored for each piece of hardware,” each of which may be performed via mental processes (e.g., evaluation and judgment) and therefore falls within the mental processes exception.
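The lookup-table alternative recited in claim 5 amounts to a keyed retrieval of a per-hardware stored estimate. A minimal sketch of that idea, with table keys and entries invented purely for illustration:

```python
# Minimal sketch of claim 5's lookup-table alternative: a performance
# estimate stored for each piece of hardware, keyed here by a model size
# class. All identifiers and values are hypothetical.
PERF_LUT = {
    # (hardware_id, model_size_class) -> estimated latency in ms
    ("npu_x", "small"): 4.2,
    ("npu_x", "large"): 21.0,
    ("mcu_y", "small"): 38.5,
}

def estimate_performance(hardware_id, model_size_class):
    """Return the stored estimate, or fail loudly when no entry exists."""
    try:
        return PERF_LUT[(hardware_id, model_size_class)]
    except KeyError:
        raise ValueError(f"no stored estimate for {hardware_id}/{model_size_class}")

print(estimate_performance("npu_x", "small"))  # 4.2
```

The point of the sketch is only that the "lookup table" limitation reads on a constant-time table read per hardware entry, with no modeling step at query time.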
Claim 6, substantially representative also of claim 16, recites “calculates a cost of the hardware whose performance satisfies the preset performance requirement among the hardware,” which may be performed via mental processes (e.g., evaluation and judgment) and therefore falls within the mental processes exception. Claim 7, substantially representative also of claim 17, recites “calculates the cost of the hardware,” which may be performed via mental processes (e.g., judgment) and therefore falls within the mental processes exception, and further recites “by reflecting preset cost conditions for each piece of hardware,” which represents high-level data gathering having no particularized functional relation to the steps falling within the judicial exception and therefore constitutes extra-solution activity that neither integrates the judicial exception into a practical application nor results in the claim as a whole amounting to significantly more than the judicial exception. Claim 8, substantially representative also of claim 18, recites “wherein the cost conditions include at least one of price, development difficulty, and hardware development support environment,” which characterizes the nature of the data processed by the steps falling within the judicial exception and therefore also falls within the judicial exception. Claim 9, substantially representative also of claim 19, recites “wherein the price is normalized based on prices for released hardware products, and the development difficulty and the hardware development support environment are defined by grade and normalized,” which falls within the mental processes judicial exception because normalizing may be performed via mental processes (e.g., calculating, possibly aided by pen and paper, an average price).
These steps also fall within the mathematical relations subcategory of the mathematical concepts judicial exception because normalization of product prices is fundamentally characterized by mathematical relations and calculations. Claim 10, substantially representative also of claim 20, recites “selects hardware by” reflecting a weight on at least one of the performance and the cost, which may be performed via mental processes (e.g., evaluation of performance and/or cost and judgment in selecting hardware) and therefore falls within the mental processes judicial exception. The additional element “reflecting a weight on at least one of the performance and the cost” represents characterization of the data processed by the step falling within the judicial exception and therefore also falls within the judicial exception.

Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 5-7, 11, and 15-17 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Yazdanbakhsh (US 2023/0376664 A1). As to claim 1, Yazdanbakhsh teaches “[a]n edge device development support apparatus (Abstract system for determining hardware accelerator apparatus; FIG.
2 search system for determining architecture of hardware accelerator; [0006] hardware accelerator may be implemented in edge device) comprising: a user interface unit configured to provide a user interface ([0126]-[0127] system may be implemented by a computer system having a user interface); and a processor ([0116] and [0121] computer system includes a processor) configured to execute an artificial intelligence model on hardware ([0071] hardware performance evaluation engine 140 evaluates performance (execution) of hardware accelerator configurations (proposed via hardware design policy 120) via simulated execution (e.g., simulating an instance of the hardware accelerator) of a neural network in performing machine learning tasks) to be used in an edge device ([0006] hardware accelerator may be implemented in edge device), estimate performance of the hardware ([0071] estimate latency of hardware implementing machine learning task), calculate a cost of the hardware that is incurred by utilizing the hardware ([0071] estimate hardware accelerator area; [0076] pre-evaluation criteria 130 used to pre-filter hardware accelerator designs based on pre-estimated area or power consumption), then select hardware according to the performance and the cost ([0004] and [0067] selection of hardware accelerator architecture based on cost factors (area or power consumption) and performance (latency)), and output the selected hardware (FIG. 1 hardware architecture search system 100 configured to output hardware accelerator architecture 150, [0078]) through the user interface ([0079] hardware architecture output to a user).” As to claim 5, Yazdanbakhsh teaches “[t]he edge device development support apparatus of claim 1, wherein the processor” estimates the performance by executing the artificial intelligence model on an actual hardware platform, “estimates the performance using a hardware structure and artificial intelligence model analysis (FIG.
1 search system 100 determines combined hardware/model performance measure 144 by receiving and processing hardware structure in terms of hardware architecture data 106 and further receives and processes AI model information in terms of neural network architecture data 108, training data 102, and validation data 104; [0070]-[0071] evaluation engine 140 evaluates performance of hardware for particular machine learning tasks of a particular neural network), estimates the performance using a lookup table for performance estimation that is stored for each piece of hardware ([0110] look-up table may be used to determine hardware performance metrics (e.g., power consumption)),” or estimates the performance using a machine learning model developed based on performance data obtained by executing the artificial intelligence model on actual hardware. As to claim 6, Yazdanbakhsh teaches “[t]he edge device development support apparatus of claim 1, wherein the processor calculates a cost of the hardware whose performance satisfies the preset performance requirement among the hardware ([0004] and [0067] selection of hardware accelerator architecture based on determined/calculated cost factors (area or power consumption) and performance (latency). Examiner notes that claim 6 does not require a step of actually determining whether the hardware satisfies preset performance requirements, such that the “satisfies” condition on the cost calculation may occur incidentally. Furthermore, even assuming a determination of the “satisfies” condition, the sequence of the condition being determined prior to the cost calculation is not established in claim 6; FIG. 1 search system 100 receives constraint data 110; [0065]-[0066] constraint data used to evaluate performance includes latency constraints).
” As to claim 7, Yazdanbakhsh teaches “[t]he edge device development support apparatus of claim 1, wherein the processor calculates the cost of the hardware by reflecting preset cost conditions for each piece of hardware (FIG. 1 search system 100 receives constraint data 110; [0066]-[0067] constraint data may include target area and/or power consumption associated with the hardware).” As to claim 11, Yazdanbakhsh teaches “[a]n edge device development support method (Abstract method for determining hardware accelerator apparatus; FIG. 2 search system for implementing method to determine architecture of hardware accelerator; [0006] hardware accelerator may be implemented in edge device) comprising: executing, by a processor ([0126]-[0127] system may be implemented by a computer system; [0116] and [0121] computer system includes a processor), an artificial intelligence model on hardware ([0071] hardware performance evaluation engine 140 evaluates performance (execution) of hardware accelerator configurations (proposed via hardware design policy 120) via simulated execution (e.g., simulating an instance of the hardware accelerator) of a neural network in performing machine learning tasks) to be used in an edge device ([0006] hardware accelerator may be implemented in edge device) and estimating performance of the hardware ([0071] estimate latency of hardware implementing machine learning task); calculating, by the processor, a cost of the hardware that is incurred by utilizing the hardware ([0071] estimate hardware accelerator area; [0076] pre-evaluation criteria 130 used to pre-filter hardware accelerator designs based on pre-estimated area or power consumption); and selecting, by the processor, hardware according to the performance and the cost ([0004] and [0067] selection of hardware accelerator architecture based on cost factors (area or power consumption) and performance (latency)).
As to claim 15, Yazdanbakhsh teaches “[t]he edge device development support method of claim 11, wherein, in the estimating of the performance of the hardware, the processor” estimates the performance by executing the artificial intelligence model on an actual hardware platform, “estimates the performance using a hardware structure and artificial intelligence model analysis (FIG. 1 search system 100 determines combined hardware/model performance measure 144 by receiving and processing hardware structure in terms of hardware architecture data 106 and further receives and processes AI model information in terms of neural network architecture data 108, training data 102, and validation data 104; [0070]-[0071] evaluation engine 140 evaluates performance of hardware for particular machine learning tasks of a particular neural network), estimates the performance using a lookup table for performance estimation that is stored for each piece of hardware ([0110] look-up table may be used to determine hardware performance metrics (e.g., power consumption)),” or estimates the performance using a machine learning model developed based on performance data obtained by executing the artificial intelligence model on actual hardware. As to claim 16, Yazdanbakhsh teaches “[t]he edge device development support method of claim 11, wherein, in the calculating of the cost of the hardware, the processor calculates a cost of the hardware whose performance satisfies the preset performance requirement among the hardware ([0004] and [0067] selection of hardware accelerator architecture based on determined/calculated cost factors (area or power consumption) and performance (latency). Examiner notes that claim 16 does not require a step of actually determining whether the hardware satisfies preset performance requirements, such that the “satisfies” condition on the cost calculation may occur incidentally.
Furthermore, even assuming a determination of the “satisfies” condition, the sequence of the condition being determined prior to the cost calculation is not established in claim 16; FIG. 1 search system 100 receives constraint data 110; [0065]-[0066] constraint data used to evaluate performance includes latency constraints).” As to claim 17, Yazdanbakhsh teaches “[t]he edge device development support method of claim 11, wherein, in the calculating of the cost of the hardware, the processor calculates the cost of the hardware by reflecting preset cost conditions for each piece of hardware (FIG. 1 search system 100 receives constraint data 110; [0066]-[0067] constraint data may include target area and/or power consumption associated with the hardware).”

Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 2-3 and 12-13 are rejected under 35 U.S.C. 103 as being unpatentable over Yazdanbakhsh in view of Gupta et al., “Accelerator-Aware Neural Network Design Using AutoML,” On-Device Intelligence Workshop, 3rd SysML Conference, Austin, TX, 2020. As to claim 2, Yazdanbakhsh teaches “[t]he edge device development support apparatus of claim 1, wherein the processor receives the artificial intelligence model (FIG.
1 search system 100 receives neural network architectural data 108, [0058]-[0059]) through the user interface unit ([0064] neural network architectural data may be received from user remotely or from local system) or receives data for a function (FIG. 1 search system 100 receives neural network architectural data 108 specifying AI tasks, [0062]) and training of the artificial intelligence model (FIG. 1 search system 100 receives neural network training data 102, [0063]) through the user interface unit ([0064] neural network architectural data and training data may be received from user remotely or from local system).” In [0059], Yazdanbakhsh teaches that the neural network architecture may be generated in any one of several known ways, including as documented by Gupta, which teaches an AutoML method for generating hardware-aware AI (neural network) designs to be implemented on hardware accelerators (Abstract; Figure 1; and page 4, Conclusion, describing and depicting the method, which per its title constitutes AutoML, for generating NN models to be executed on a hardware accelerator). It would have been obvious to one of ordinary skill in the art before the effective filing date to have applied Gupta’s teaching of using an AutoML technique/system for generating the AI model to be executed on a hardware accelerator to the apparatus taught by Yazdanbakhsh, such that in combination the apparatus is configured such that the processor generates the artificial intelligence model through AutoML. Such a combination is directly suggested by Yazdanbakhsh and would have amounted to selecting a known design option for generating an AI model to be implemented on a hardware accelerator to achieve predictable results. Furthermore, a particular motivation would have been to leverage an AutoML process to optimize the AI model for execution on particular hardware architectures as disclosed by Gupta.
As to claim 3, the combination of Yazdanbakhsh and Gupta teaches “[t]he edge device development support apparatus of claim 2,” and Gupta further teaches “wherein the AutoML searches for a structure of an artificial intelligence model that takes a hardware structure of the edge device into consideration (Figure 1 controller configured to search accelerator-aware NN search space to provide candidate models; page 2, Section 2, Methodology, paragraph beginning with “A typical neural architecture …”) and” lightens or “optimizes the artificial intelligence model in consideration of performance of the edge device (page 1, Section 1, Introduction, paragraphs beginning with “Meanwhile there is …” and “This accelerator-aware NAS …” describing the NN model as being built for estimating latency on target hardware to improve combined performance; Figure 1 depicting iterative process in which performance (latency and accuracy) is evaluated for each candidate model, including determination of combined performance (“Accelerator NN latency predictor”), and in which an objective function (“Multi-objective reward function”) provides feedback to optimize model design; page 2, Methodology, paragraph beginning with “A typical neural architecture …”).” It would have been obvious to one of ordinary skill in the art before the effective filing date to have applied Gupta’s teaching of using an iterative search/optimization process for searching for AI models while providing performance feedback to optimize combined model/hardware performance to the apparatus taught by Yazdanbakhsh as modified by Gupta, such that in combination the apparatus is configured such that the AutoML searches for a structure of an artificial intelligence model that takes a hardware structure of the edge device into consideration and optimizes the artificial intelligence model in consideration of performance of the edge device.
The motivation would have been to select/generate AI model characteristics having optimal performance characteristics based on a given hardware accelerator design as disclosed by Gupta. As to claim 12, Yazdanbakhsh teaches “[t]he edge device development support method of claim 11, wherein, in the estimating of the performance of the hardware, the processor receives the artificial intelligence model (FIG. 1 search system 100 receives neural network architectural data 108, [0058]-[0059]) through a user interface unit ([0064] neural network architectural data may be received from user remotely or from local system) or receives data for a function (FIG. 1 search system 100 receives neural network architectural data 108 specifying AI tasks, [0062]) and training of the artificial intelligence model (FIG. 1 search system 100 receives neural network training data 102, [0063]) through the user interface unit ([0064] neural network architectural data and training data may be received from user remotely or from local system).” In [0059], Yazdanbakhsh teaches that the neural network architecture may be generated in any one of several known ways, including as documented by Gupta, which teaches an AutoML method for generating hardware-aware AI (neural network) designs to be implemented on hardware accelerators (Abstract; Figure 1; and page 4, Conclusion, describing and depicting the method, which per its title constitutes AutoML, for generating NN models to be executed on a hardware accelerator). It would have been obvious to one of ordinary skill in the art before the effective filing date to have applied Gupta’s teaching of using an AutoML technique/system for generating the AI model to be executed on a hardware accelerator to the method taught by Yazdanbakhsh, such that in combination the method is configured such that the processor generates the artificial intelligence model through AutoML.
Such a combination is directly suggested by Yazdanbakhsh and would have amounted to selecting a known design option for generating an AI model to be implemented on a hardware accelerator to achieve predictable results. Furthermore, a particular motivation would have been to leverage an AutoML process to optimize the AI model for execution on particular hardware architectures, as disclosed by Gupta. As to claim 13, the combination of Yazdanbakhsh and Gupta teaches "[t]he edge device development support method of claim 12," and Gupta further teaches "wherein the AutoML searches for a structure of an artificial intelligence model that takes a hardware structure of the edge device into consideration (Figure 1 controller configured to search accelerator-aware NN search space to provide candidate models; page 2, 2 Methodology, paragraph beginning with "A typical neural architecture …") and" lightens or "optimizes the artificial intelligence model in consideration of performance of the edge device (page 1, 1 Introduction, paragraphs beginning with "Meanwhile there is …" and "This accelerator-aware NAS …" describing the NN model as being built for estimating latency on target hardware to improve combined performance; Figure 1 depicting an iterative process in which performance (latency and accuracy) is evaluated for each candidate model, including determination of combined performance ("Accelerator NN latency predictor"), and in which an objective function ("Multi-objective reward function") provides feedback to optimize model design; page 2, 2 Methodology, paragraph beginning with "A typical neural architecture …").
" It would have been obvious to one of ordinary skill in the art before the effective filing date to have applied Gupta's teaching of using an iterative search/optimization process for searching for AI models while providing performance feedback to optimize combined model/hardware performance to the method taught by Yazdanbakhsh as modified by Gupta, such that in combination the method is configured such that the AutoML searches for a structure of an artificial intelligence model that takes a hardware structure of the edge device into consideration and optimizes the artificial intelligence model in consideration of performance of the edge device. The motivation would have been to select/generate AI model characteristics having optimal performance characteristics based on a given hardware accelerator design, as disclosed by Gupta. Claims 4 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Yazdanbakhsh in view of Bastani (US 2020/0356881 A1). As to claim 4, Yazdanbakhsh teaches "[t]he edge device development support apparatus of claim 1, wherein the processor determines whether the performance satisfies a preset performance requirement (FIG. 1 search system 100 receives constraint data 110; [0065]-[0066] constraint data used to evaluate performance includes latency constraints)." Yazdanbakhsh does not appear to teach that the determination of whether a performance satisfies a preset requirement includes "reflecting an error in the performance estimation." Bastani discloses a method/system for analyzing performance parameters of substrates ([0006]) that includes reflecting an error in performance estimates (FIG. 8 "error bars" indicated/displayed for predictive residual reduction (PRR); [0085] error bars indicate a range of deviation of PRR values).
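As an editorial annotation: the "reflecting an error in the performance estimation" limitation at issue can be illustrated with a minimal sketch in which the estimate is treated as a range (estimate plus or minus an error band, analogous to Bastani's FIG. 8 error bars) rather than a point value. The function name, the conservative worst-case rule, and the numbers are all hypothetical, not from the claims or the references.

```python
# Illustrative only: checking whether an estimated performance value
# satisfies a preset requirement while reflecting the estimation error,
# i.e. requiring the worst end of the error band to pass, not just the
# point estimate. Names and values are hypothetical.

def satisfies_requirement(estimated_latency_ms: float,
                          estimation_error_ms: float,
                          max_latency_ms: float) -> bool:
    """Pass only if even the worst case of the error band meets the limit."""
    worst_case = estimated_latency_ms + estimation_error_ms
    return worst_case <= max_latency_ms
```

For example, under this sketch a 12 ms estimate with a 2 ms error band satisfies a 15 ms requirement (worst case 14 ms) but fails a 13 ms requirement, even though the point estimate alone would pass both.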
It would have been obvious to one of ordinary skill in the art before the effective filing date to have applied Bastani's teaching of reflecting/indicating an error in performance estimates to the apparatus taught by Yazdanbakhsh, such that in combination the processor determines whether the performance satisfies a preset performance requirement by reflecting an error in the performance estimation. The motivation for providing an indication of an error margin within which performance values may be determined would be to convey the potential relative error margins involved in assessing how precisely estimated performance values conform to predetermined performance requirements. As to claim 14, Yazdanbakhsh teaches "[t]he edge device development support method of claim 11, wherein, in the estimating of the performance of the hardware, the processor determines whether the performance satisfies a preset performance requirement (FIG. 1 search system 100 receives constraint data 110; [0065]-[0066] constraint data used to evaluate performance includes latency constraints)." Yazdanbakhsh does not appear to teach that the determination of whether a performance satisfies a preset requirement includes "reflecting an error in the performance estimation." Bastani discloses a method/system for analyzing performance parameters of substrates ([0006]) that includes reflecting an error in performance estimates (FIG. 8 "error bars" indicated/displayed for predictive residual reduction (PRR); [0085] error bars indicate a range of deviation of PRR values). It would have been obvious to one of ordinary skill in the art before the effective filing date to have applied Bastani's teaching of reflecting/indicating an error in performance estimates to the method taught by Yazdanbakhsh, such that in combination the processor determines whether the performance satisfies a preset performance requirement by reflecting an error in the performance estimation.
The motivation for providing an indication of an error margin within which performance values may be determined would be to convey the potential relative error margins involved in assessing how precisely estimated performance values conform to predetermined performance requirements. Claims 8-9 and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Yazdanbakhsh in view of Pati et al., "Impact of Inference Accelerators on Hardware Selection," arXiv:1910.03060v1, 7 Oct. 2019. As to claim 8, Yazdanbakhsh teaches "[t]he edge device development support apparatus of claim 7," but does not appear to teach "wherein the cost conditions include at least one of price, development difficulty, and hardware development support environment." Pati discloses a method for selecting AI accelerator configurations that accounts for costs (Abstract), including the price of hardware (page 2, 2 Methods, paragraphs beginning with "Our method involves …" and "In order to establish the cost …" describing hardware costs, including price). It would have been obvious to one of ordinary skill in the art before the effective filing date to have applied Pati's disclosure of hardware pricing as part of the hardware cost to be considered in determining optimal hardware/AI model configurations to the apparatus taught by Yazdanbakhsh, such that in combination the apparatus is configured to include price as a cost condition. The motivation would have been to account for monetary expenditures in selecting a hardware configuration, so as to minimize such expenditures in the selected hardware configuration, as disclosed by Pati.
As to claim 9, the combination of Yazdanbakhsh and Pati teaches "[t]he edge device development support apparatus of claim 8," and Pati further teaches "wherein the price is normalized based on prices for released hardware products (page 2, 2 Methods, paragraph beginning with "In order to establish the cost …" describing hardware costs, including a price that may be averaged (normalized) among multiple cloud providers of the hardware (the broadest reasonable interpretation of "released hardware products" entails released use of the hardware by the providers))." It would have been obvious to one of ordinary skill in the art before the effective filing date to have applied Pati's teaching of normalizing the price based on prices for released hardware products to the apparatus taught by Yazdanbakhsh as modified by Pati to include pricing as part of hardware costs, such that in combination the apparatus is configured to generate and/or use a price that is normalized based on prices for released hardware products. The motivation would have been to generate a more reliable overall estimate of hardware costs by attenuating price outliers, as suggested by Pati. As to claim 18, Yazdanbakhsh teaches "[t]he edge device development support method of claim 17," but does not appear to teach "wherein the cost conditions include at least one of price, development difficulty, and hardware development support environment." Pati discloses a method for selecting AI accelerator configurations that accounts for costs (Abstract), including the price of hardware (page 2, 2 Methods, paragraphs beginning with "Our method involves …" and "In order to establish the cost …" describing hardware costs, including price).
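As an editorial annotation: the price-normalization reading applied above (averaging a hardware price among multiple cloud providers) can be illustrated with a minimal sketch. The provider names and dollar figures are hypothetical, not from Pati.

```python
# Illustrative only: normalizing a hardware price against prices quoted
# by multiple providers of the released hardware, in the spirit of the
# averaging the rejection reads onto Pati's cost methodology. The
# provider names and per-hour prices below are hypothetical.
from statistics import mean

def normalized_price(provider_prices: dict[str, float]) -> float:
    """Average quoted prices across providers to attenuate outliers."""
    return mean(provider_prices.values())

quotes = {"provider_a": 2.40, "provider_b": 2.60, "provider_c": 2.00}
avg = normalized_price(quotes)  # mean of the three quotes
```

The design point the rejection relies on is visible here: a single provider's outlier quote moves the normalized figure only by its share of the average, which is the "attenuating price outliers" motivation stated above.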
It would have been obvious to one of ordinary skill in the art before the effective filing date to have applied Pati's disclosure of hardware pricing as part of the hardware cost to be considered in determining optimal hardware/AI model configurations to the method taught by Yazdanbakhsh, such that in combination the method is configured to include price as a cost condition. The motivation would have been to account for monetary expenditures in selecting a hardware configuration, so as to minimize such expenditures in the selected hardware configuration, as disclosed by Pati. As to claim 19, the combination of Yazdanbakhsh and Pati teaches "[t]he edge device development support method of claim 18," and Pati further teaches "wherein the price is normalized based on prices for released hardware products (page 2, 2 Methods, paragraph beginning with "In order to establish the cost …" describing hardware costs, including a price that may be averaged (normalized) among multiple cloud providers of the hardware (the broadest reasonable interpretation of "released hardware products" entails released use of the hardware by the providers))." It would have been obvious to one of ordinary skill in the art before the effective filing date to have applied Pati's teaching of normalizing the price based on prices for released hardware products to the method taught by Yazdanbakhsh as modified by Pati to include pricing as part of hardware costs, such that in combination the method is configured to generate and/or use a price that is normalized based on prices for released hardware products. The motivation would have been to generate a more reliable overall estimate of hardware costs by attenuating price outliers, as suggested by Pati. Claims 10 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Yazdanbakhsh in view of Davidi (US 2022/0147801 A1).
As to claim 10, Yazdanbakhsh teaches "[t]he edge device development support apparatus of claim 1," but does not appear to teach "wherein the processor selects hardware by reflecting a weight on at least one of the performance and the cost." Davidi discloses a method/system for determining hardware accelerator architectures to be used for AI acceleration (claim 1) that includes configuring/updating the hardware design, in part, by weighting performance parameters that also qualify as cost parameters ([0005] compute a performance metric by weighting performance profile parameters based on power, performance, and area (PPA) constraints, and update the hardware configuration based on the performance metric; claim 4 multiply each parameter of the performance profile by a weight based on the PPA constraints and update the hardware configuration based on a sum of performance metrics determined by multiplying performance profile parameters by a corresponding weight). It would have been obvious to one of ordinary skill in the art before the effective filing date to have applied Davidi's teaching of weighting performance and cost parameters as part of the hardware selection/configuration process to the apparatus taught by Yazdanbakhsh, such that in combination the processor selects hardware by reflecting a weight on at least one of the performance and the cost. The motivation would have been to enable more specific customization of combined hardware/modeling architectures that accounts for performance and/or cost factors in a more controllable manner, as suggested by Davidi. As to claim 20, Yazdanbakhsh teaches "[t]he edge device development support method of claim 11," but does not appear to teach "wherein, in the selecting of the hardware, the processor selects hardware by reflecting a weight on at least one of the performance and the cost.
" Davidi discloses a method/system for determining hardware accelerator architectures to be used for AI acceleration (claim 1) that includes configuring/updating the hardware design, in part, by weighting performance parameters that also qualify as cost parameters ([0005] compute a performance metric by weighting performance profile parameters based on power, performance, and area (PPA) constraints, and update the hardware configuration based on the performance metric; claim 4 multiply each parameter of the performance profile by a weight based on the PPA constraints and update the hardware configuration based on a sum of performance metrics determined by multiplying performance profile parameters by a corresponding weight). It would have been obvious to one of ordinary skill in the art before the effective filing date to have applied Davidi's teaching of weighting performance and cost parameters as part of the hardware selection/configuration process to the method taught by Yazdanbakhsh, such that in combination the processor selects hardware by reflecting a weight on at least one of the performance and the cost. The motivation would have been to enable more specific customization of combined hardware/modeling architectures that accounts for performance and/or cost factors in a more controllable manner, as suggested by Davidi. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to MATTHEW W. BACA, whose telephone number is (571) 272-2507. The examiner can normally be reached Monday - Friday, 8:00 am - 5:30 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Andrew Schechter, can be reached at (571) 272-2302. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /MATTHEW W. BACA/ Examiner, Art Unit 2857 /ANDREW SCHECHTER/ Supervisory Patent Examiner, Art Unit 2857

Prosecution Timeline

Dec 21, 2023
Application Filed
Mar 25, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12601701
MULTI-FREQUENCY SENSING SYSTEM AND METHOD
2y 5m to grant · Granted Apr 14, 2026
Patent 12585038
METHOD FOR OPERATING A METAL DETECTOR AND METAL DETECTOR
2y 5m to grant · Granted Mar 24, 2026
Patent 12551192
ULTRASONIC DIAGNOSTIC APPARATUS AND MEDICAL IMAGE PROCESSING APPARATUS
2y 5m to grant · Granted Feb 17, 2026
Patent 12504371
SYSTEM AND METHOD OF DYNAMIC MICRO-OPTICAL COHERENCE TOMOGRAPHY FOR MAPPING CELLULAR FUNCTIONS
2y 5m to grant · Granted Dec 23, 2025
Patent 12493093
REDUCTION OF OFF-RESONANCE EFFECTS IN MAGNETIC RESONANCE IMAGING
2y 5m to grant · Granted Dec 09, 2025
Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
74%
Grant Probability
75%
With Interview (+1.9%)
2y 11m
Median Time to Grant
Low
PTA Risk
Based on 113 resolved cases by this examiner. Grant probability derived from career allow rate.
