Prosecution Insights
Last updated: April 19, 2026
Application No. 18/239,759

NEURAL NETWORK OPTIMIZATION WITH PREVIEW MECHANISM

Non-Final OA: §101, §103
Filed
Aug 30, 2023
Examiner
SOMERS, MARC S
Art Unit
2159
Tech Center
2100 — Computer Architecture & Software
Assignee
MediaTek Inc.
OA Round
1 (Non-Final)
Grant Probability: 65% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 4y 0m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 65% (364 granted / 563 resolved; +9.7% vs TC avg)
Interview Lift: +34.6% (strong) among resolved cases with an interview
Typical Timeline: 4y 0m average prosecution; 36 applications currently pending
Career History: 599 total applications across all art units

Statute-Specific Performance

§101: 18.0% (-22.0% vs TC avg)
§103: 47.9% (+7.9% vs TC avg)
§102: 10.1% (-29.9% vs TC avg)
§112: 15.1% (-24.9% vs TC avg)

Based on career data from 563 resolved cases; the Tech Center average is an estimate.
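The "vs TC avg" deltas above can be read back into the implied Tech Center averages. A minimal sketch, assuming the delta is simply (examiner rate - TC average); the variable names and the formula are assumptions, not from the report:

```python
# Recover the implied Tech Center average for each statute, assuming
# "vs TC avg" = examiner rate - TC average (an assumption).
rates  = {"101": 18.0, "103": 47.9, "102": 10.1, "112": 15.1}   # examiner %
deltas = {"101": -22.0, "103": 7.9, "102": -29.9, "112": -24.9} # vs TC avg
tc_avg = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(tc_avg)  # each statute implies a TC average estimate of 40.0%
```

Under that reading, every statute pair implies the same 40.0% Tech Center average estimate, which suggests a single baseline figure is used for all four deltas.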

Office Action

Non-Final Office Action (CTNF): §101, §103

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement filed 9/13/2023 fails to comply with 37 CFR 1.98(b)(5) because it does not include the relevant pages of the publication and the publisher (which can be the conference or journal where the paper was introduced). It has been placed in the application file, but the information referred to therein has not been considered. None of the NPL references (1-12) have the required information as outlined in 37 CFR 1.98(b)(5).

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: "Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title."

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
With regard to claim 1:

Step 2A, Prong One: The claim recites the following limitations, which are drawn towards an abstract idea: A neural network optimization method with a preview mechanism, comprising: generating an updating signal according to a reference value corresponding to the multiple previewed results, and processing the optimization space received in the preview stage according to the reference value (recites mental process steps of evaluation/analysis and comparison to form a determination/judgement; see paragraph 17 for discussion of the updating signal to determine whether to continue the search/evaluation or to stop); and processing the optimization space received in the view stage according to the updating signal to generate an optimization result (recites mental process steps of evaluation and analysis to form a determination or selection of particular received data to be the configuration/data of choice). As seen from the above, the identified limitations recite concepts associated with an abstract idea, and thus the respective claim recites a judicial exception (see MPEP 2106.04(a)) and requires further analysis as discussed below.

Step 2A, Prong Two: The following limitations have been identified as additional elements: in a preview stage, building an optimization space and obtaining multiple previewed results from the optimization space (recites insignificant extra-solution activity of mere data gathering/receiving information, see MPEP 2106.05(g)); and in a view stage, receiving the optimization space and the updating signal (recites insignificant extra-solution activity of receiving information, see MPEP 2106.05(g)). As seen from the above discussion, the identified limitations do not integrate the judicial exception into a practical application (see MPEP 2106.04(d)).
This judicial exception is not integrated into a practical application because the additional elements recite merely gathering/receiving data/information at a high level of generality.

Step 2B: Below is the analysis of the claims: in a preview stage, building an optimization space and obtaining multiple previewed results from the optimization space (recites well-understood, routine, and conventional activity of mere data gathering/receiving information, see MPEP 2106.05(d)); and in a view stage, receiving the optimization space and the updating signal (recites well-understood, routine, and conventional activity of receiving information, see MPEP 2106.05(d)). As seen from the above, the respective claim elements taken individually do not amount to significantly more than the judicial exception. When taken as a whole (in combination), the claim also does not amount to significantly more than the abstract idea because the additional elements recite merely gathering/receiving data/information at a high level of generality.

With regard to claim 2, this claim recites wherein the step of obtaining the multiple previewed results from the optimization space comprises: sampling the optimization space to obtain multiple candidate networks (recites mental process steps of selecting particular samples or configurations); evaluating the multiple candidate networks to obtain multiple evaluated results (recites mental process steps of evaluation/analysis of the selected samples/configurations); and obtaining multiple previewed neural networks from the multiple candidate networks as the multiple previewed results according to the multiple evaluated results (recites a mental process step of a judgment/decision on the samples that warrant further consideration).
With regard to claim 3, this claim recites wherein the step of evaluating the multiple candidate networks to obtain the multiple evaluated results comprises: utilizing a quality estimator to estimate quality of the multiple candidate networks for obtaining the multiple evaluated results (recites mental process steps of evaluation/estimating a quality score/value for respective sampled data, where the score can be determined via guessing/observations of past behavior/knowledge or computed, possibly via mathematical equations).

With regard to claim 4, this claim recites wherein the step of evaluating the multiple candidate networks to obtain the multiple evaluated results comprises: utilizing a performance estimator to estimate platform performance of the multiple candidate networks for obtaining the multiple evaluated results (recites mental process steps of evaluation/estimating a performance score/value for respective sampled data, where the score can be determined via guessing/observations of past behavior/knowledge or computed, possibly via mathematical equations).
With regard to claim 5, this claim recites wherein the step of generating the updating signal according to the reference value corresponding to the multiple previewed results, and processing the optimization space received in the preview stage according to the reference value comprises: determining whether the reference value exceeds a limitation value; in response to the reference value exceeding the limitation value, stopping previewing the optimization space received in the preview stage (recites mental process steps of evaluating/comparing data sets and deciding that the task is complete), and directly outputting the multiple previewed results to the view stage as the updating signal (recites insignificant extra-solution activity of transmitting information, which amounts to well-understood, routine, and conventional activity of transmitting information, see MPEP 2106.05(d)); in response to the reference value not exceeding the limitation value, adjusting the optimization space received in the preview stage according to the reference value, to generate multiple adjusted optimization spaces (recites mental process steps of evaluating/comparing data sets and deciding that more work on the task should continue); and collecting the multiple adjusted optimization spaces (recites insignificant extra-solution activity of receiving information, which amounts to well-understood, routine, and conventional activity of receiving information, see MPEP 2106.05(d)).

With regard to claim 6, this claim recites wherein the reference value is a time for previewing the optimization space received in the preview stage, the limitation value is a predetermined time, and the step of determining whether the reference value exceeds the limitation value comprises: determining whether the time exceeds the predetermined time (recites mental process steps of comparing/evaluating the amount of time elapsed against some criterion/threshold, similar to a test time limit).
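Read as an algorithm, the preview/view flow recited in claims 1, 2, and 5 can be sketched as follows. This is a minimal illustration under assumptions, not the applicant's implementation: all names are hypothetical, the reference value is taken to be an iteration count, and the space-adjustment branch of claim 5 is omitted:

```python
# Hypothetical sketch of the claimed preview/view mechanism.
def preview_stage(space, evaluate, limit):
    """Preview: sample and evaluate candidates until the reference
    value (iterations used) reaches the limitation value (claim 5)."""
    previewed = []
    reference = 0
    while reference < limit:                  # compare reference vs. limit
        candidate = space[reference % len(space)]   # claim 2: sample the space
        previewed.append((candidate, evaluate(candidate)))  # claim 2: evaluate
        reference += 1
    # Limit reached: output the previewed results to the view stage.
    return previewed

def view_stage(previewed):
    """View: select the best previewed candidate as the optimization result."""
    return max(previewed, key=lambda pair: pair[1])[0]

space = [{"layers": n} for n in (2, 4, 8, 16)]
score = lambda net: -abs(net["layers"] - 8)   # toy quality estimator (claim 3)
result = view_stage(preview_stage(space, score, limit=4))
print(result)  # {'layers': 8}
```

The toy quality estimator stands in for the claim 3 estimator; in the application it would score candidate networks rather than compare layer counts.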
With regard to claim 7, this claim recites wherein the reference value is a metric for previewing the optimization space received in the preview stage, the limitation value is a predetermined criterion, and the step of determining whether the reference value exceeds the limitation value comprises: determining whether the metric exceeds the predetermined criterion (recites mental process steps of comparing/evaluating based on some metric, e.g., the number of times/rounds/iterations to do a task).

With regard to claim 8, this claim recites wherein the step of processing the optimization space received in the view stage according to the updating signal to generate the optimization result comprises: updating the optimization space received in the view stage according to the updating signal, to generate an updated optimization space (recites mental process steps of evaluation and analysis to form a determination or selection of particular received data to be the configuration/data of choice); training neural networks in the updated optimization space to obtain a training result (recites generic training steps of a computer element, which amounts to usage of a computer as a tool to implement the abstract idea, see MPEP 2106.05(f)); optimizing the neural networks in the updated optimization space according to the training result, to generate optimized neural networks (recites training/optimizing the respective model, which can amount to evaluating/testing the neural network and amounts to usage of a computer as a tool to implement the abstract idea by performing the judicial exception on a computer); and fine-tuning the optimized neural networks to generate the optimization result (recites training/optimizing the respective model, which can amount to evaluating/testing the neural network, possibly a trial run to receive feedback or with other training/testing data, and amounts to usage of a computer as a tool to implement the abstract idea by performing the judicial exception on a computer).

With regard to claim 9, this claim recites wherein the step of processing the optimization space received in the view stage according to the updating signal comprises: stopping viewing the optimization space received in the view stage according to the updating signal (recites a mental process step of evaluation/comparison to form a judgement to stop doing something).

With regard to claim 10, this claim recites wherein the neural network optimization method is applied to a neural architecture search (NAS) (recites a field-of-use limitation describing a particular, or preferred, technique, see MPEP 2106.05(h)).

With regard to claims 11-20, these claims are substantially similar to claims 1-10 and are rejected for similar reasons as claims 1-10 as discussed above.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: "A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made."

This application currently names joint inventors.
In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-5, 7-15, and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Akhauri et al. [US 2022/0108054 A1] in view of Chang et al. [US 2023/0359885 A1].

With regard to claim 1, Akhauri teaches a neural network optimization method with a preview mechanism, comprising: in a preview stage, building an optimization space and obtaining multiple previewed results from the optimization space (see paragraphs [0020], [0034], [0035], and [0060]-[0061]; the system can utilize an optimized space and receive multiple candidate/previewed results of a respective subspace); generating an updating signal according to a reference value corresponding to the multiple previewed results, and processing the optimization space received in the preview stage according to the reference value (see paragraphs [0038] and [0064]-[0065]; the system can generate an update signal based on evaluation against reference values and process an action accordingly, including identifying whether the preview stage is complete); and in a view stage, receiving the optimization space and the updating signal, and processing the optimization space received in the view stage according to the updating signal to generate an optimization result (see paragraph [0062]; the system can utilize the search space/optimization space and process that space with respect to the selected result and be optimized for that entire space).

Akhauri does not appear to explicitly teach: generating an updating signal according to a reference value corresponding to the multiple previewed results, and processing the optimization space received in the preview stage according to the reference value; and in a view stage, receiving the optimization space and the updating signal, and processing the optimization space received in the view stage according to the updating signal to generate an optimization result.

Chang teaches generating an updating signal according to a reference value corresponding to the multiple previewed results (see paragraph [0060]; the system can utilize a reference value for determining how many iterations of updating the machine learning model to perform). It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify the training/evaluation process of the various models/algorithms of Akhauri by utilizing additional parameters, such as a number of iterations/epochs, as a reference value as taught by Chang, in order to allow the system to evaluate the various models/algorithms while also considering a reference value or some sufficient number of iterations, so that the system does not unnecessarily bog itself down with algorithms/models that will not converge without extensive training, thus allowing for a fair evaluation of the respective models with respect to their performance after some sufficient amount of training/updating.
Akhauri in view of Chang teach processing the optimization space received in the preview stage according to the reference value (see Akhauri, paragraphs [0038] and [0064]-[0065]; see Chang, paragraph [0060]; the system can generate an update signal based on evaluation against reference values and process an action accordingly, including identifying whether the preview stage is complete and, if so, using the determined/selected candidate controller/result and associated parameters); and in a view stage, receiving the optimization space and the updating signal, and processing the optimization space received in the view stage according to the updating signal to generate an optimization result (see Akhauri, paragraph [0062]; see Chang, paragraph [0060]; the system can utilize the search space/optimization space and process that space with respect to the selected result and be optimized for that entire space).

With regard to claim 2, Akhauri in view of Chang teach wherein the step of obtaining the multiple previewed results from the optimization space comprises: sampling the optimization space to obtain multiple candidate networks; evaluating the multiple candidate networks to obtain multiple evaluated results; and obtaining multiple previewed neural networks from the multiple candidate networks as the multiple previewed results according to the multiple evaluated results (see Akhauri, paragraphs [0020], [0034], [0035], and [0060]; the system can sample the optimization space and evaluate multiple candidate networks to determine candidate controllers/networks that can be ranked/scored).
With regard to claim 3, Akhauri in view of Chang teach wherein the step of evaluating the multiple candidate networks to obtain the multiple evaluated results comprises: utilizing a quality estimator to estimate quality of the multiple candidate networks for obtaining the multiple evaluated results (see Akhauri, paragraphs [0038] and [0064]-[0065]; the system can utilize some methodology to estimate the expected quality of the respective candidate network).

With regard to claim 4, Akhauri in view of Chang teach wherein the step of evaluating the multiple candidate networks to obtain the multiple evaluated results comprises: utilizing a performance estimator to estimate platform performance of the multiple candidate networks for obtaining the multiple evaluated results (see Akhauri, paragraph [0037]; the system allows for the evaluation of the respective candidates with respect to performance within the respective search subspace).

With regard to claim 5, Akhauri in view of Chang teach wherein the step of generating the updating signal according to the reference value corresponding to the multiple previewed results, and processing the optimization space received in the preview stage according to the reference value comprises: determining whether the reference value exceeds a limitation value; in response to the reference value exceeding the limitation value, stopping previewing the optimization space received in the preview stage, and directly outputting the multiple previewed results to the view stage as the updating signal; in response to the reference value not exceeding the limitation value, adjusting the optimization space received in the preview stage according to the reference value, to generate multiple adjusted optimization spaces; and collecting the multiple adjusted optimization spaces (see Akhauri, paragraphs [0020] and [0037]; see Chang, paragraph [0060]; the system can utilize the reference value to allow for sufficient iterations to occur with respect to the relative value to determine when to stop previewing/evaluating respective candidates).

With regard to claim 7, Akhauri in view of Chang teach wherein the reference value is a metric for previewing the optimization space received in the preview stage, the limitation value is a predetermined criterion, and the step of determining whether the reference value exceeds the limitation value comprises: determining whether the metric exceeds the predetermined criterion (see Chang, paragraph [0060]; see Akhauri, paragraph [0037]; the system allows for some predetermined criterion to be utilized as the reference value, with means to determine whether the metric value has been exceeded).

With regard to claim 8, Akhauri in view of Chang teach wherein the step of processing the optimization space received in the view stage according to the updating signal to generate the optimization result comprises: updating the optimization space received in the view stage according to the updating signal, to generate an updated optimization space; training neural networks in the updated optimization space to obtain a training result; optimizing the neural networks in the updated optimization space according to the training result, to generate optimized neural networks; and fine-tuning the optimized neural networks to generate the optimization result (see Akhauri, paragraphs [0036]-[0037] and [0060]-[0062]; see Chang, paragraph [0060]; the system can utilize the learned parameters associated with each respective optimized design of the respective candidates for scoring with respect to additional subspaces, including re-training/tuning, to determine their overall results in order to find the controller that most frequently performs best across the various optimization spaces).
With regard to claim 9, Akhauri in view of Chang teach wherein the step of processing the optimization space received in the view stage according to the updating signal comprises: stopping viewing the optimization space received in the view stage according to the updating signal (see Akhauri, paragraphs [0062] and [0064]; the system can determine via an updating signal whether it should stop the viewing/evaluating of the various candidates, since a candidate has been found that meets various performance and quality criteria).

With regard to claim 10, Akhauri in view of Chang teach wherein the neural network optimization method is applied to a neural architecture search (NAS) (see Akhauri, paragraphs [0034], [0041], and [0103]; see Chang, paragraph [0047]; NAS is used).

With regard to claim 11, this claim is substantially similar to claim 1 and is rejected for similar reasons as discussed above.

With regard to claims 12-15 and 17-20, these claims are substantially similar to claims 2-5 and 7-10, respectively, and are rejected for similar reasons as discussed above.

Claims 6 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Akhauri et al. [US 2022/0108054 A1] in view of Chang et al. [US 2023/0359885 A1] in further view of Benyahia et al. [US 2020/0104688 A1].

With regard to claim 6, Akhauri in view of Chang teach all the claim limitations of claims 1 and 5 as discussed above. Akhauri in view of Chang do not appear to explicitly teach wherein the reference value is a time for previewing the optimization space received in the preview stage, the limitation value is a predetermined time, and the step of determining whether the reference value exceeds the limitation value comprises: determining whether the time exceeds the predetermined time.
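The claim 6 variant, where the reference value is elapsed preview time and the limitation value is a predetermined time, can be sketched as a time-bounded loop. All names here are hypothetical; this illustrates the stopping condition only, not the application's or Benyahia's actual training procedure:

```python
# Sketch: stop previewing once elapsed time exceeds a predetermined time.
import time

def preview_with_time_limit(evaluate_one, predetermined_time_s):
    start = time.monotonic()
    previewed = []
    # Reference value (elapsed time) compared against the limitation
    # value (the predetermined time), as recited in claim 6.
    while time.monotonic() - start <= predetermined_time_s:
        previewed.append(evaluate_one())
    return previewed

results = preview_with_time_limit(lambda: "evaluated", 0.01)
print(len(results) >= 1)  # at least one candidate is previewed
```

Using `time.monotonic()` rather than wall-clock time keeps the comparison immune to system clock adjustments, which matters for any real time-budgeted search loop.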
Benyahia teaches wherein the reference value is a time for previewing the optimization space received in the preview stage, the limitation value is a predetermined time, and the step of determining whether the reference value exceeds the limitation value comprises: determining whether the time exceeds the predetermined time (see paragraph [0109]; the system can train the respective candidate models based on a reference value associated with a time threshold, with training continuing until the respective time threshold is met). It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify the training/evaluation process of the various models/algorithms of Akhauri in view of Chang by utilizing a time threshold as a means for determining how long training should occur, as taught by Benyahia, in order to allow the system to evaluate the various models/algorithms within a time period, thus helping to ensure that the system does not unnecessarily bog itself down with algorithms/models that take a long time even when there is a large quantity of training data/batches that can be used, and thus allowing a fair comparison of how the respective models perform, since they all have the same amount of training time.

With regard to claim 16, this claim is substantially similar to claim 6 and is rejected for similar reasons as discussed above.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

Mazzawi et al. [US 2021/0019599 A1] teaches at paragraphs [0006], [0026], and [0030]-[0032] the ability to explore smaller candidate neural networks and evaluate their performances so as to choose the final architecture after the search process is terminated.
Chen et al. [US 2023/0064692 A1] teaches at Figure 5 and paragraph [0035] the ability to partition a search space, sample a network architecture in each network search space, and evaluate that architecture with regard to performance.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARC S SOMERS, whose telephone number is (571) 270-3567. The examiner can normally be reached M-F 11-8 EST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ann Lo, can be reached at (571) 272-9767. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MARC S SOMERS/
Primary Examiner, Art Unit 2159
3/16/2026

Application/Control Number: 18/239,759 • Art Unit: 2159

Prosecution Timeline

Aug 30, 2023
Application Filed
Mar 16, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579099: CONTROL LEVEL TAGGING METHOD AND SYSTEM (granted Mar 17, 2026; 2y 5m to grant)
Patent 12561288: METHOD AND APPARATUS TO VERIFY FILE METADATA IN A DEDUPLICATION FILESYSTEM (granted Feb 24, 2026; 2y 5m to grant)
Patent 12554681: SYSTEM AND METHOD OF UNDOING DATA BASED ON DATA FLOW MANAGEMENT (granted Feb 17, 2026; 2y 5m to grant)
Patent 12541502: METHODS AND APPARATUSES FOR IMPROVING PROCESSING EFFICIENCY IN A DISTRIBUTED SYSTEM (granted Feb 03, 2026; 2y 5m to grant)
Patent 12530365: SYSTEMS AND METHODS FOR A MACHINE LEARNING FRAMEWORK (granted Jan 20, 2026; 2y 5m to grant)
Study what changed in these applications to get past this examiner (based on the 5 most recent grants).


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 65%
With Interview (+34.6%): 99%
Median Time to Grant: 4y 0m
PTA Risk: Low

Based on 563 resolved cases by this examiner. Grant probability is derived from the career allow rate.
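The headline projections can be reproduced from the examiner figures above with simple arithmetic. A back-of-the-envelope sketch; treating the interview lift as a plain additive bump on the career allow rate is an assumption, since the tool's actual model is not disclosed:

```python
# Reproduce "65% Grant Probability" and "99% With Interview" from the
# career figures shown above (additive-lift assumption).
granted, resolved = 364, 563
base = granted / resolved            # career allow rate ≈ 0.646
with_interview = base + 0.346        # +34.6% interview lift
print(f"{base:.0%} {with_interview:.0%}")  # 65% 99%
```

That the two derived figures match the dashboard exactly supports the footnote's claim that grant probability is taken directly from the career allow rate.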
