Prosecution Insights
Last updated: April 19, 2026
Application No. 18/184,379

METHOD AND DEVICE FOR DETERMINING AN OPTIMAL ARCHITECTURE OF A NEURAL NETWORK

Non-Final OA §101 §103 §112
Filed
Mar 15, 2023
Examiner
STORK, KYLE R
Art Unit
2128
Tech Center
2100 — Computer Architecture & Software
Assignee
Robert Bosch GmbH
OA Round
1 (Non-Final)
64%
Grant Probability
Moderate
1-2
OA Rounds
4y 0m
To Grant
92%
With Interview

Examiner Intelligence

Grants 64% of resolved cases
64%
Career Allow Rate
554 granted / 865 resolved
+9.0% vs TC avg
Strong +28% interview lift
+28.3%
Interview Lift
Allow rate for resolved cases with vs. without an interview
Typical timeline
4y 0m
Avg Prosecution
51 currently pending
Career history
916
Total Applications
across all art units

Statute-Specific Performance

§101
14.9%
-25.1% vs TC avg
§103
58.5%
+18.5% vs TC avg
§102
12.1%
-27.9% vs TC avg
§112
6.1%
-33.9% vs TC avg
Black line = Tech Center average estimate • Based on career data from 865 resolved cases

Office Action

§101 §103 §112
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This non-final office action is in response to the application filed 15 March 2023. Claims 1-10 are pending. Claims 1, 8, and 10 are independent claims.

Priority

Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy was filed on 23 June 2023.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 15 March 2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Drawings

The examiner accepts the drawings filed 15 March 2023.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-10 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

The term “several times” in independent claim 1 (line 22), claim 8 (line 22), and claim 10 (line 24) is a relative term which renders the claim indefinite.
The term “several times” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. For the purpose of examination, the examiner will interpret the term “several times” as though it reads “two or more times”. Claims 2-7 and 9 fail to cure the deficiencies of independent claims 1 and 8, respectively. Claims 2-7 and 9 are similarly rejected.

Further, independent claims 1, 8, and 10 recite the limitation “wherein parent levels of each hierarchy define at least one rule, according to which child levels can be combined with one another” (claim 1, lines 9-11; claim 8, lines 9-11; claim 10, lines 11-13). The structure of the claimed tree is unclear from the claim language. Specifically, the claim recites a “lowest level” defining a plurality of operations. Further, the claim recites “parent levels” and “child levels,” but it is unclear whether these levels are disposed above the “lowest level” or whether the claimed “child levels” may also be “lowest levels.” Additionally, if the claimed “child levels” are disposed higher in the hierarchy than the lowest levels, these claimed “child levels” would be parent levels to the “lowest levels.” In this instance it is unclear whether these “child levels,” which can be combined together, also define at least one rule, because they are simultaneously “parent levels.” For these reasons, claims 1, 8, and 10 are indefinite. Claims 2-7 and 9 fail to cure the deficiencies of independent claims 1 and 8, respectively. Claims 2-7 and 9 are similarly rejected.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 8-9 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. As per independent claim 8, the claim recites a “device” configured to perform a method. The claimed device fails to recite any hardware components, such as a processor or a non-transitory computer-readable storage medium. Therefore, the claimed “device” appears to consist entirely of software components. Such a device constitutes a software system and fails to define a process, machine, manufacture, or composition of matter. For this reason, claim 8 is non-statutory. With respect to dependent claim 9, the claim fails to cure the deficiencies of independent claim 8. Claim 9 is similarly rejected.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors.
In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1, 4, and 6-10 are rejected under 35 U.S.C. 103 as being unpatentable over Hua et al. (US 11087201, patented 10 August 2021, hereafter Hua) in view of Ru et al. (Interpretable Neural Architecture Search via Bayesian Optimisation with Weisfeiler-Lehman Kernels, published 19 February 2021, hereafter Ru), further in view of Liu (Hierarchical Representations for Efficient Architecture Search, published 22 February 2018, hereafter Liu), and further in view of Moloney et al. (WO 2020/092810, published 7 May 2020, hereafter Moloney).

As per independent claim 1, Hua discloses a method for determining an optimal architecture of a neural network for a given data set including training data and validation data, the method comprising the following steps:

defining a search space which characterizes possible architectures of the neural network (column 3, lines 25-44: Here, a search space is defined by the neural architecture search);

drawing a plurality of candidate architectures (column 4, lines 53-63: Here, as part of determining the architecture for the cell, which is a fully convolutional neural network configured to receive a cell input and generate an output (column 3, lines 45-47), the neural architecture search (column 3, lines 19-24) obtains data that specifies a current set of candidate cells/CNNs. This includes obtaining cells from a previous iteration and adding a respective one or more operation blocks to each of the previous cells);

training neural networks with the candidate architectures on the training data, and validating the trained neural networks on the validation data (column 5, lines 42-60: Here, the training engine receives data for training the instances in a variety of ways. This is performed by receiving training data for training instances on the particular machine learning task and a validation set for evaluating the performance of the trained instances of the task neural network on the particular machine learning task);

adapting the process such that, given the candidate architectures, the process predicts the validation achieved with the candidate architectures (column 4, line 64 - column 5, line 2: Here, based on receiving a candidate cell in the current set, the predictor receives data specifying the cell and processes the data using the performance prediction neural network in accordance with the current values of the prediction parameters to generate a performance prediction for each candidate cell);

repeating steps i-iii several times (column 6, lines 61-64);

determining a next candidate architecture to be evaluated depending on an acquisition function (column 5, lines 3-12: Here, the neural architecture search (NAS) generates an updated set of cells (candidates) to be evaluated and prunes candidates based upon the performance prediction);

training a further neural network with the candidate architecture to be evaluated on the training data, and validating the further, trained neural network on the validation data (column 5, lines 42-60: Here, the training engine receives data for training the instances in a variety of ways. This is performed by receiving training data for training instances on the particular machine learning task and a validation set for evaluating the performance of the trained instances of the task neural network on the particular machine learning task);

adapting the process such that, given previously used candidate architectures, the process predicts the validation achieved with the previously used candidate architectures (column 6, lines 4-25: Here, the predictor generates a performance prediction for each of the candidates within the current set of the NAS. These predictions are based upon the small number of trained cells (previously used candidate architectures));

outputting the candidate architecture that achieved a best performance on the validation data (column 6, line 65 - column 7, line 3: Here, the candidate cell that has the best performance is determined as the output cell).

Hua fails to specifically disclose:

defining a search space which characterizes possible architectures of the neural network using a context-free grammar, wherein the context-free grammar characterizes a plurality of hierarchies of levels, wherein a lowest level of each hierarchy defines a plurality of operations, wherein parent levels of each hierarchy define at least one rule, according to which child levels can be combined with one another;

randomly drawing a candidate architecture according to the context-free grammar;

initializing a Gaussian process, wherein the Gaussian process includes a Weisfeiler-Lehman graph kernel;

an acquisition function that depends on the Gaussian process, wherein the acquisition function is optimized using an evolutionary algorithm.

However, Ru, which is analogous to the claimed invention because it is directed toward neural architecture search, discloses:

randomly drawing a candidate architecture (page 3, paragraph 1: Here, for architecture generation, new candidates are selected via random sampling);

initializing a Gaussian process, wherein the Gaussian process includes a Weisfeiler-Lehman graph kernel (page 1, paragraph 1; page 3, paragraph 1: Here, a neural architecture search combining a Gaussian process with a Weisfeiler-Lehman subtree graph kernel (GPWL) is used for searching within the neural architecture space to optimize generation/selection of candidate architectures at each iteration);

an acquisition function that depends on the Gaussian process, wherein the acquisition function is optimized using an evolutionary algorithm (page 2, paragraph 6; page 3, paragraph 2 - page 4, paragraph 2: Here, a Gaussian process is used to compare candidates and perform iterations until an optimized selection of features is achieved for the NAS).

It would have been obvious to one of ordinary skill in the art at the time of the applicant’s effective filing date to have combined Ru with Hua, with a reasonable expectation of success, as it would have allowed for implementing a scalable model to achieve better prediction performance (Ru: page 2, paragraph 2).
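The surrogate model the rejection attributes to Ru (a Gaussian process over a Weisfeiler-Lehman subtree graph kernel, queried through an acquisition function) can be sketched as follows. This is an illustrative reimplementation, not code from Ru or the application; the graph encoding (adjacency dict plus node-label dict) and all helper names are invented for the example.

```python
import math
from collections import Counter
import numpy as np

def wl_features(adj, labels, iterations=2):
    """Weisfeiler-Lehman subtree features: a histogram of node labels
    accumulated over relabelling rounds. `adj` maps node -> neighbours."""
    hist = Counter(labels.values())
    cur = dict(labels)
    for _ in range(iterations):
        # Relabel each node with its own label plus its sorted neighbour labels.
        cur = {v: (cur[v],) + tuple(sorted(cur[u] for u in adj[v])) for v in adj}
        hist.update(cur.values())
    return hist

def wl_kernel(g1, g2, iterations=2):
    """Kernel value: dot product of the two WL feature histograms."""
    h1 = wl_features(*g1, iterations)
    h2 = wl_features(*g2, iterations)
    return sum(h1[k] * h2[k] for k in h1)

def gp_posterior(train, y, query, noise=1e-6):
    """Standard GP posterior mean/variance at `query`, using the WL kernel."""
    K = np.array([[wl_kernel(a, b) for b in train] for a in train], float)
    K += noise * np.eye(len(train))
    ks = np.array([wl_kernel(query, g) for g in train], float)
    mean = ks @ np.linalg.solve(K, np.array(y, float))
    var = wl_kernel(query, query) - ks @ np.linalg.solve(K, ks)
    return mean, max(var, 0.0)

def expected_improvement(mean, var, best):
    """A common acquisition function (maximization convention)."""
    s = math.sqrt(var)
    if s == 0.0:
        return max(mean - best, 0.0)
    z = (mean - best) / s
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2)))
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
    return (mean - best) * cdf + s * pdf
```

In a full loop, candidate architectures (graphs) would be scored by `expected_improvement` over the GP posterior, and the best-scoring candidate trained next; here the evolutionary optimization of the acquisition function is omitted for brevity.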
Additionally, Liu, which is analogous to the claimed invention because it is directed toward hierarchical representations for architecture search, discloses:

a plurality of hierarchies of levels (page 3, Section 2.2: Here, a hierarchy is described in which the lowest level is a set of primitive operations and the highest level contains a single motif corresponding to the full architecture);

wherein a lowest level of each hierarchy defines a plurality of operations (page 3, Section 2.2: Here, a hierarchy is described in which the lowest level is a set of primitive operations and the highest level contains a single motif corresponding to the full architecture);

wherein parent levels of each hierarchy define at least one rule, according to which child levels can be combined with one another (page 3, Section 2.2: Here, a hierarchy is described in which the lowest level is a set of primitive operations and the highest level contains a single motif corresponding to the full architecture. In between the lowest and highest levels are additional levels of motifs used for defining the highest-level motif).

It would have been obvious to one of ordinary skill in the art at the time of the applicant’s effective filing date to have combined Liu with Hua-Ru, with a reasonable expectation of success, as it would have allowed for improved search using a hierarchical structure to implement hierarchical modules able to process changes across the motif and propagate them across the whole network immediately (Liu: page 2, paragraph 1).

Finally, Moloney, which is analogous to the claimed invention because it is directed toward searching a neural network architecture, discloses using a context-free grammar to generate a search query (paragraph 0027: Here, the neural network generator system utilizes context-free grammars to facilitate evolution of parameters associated with specific layers and optimize the process).
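The claimed search-space construction (a context-free grammar whose parent levels state rules for combining child levels, with primitive operations at the lowest level) can be sketched as a random derivation from a toy grammar. The grammar symbols below are invented for illustration and are not taken from the application or any cited reference.

```python
import random

# A hypothetical context-free grammar for a hierarchical search space.
# Non-terminals (upper-case keys) act as parent levels whose productions
# are the rules for combining child levels; "OP" is the lowest level and
# expands to primitive operations.
GRAMMAR = {
    "NET":  [["CELL", "CELL"], ["CELL", "CELL", "CELL"]],
    "CELL": [["seq", "OP", "OP"], ["res", "OP", "OP"]],
    "OP":   [["conv3x3"], ["conv1x1"], ["maxpool"], ["identity"]],
}

def sample(symbol="NET", rng=random):
    """Randomly draw one derivation (a candidate architecture) as a nested list."""
    if symbol not in GRAMMAR:                 # terminal symbol: keep as-is
        return symbol
    production = rng.choice(GRAMMAR[symbol])  # pick one combination rule at random
    return [sample(s, rng) for s in production]

def leaves(tree):
    """Flatten a derivation to its terminal symbols (the operation sequence)."""
    if isinstance(tree, str):
        return [tree]
    return [leaf for subtree in tree for leaf in leaves(subtree)]
```

Each call to `sample` corresponds to the claim step of randomly drawing a candidate architecture according to the grammar; the nested list doubles as the syntax tree that later mutation/crossover operators would act on.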
It would have been obvious to one of ordinary skill in the art at the time of the applicant’s effective filing date to have combined Moloney with Hua-Ru-Liu, with a reasonable expectation of success, as it would have allowed for improvement of the neural network using derivations and evolutions facilitated by the context-free grammar (Moloney: paragraph 0027).

As per dependent claim 4, Hua, Ru, Liu, and Moloney disclose the limitations similar to those in claim 1, and the same rejection is incorporated herein. Ru discloses wherein the acquisition function is a guided acquisition function, wherein the acquisition function is evaluated using a guided evolutionary algorithm (page 3, paragraph 1: Here, a mutation algorithm is used to generate new candidates via random sampling. This process is guided by a Gaussian process used to compare candidates and perform iterations until an optimized selection of features is achieved for the NAS (page 2, paragraph 6; page 3, paragraph 2 - page 4, paragraph 2)). It would have been obvious to one of ordinary skill in the art at the time of the applicant’s effective filing date to have combined Ru with Hua, with a reasonable expectation of success, as it would have allowed for implementing a scalable model to achieve better prediction performance (Ru: page 2, paragraph 2).

Hua fails to specifically disclose the use of a grammar. However, Moloney, which is analogous to the claimed invention because it is directed toward searching a neural network architecture, discloses using a context-free grammar to generate a search query (paragraph 0027: Here, the neural network generator system utilizes context-free grammars to facilitate evolution of parameters associated with specific layers and optimize the process). It would have been obvious to one of ordinary skill in the art at the time of the applicant’s effective filing date to have combined Moloney with Hua-Ru-Liu, with a reasonable expectation of success, as it would have allowed for improvement of the neural network using derivations and evolutions facilitated by the context-free grammar (Moloney: paragraph 0027).

As per dependent claim 6, Hua, Ru, Liu, and Moloney disclose the limitations similar to those in claim 1, and the same rejection is incorporated herein. Liu further discloses wherein the context-free grammar additionally includes secondary conditions that characterize properties of the architecture (page 3, Section 2.2: Here, a hierarchy is described in which the lowest level is a set of primitive operations and the highest level contains a single motif corresponding to the full architecture. In between the lowest and highest levels are additional levels of motifs used for defining the highest-level motif. These motifs characterize the properties of the architecture). It would have been obvious to one of ordinary skill in the art at the time of the applicant’s effective filing date to have combined Liu with Hua-Ru, with a reasonable expectation of success, as it would have allowed for improved search using a hierarchical structure to implement hierarchical modules able to process changes across the motif and propagate them across the whole network immediately (Liu: page 2, paragraph 1).

As per dependent claim 7, Hua, Ru, Liu, and Moloney disclose the limitations similar to those in claim 1, and the same rejection is incorporated herein. Hua further discloses wherein the input variables are images and the machine learning system is an image classifier (column 2, lines 54-64).

With respect to independent claim 8, the claim recites limitations substantially similar to those in claim 1. Claim 8 is rejected under similar rationale.
Additionally, Hua discloses a device (column 11, lines 37-40).

As per dependent claim 9, Hua, Ru, Liu, and Moloney disclose the limitations similar to those in claim 8, and the same rejection is incorporated herein. Hua discloses wherein the device is a training device (column 9, lines 33-34: Here, the system trains the instance to perform the particular learning task. Further, the system may include devices such as clients and servers (column 12, lines 44-55)).

With respect to independent claim 10, the claim recites limitations substantially similar to those in claim 1. Claim 10 is rejected under similar rationale. Additionally, Hua discloses a non-transitory machine-readable storage medium (column 11, lines 58-64).

Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Hua, Ru, Liu, and Moloney, and further in view of Tan et al. (WO 2021/170215, published 2 September 2021, hereafter Tan).

As per dependent claim 2, Hua, Ru, Liu, and Moloney disclose the limitations similar to those in claim 1, and the same rejection is incorporated herein. Moloney discloses use of a context-free grammar (paragraph 0027). Additionally, Liu further discloses wherein the evolutionary algorithm applies a mutation, wherein the mutation is applied to a syntax tree characterizing the candidate architecture, wherein a new syntax tree obtained by the mutation is tested (page 5, paragraphs 2-3: Here, an evolutionary search algorithm picks a promising genotype from the population and places mutated offspring back into the population. Repeating this operation increases the quality of the population via refinement over time, with the genotype with the highest fitness selected as the final output). It would have been obvious to one of ordinary skill in the art at the time of the applicant’s effective filing date to have combined Liu with Hua-Ru-Moloney, with a reasonable expectation of success, as it would have allowed for improved search using a hierarchical structure to utilize mutated offspring to improve fitness selection over time (Liu: page 5, paragraphs 2-3).

Hua fails to specifically disclose wherein the evolutionary algorithm applies a crossover, wherein the crossover is applied to a syntax tree characterizing the candidate architecture, wherein a new syntax tree obtained by the crossover is tested. However, Tan, which is analogous to the claimed invention because it is directed toward neural architecture search, discloses wherein the evolutionary algorithm applies a crossover, wherein the crossover is applied to a syntax tree characterizing the candidate architecture, wherein a new syntax tree obtained by the crossover is tested (paragraphs 0051-0053: Here, a genetic mutation operator and a cross-over are used to construct a new network editing tree (syntax tree). These trees are tested, and the most fit tree in the final generation provides the searched neural architecture). It would have been obvious to one of ordinary skill in the art at the time of the applicant’s effective filing date to have combined Tan with Hua-Ru-Liu-Moloney, with a reasonable expectation of success, as it would have allowed for selecting a searched neural architecture by implementing mutations and cross-overs to diversify the population and improve the selection (Tan: paragraph 0051).

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Hua, Ru, Liu, and Moloney, and further in view of Pal et al. (Self-crossover - a new genetic operator and its application to feature selection, 1998, hereafter Pal).
As per dependent claim 3, Hua, Ru, Liu, and Moloney disclose the limitations similar to those in claim 1, and the same rejection is incorporated herein. Moloney discloses use of a context-free grammar (paragraph 0027). Additionally, Liu further discloses wherein the evolutionary algorithm applies a mutation, wherein the mutation is applied to a syntax tree characterizing the candidate architecture, wherein a new syntax tree obtained by the mutation is tested (page 5, paragraphs 2-3: Here, an evolutionary search algorithm picks a promising genotype from the population and places mutated offspring back into the population. Repeating this operation increases the quality of the population via refinement over time, with the genotype with the highest fitness selected as the final output). It would have been obvious to one of ordinary skill in the art at the time of the applicant’s effective filing date to have combined Liu with Hua-Ru-Moloney, with a reasonable expectation of success, as it would have allowed for improved search using a hierarchical structure to utilize mutated offspring to improve fitness selection over time (Liu: page 5, paragraphs 2-3).

Hua fails to specifically disclose wherein the evolutionary algorithm applies a self-crossover, wherein the self-crossover is applied to a syntax tree characterizing the architecture, wherein a new syntax tree obtained by the self-crossover is tested, wherein the self-crossover is carried out randomly, wherein with the self-crossover, branches are swapped in the tree.

However, Pal, which is analogous to the claimed invention because it is directed toward genetic operators applicable to neural architecture search, discloses wherein the evolutionary algorithm applies a self-crossover, wherein the self-crossover is applied to a syntax tree characterizing the architecture, wherein a new syntax tree obtained by the self-crossover is tested, wherein the self-crossover is carried out randomly, wherein with the self-crossover, branches are swapped (Section 4: Here, a self-crossover is applied by selecting a random position within a string, identifying a splitting position. The string is then split into two substrings, and two additional random positions are determined for splitting the substrings. These four substrings are recombined as the first and fourth substrings and the third and second substrings. This constitutes swapping branches of the substrings. Further, self-crossover is one mutation operation used in optimization algorithms (Section 2)). It would have been obvious to one of ordinary skill in the art at the time of the applicant’s effective filing date to have combined Pal with Hua-Ru-Liu-Moloney, with a reasonable expectation of success, as it would have allowed for applying mutations where the genetic information within a single string/node is selected randomly from the pool of candidates to produce mutated offspring (Pal: Section 4, paragraph 1).

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Hua, Ru, Liu, and Moloney, and further in view of Elliot et al. (US 2021/0142525, published 13 May 2021, hereafter Elliot). As per dependent claim 5, Hua, Ru, Liu, and Moloney disclose the limitations similar to those in claim 1, and the same rejection is incorporated herein.
Liu discloses wherein a lowest level of each hierarchy defines a plurality of operations (page 3, Section 2.2: Here, a hierarchy is described in which the lowest level is a set of primitive operations and the highest level contains a single motif corresponding to the full architecture). It would have been obvious to one of ordinary skill in the art at the time of the applicant’s effective filing date to have combined Liu with Hua-Ru, with a reasonable expectation of success, as it would have allowed for improved search using a hierarchical structure to implement hierarchical modules able to process changes across the motif and propagate them across the whole network immediately (Liu: page 2, paragraph 1).

Hua fails to specifically disclose wherein the operation is a down-sampling operation. However, Elliot, which is analogous to the claimed invention because it is directed toward a neural network having a plurality of layers implementing operations, discloses a down-sampling operation (paragraph 0118: Here, a neural network may comprise a plurality of layers of operations, including a down-sampling layer). It would have been obvious to one of ordinary skill in the art at the time of the applicant’s effective filing date to have combined Elliot with Hua-Ru-Liu-Moloney, with a reasonable expectation of success, as it would have allowed for performing down-sampling operations within a neural network (Elliot: paragraph 0118).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

Mazzawi et al. (US 2021/0019599): discloses adaptive neural architecture search trained on mutated architectures (Abstract).

Cao et al. (US 12399892): discloses a context-free grammar system architecture for generating a natural language query (claims 1 and 6).

Gesmundo (US 11544536): discloses a hybrid neural architecture search generating new candidate architectures from a selected candidate architecture for each hyperparameter (Abstract).

Dohan et al. (US 10997503): discloses an efficient neural architecture search that selects a candidate architecture and generates a new architecture to use in training the neural network until a termination criterion is met (Abstract).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KYLE R STORK, whose telephone number is (571) 272-4130. The examiner can normally be reached 8am-2pm and 4pm-6pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Omar Fernandez Rivas, can be reached at (571) 272-2589. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KYLE R STORK/
Primary Examiner, Art Unit 2128
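The self-crossover operation the office action attributes to Pal (split a string at a random point, split each half again, then recombine the four pieces so the inner segments swap) can be sketched as follows. This is an illustration of the four-substring recombination as described above, not Pal's actual code; the function name and string encoding are invented for the example.

```python
import random

def self_crossover(s, rng=random):
    """Self-crossover on a string genotype (length must be at least 2):
    split into two substrings, split each again at random positions,
    and recombine as (first + fourth) followed by (third + second),
    swapping the inner 'branches' while preserving all characters."""
    i = rng.randrange(1, len(s))        # first split: two substrings a, b
    a, b = s[:i], s[i:]
    j = rng.randrange(0, len(a) + 1)    # split a into a1, a2
    k = rng.randrange(0, len(b) + 1)    # split b into b1, b2
    a1, a2, b1, b2 = a[:j], a[j:], b[:k], b[k:]
    return (a1 + b2) + (b1 + a2)
```

Because the offspring is a rearrangement of the parent's characters, genotype length and content are preserved; only the ordering (the "branches") changes.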

Prosecution Timeline

Mar 15, 2023
Application Filed
Jan 28, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications with similar technology granted by this same examiner

Patent 12585935
EXECUTION BEHAVIOR ANALYSIS TEXT-BASED ENSEMBLE MALWARE DETECTOR
2y 5m to grant Granted Mar 24, 2026
Patent 12585937
SYSTEMS AND METHODS FOR DEEP LEARNING ENHANCED GARBAGE COLLECTION
2y 5m to grant Granted Mar 24, 2026
Patent 12585869
RECOMMENDATION PLATFORM FOR SKILL DEVELOPMENT
2y 5m to grant Granted Mar 24, 2026
Patent 12579454
PROVIDING EXPLAINABLE MACHINE LEARNING MODEL RESULTS USING DISTRIBUTED LEDGERS
2y 5m to grant Granted Mar 17, 2026
Patent 12579412
SPIKE NEURAL NETWORK CIRCUIT INCLUDING SELF-CORRECTING CONTROL CIRCUIT AND METHOD OF OPERATION THEREOF
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.
Powered by AI — typically takes 5-10 seconds

Prosecution Projections

1-2
Expected OA Rounds
64%
Grant Probability
92%
With Interview (+28.3%)
4y 0m
Median Time to Grant
Low
PTA Risk
Based on 865 resolved cases by this examiner. Grant probability derived from career allow rate.
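The headline projections can be reproduced from the examiner's career counts shown above. Two assumptions not stated by the page: grant probability equals the career allow rate, and the +28.3% interview lift is additive in percentage points.

```python
# Reproducing the dashboard's projections from the examiner's career counts.
# Assumptions (invented for this sketch, not stated by the page): grant
# probability = career allow rate; interview lift is additive in points.
granted, resolved = 554, 865
allow_rate_pct = 100 * granted / resolved     # career allow rate, in percent
with_interview_pct = allow_rate_pct + 28.3    # allow rate plus interview lift

print(round(allow_rate_pct))      # 64
print(round(with_interview_pct))  # 92
```

Both rounded values match the displayed 64% grant probability and 92% with-interview figure, which is consistent with the additive-lift reading.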
