Prosecution Insights
Last updated: April 19, 2026
Application No. 18/185,636

NEURAL NETWORK OPTIMIZATION USING KNOWLEDGE REPRESENTATIONS

Non-Final OA (§103, §112)
Filed: Mar 17, 2023
Examiner: MOBIN, HASANUL
Art Unit: 2168
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Latent AI Inc.
OA Round: 1 (Non-Final)
Grant Probability: 75% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 5m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 75% (506 granted / 675 resolved; +20.0% vs TC avg, above average)
Interview Lift: +39.0% among resolved cases with an interview (strong)
Typical Timeline: 3y 5m average prosecution; 16 applications currently pending
Career History: 691 total applications across all art units
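The headline figures above are simple ratios over the examiner's resolved cases. As a rough sketch, assuming the displayed counts (506 granted of 675 resolved) are the underlying data and that the "+20.0% vs TC avg" delta is an additive difference in allow rates, they can be reproduced as:

```python
# Reproduce the examiner's headline statistics from the counts shown
# above. The counts come from the page; treating the TC delta as an
# additive difference in rates is an assumption.
granted = 506
resolved = 675

allow_rate = granted / resolved          # career allow rate
print(f"Career allow rate: {allow_rate:.1%}")        # ~75.0%

tc_delta = 0.200                         # "+20.0% vs TC avg"
implied_tc_avg = allow_rate - tc_delta   # back-derived TC baseline
print(f"Implied TC average: {implied_tc_avg:.1%}")   # ~55.0%
```

The dashboard's rounded 75% matches the exact ratio (≈74.96%) to the displayed precision.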

Statute-Specific Performance

§101: 17.0% (-23.0% vs TC avg)
§103: 53.3% (+13.3% vs TC avg)
§102: 10.9% (-29.1% vs TC avg)
§112: 9.8% (-30.2% vs TC avg)
Tech Center averages are estimates. Based on career data from 675 resolved cases.
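A quick consistency check on the statute-specific figures: subtracting each stated delta from the examiner's rate backs out the implied Tech Center baseline. Values are copied from the chart above; interpreting the delta as an additive difference is an assumption.

```python
# Back-derive the implied Tech Center average for each statute from
# the examiner's rate and the stated delta (both in percent, copied
# from the chart above).
stats = {
    "§101": (17.0, -23.0),
    "§103": (53.3, +13.3),
    "§102": (10.9, -29.1),
    "§112": (9.8, -30.2),
}
for statute, (rate, delta) in stats.items():
    implied_avg = rate - delta  # implied TC baseline, in percent
    print(f"{statute}: examiner {rate:.1f}% vs implied TC avg {implied_avg:.1f}%")
```

All four statutes back out to the same ~40% baseline, which suggests the dashboard applies a single TC-wide estimate rather than per-statute averages.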

Office Action

§103, §112
DETAILED ACTION

Remarks

The instant application, Application Number 18/185,636, filed on March 17, 2023, has a total of 20 claims pending; there are 5 independent claims and 15 dependent claims, all of which are presented for examination. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Examiner Notes

The examiner cites particular columns and line numbers in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider each reference in its entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner. The examiner requests that, in response to this Office action, support be shown for language added to any original claims on amendment and for any new claims; that is, indicate support for newly added claim language by specifically pointing to the page(s) and line number(s) in the specification and/or drawing figure(s). This will assist the examiner in prosecuting the application.
When responding to this Office action, the applicant is advised to clearly point out the patentable novelty which he or she thinks the claims present in view of the state of the art disclosed by the references cited or the objections made, and to show how the amendments avoid such references or objections. See 37 CFR 1.111(c).

Drawings

The applicant's submitted drawings are acceptable for examination purposes.

Claim Objections

Claim 3 is objected to because of the following informality: claim 3 uses "and/or", and the examiner cannot determine whether it should be read as "and" or as "or". The applicant should amend the claim to state the appropriate choice. Appropriate correction is required.

CLAIM INTERPRETATION

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and

(C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word "means" (or "step") in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. That presumption is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word "means" (or "step") in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. That presumption is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word "means" (or "step") are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C.
112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word "means" (or "step") are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) Conclusion. – The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 15-16 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA, the applicant) regards as the invention.

The claim limitation "an analytics module" recited in claim 15 invokes 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. Therefore, the claim is indefinite and is rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph. Applicant may: (a) amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph; (b) amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C.
132(a)); or (c) amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).

If the applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function, so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, the applicant should clarify the record by either: (a) amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates them to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or (b) stating on the record what corresponding structure, material, or acts, implicitly or inherently set forth in the written description of the specification, perform the claimed function. For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S.
1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-6, 11-12 and 14-16 are rejected under 35 U.S.C. 103 as being unpatentable over Acharya et al. (US Patent Publication No. 2023/0064500 A1, hereafter 'Acharya') in view of Wang et al. (US Patent No. 11,979,876, hereafter 'Wang') and further in view of Aghdasi et al. (US Patent Publication No. 2021/0089921 A1, hereafter 'Aghdasi').

Regarding claim 1: Acharya teaches a method of training and optimizing a machine-learning model (Acharya [0007-0010], [0035], [0044]), the method comprising: selecting a machine-learning model for optimization (Acharya [0007-0010], [0035], [0044]); generating a set of derived variants of the machine-learning model (Acharya [0007-0010], [0035], [0044-0046]); and evaluating the set of derived variants for latency within a target hardware architecture to identify one or more derived variants that satisfy a latency criterion (Acharya [0039]).

Acharya does not teach, for each of the derived variants, compiling the derived variant to produce a runtime artifact. However, Wang teaches, for each of the derived variants, compiling the derived variant to produce a runtime artifact (Wang, Col 4, lines 29-35). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Acharya and Wang before him or her, to modify Acharya with the teaching of Wang's unified optimization for convolutional neural network model inference on integrated graphics processing units.
One would have been motivated to do so for the benefit of providing a model compilation system that optimizes CNN models using optimized vision-specific operators as well as both graph-level tuning and tensor-level tuning to explore the optimization space for achieving heightened performance (Wang, Abstract).

Acharya and Wang do not teach, for each of the derived variants: quantizing numerical parameters within the derived variant; training only the one or more variants; and evaluating the one or more trained variants for accuracy. However, Aghdasi teaches quantizing numerical parameters within the derived variant (Aghdasi [0023-0024]); training only the one or more variants (Aghdasi [0033]); and evaluating the one or more trained variants for accuracy (Aghdasi [0033]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Acharya, Wang and Aghdasi before him or her, to further modify Acharya with Aghdasi's teaching that transfer learning can be used to enable a user to obtain a machine learning model that is fully trained for an intended inferencing task without having to train the model from scratch.

One would have been motivated to do so for the benefit of providing a set of pre-trained neural networks, or other such models or networks useful for machine learning and artificial intelligence. A user or other entity can obtain one or more of these pre-trained models and further train them to be able to make inferences for one or more additional classes or types of input data. These models can be pruned and optimized for these specific inferencing tasks, enabling them to be highly accurate, relatively lightweight, and quick in inferencing.
The ability to take already-trained models and adapt or further train them for a specific inferencing task can greatly simplify the training process for an end user or entity charged with providing machine learning for that task (Aghdasi, Abstract, [0020]).

Regarding claim 2: Acharya as modified teaches wherein said generating comprises: for each of the derived variants, modifying a structure of the machine-learning model in a manner different from other derived variants (Acharya [0053]).

Regarding claim 3: Acharya as modified teaches further comprising: modifying quantization and/or compilation parameters for a target hardware architecture based on results of the evaluation for latency and/or the evaluation for accuracy (Aghdasi [0023-0024], [0029], [0022], [0035]).

Regarding claim 4: Acharya as modified teaches wherein the evaluation for latency tests each of the derived variants regarding one or more of: inference speed; storage size; power; and memory bandwidth (Aghdasi [0050-0051], [0055]).

Regarding claim 5: Acharya as modified teaches wherein the evaluation for accuracy tests each of the one or more trained variants regarding accuracy and at least one of: training time (Aghdasi [0045-0046]); and depth and/or width of a configuration of the trained variant (Aghdasi [0046]).

Regarding claim 6: Acharya as modified teaches further comprising: after evaluating the derived variants for latency, storing results of the latency evaluation for the one or more trained variants that satisfy the latency criterion (Acharya [0039]); and based on a stored result for a given trained variant, predicting a result of the accuracy evaluation of the given trained variant prior to performing the accuracy evaluation (Aghdasi [0033]).

Regarding claim 11:
Acharya teaches a non-transitory computer-readable medium storing instructions that, when executed by a processor, cause the processor to perform a method of training and optimizing a machine-learning model (the computer program can be stored in a non-transitory computer readable medium and executed by one or more processors, Acharya [0008]; processor(s) can execute a method or instructions from a computer readable medium that involve training each of a plurality of machine learning models, Acharya [0080], [0086-0089], [0097-0098]). Although claim 11 is directed to a medium, it is similar in scope to claim 1, and the method steps of claim 1 substantially encompass the medium recited in claim 11. Therefore, claim 11 is rejected for at least the same reasons as claim 1 above.

Regarding claim 12: Acharya teaches an apparatus for optimizing a machine-learning model, the apparatus comprising: one or more processors (the computer program can be stored in a non-transitory computer readable medium and executed by one or more processors, Acharya [0008].
Processor(s) can execute a method or instructions from a computer readable medium that involve training each of a plurality of machine learning models, Acharya [0080], [0086-0089], [0097-0098]); and a dispatch module comprising logic executed by the one or more processors to select variants of the machine-learning model for evaluation (Acharya [0007-0010], [0044-0046]).

Acharya does not teach: a pool of embedded hardware devices for evaluating a set of variants of the machine-learning model in terms of latency; a set of graphics processing units (GPUs) for training only a subset of the set of variants that satisfy a latency criterion; or latency evaluation by one or more devices within the pool of embedded hardware devices. However, Wang teaches a pool of embedded hardware devices for evaluating a set of variants of the machine-learning model in terms of latency (Wang, Col 1, line 63 – Col 2, line 6); a set of graphics processing units (GPUs) for training only a subset of the set of variants that satisfy a latency criterion (Wang, Col 6, lines 30-40); and latency evaluation by one or more devices within the pool of embedded hardware devices (Wang, Col 1, line 63 – Col 2, line 6; Col 6, lines 30-53). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Acharya and Wang before him or her, to modify Acharya with the teaching of Wang's unified optimization for convolutional neural network model inference on integrated graphics processing units. One would have been motivated to do so for the benefit of providing a model compilation system that optimizes CNN models using optimized vision-specific operators as well as both graph-level tuning and tensor-level tuning to explore the optimization space for achieving heightened performance (Wang, Abstract).
Acharya and Wang do not teach a scheduler module comprising logic executed by the one or more processors to schedule each selected variant for accuracy evaluation by one or more GPUs in the set of GPUs. However, Aghdasi teaches a scheduler module comprising logic executed by the one or more processors to schedule each selected variant for accuracy evaluation by one or more GPUs in the set of GPUs (Aghdasi [0120]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Acharya, Wang and Aghdasi before him or her, to further modify Acharya with Aghdasi's teaching that transfer learning can be used to enable a user to obtain a machine learning model that is fully trained for an intended inferencing task without having to train the model from scratch. One would have been motivated to do so for the benefit of providing a set of pre-trained neural networks, or other such models or networks useful for machine learning and artificial intelligence. A user or other entity can obtain one or more of these pre-trained models and further train them to be able to make inferences for one or more additional classes or types of input data. These models can be pruned and optimized for these specific inferencing tasks, enabling them to be highly accurate, relatively lightweight, and quick in inferencing. The ability to take already-trained models and adapt or further train them for a specific inferencing task can greatly simplify the training process for an end user or entity charged with providing machine learning for that task (Aghdasi, Abstract, [0020]).

Regarding claim 14: Acharya as modified teaches further comprising: a knowledge database configured to store the set of variants of the machine-learning model and results of each latency evaluation and each accuracy evaluation (Acharya [0039], [0044-0046]).

Regarding claim 15:
Acharya as modified teaches further comprising: an analytics module configured to predict results of an accuracy evaluation of a given variant based on results of the latency evaluation of the given variant, wherein the results of the latency evaluation include a speed, size, and power of the given variant (Aghdasi [0029], [0033], [0035]).

Regarding claim 16: Acharya as modified teaches wherein the analytics module comprises: one or more knowledge graphs, wherein each knowledge graph pertains to a variant of the machine-learning model (Wang, Col 12, line 51 – Col 13, line 7 and Fig. 5) and comprises: nodes representing evaluations of the variant (Wang, Col 13, lines 8-27); and connections between nodes representing relationships between the evaluations represented by the connected nodes (Wang, Col 13, lines 8-27); and weighted connections between two or more knowledge graphs that correspond to correlations between the connected knowledge graphs (Wang, Col 13, lines 8-27).

Claims 7-9 and 17-19 are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. (US Patent No. 11,979,876, hereafter 'Wang') in view of Aghdasi et al. (US Patent Publication No. 2021/0089921 A1, hereafter 'Aghdasi') and further in view of Siravara et al. (US Patent Publication No. 2020/00364612 A1, hereafter 'Siravara').

Regarding claim 7:
Wang teaches a method of generating a machine-learning runtime, the method comprising: compiling a machine-learning model into an optimized inference runtime (Wang, Col 4, lines 29-35). Wang does not teach selecting one or more software functions from a software library, or linking the selected software functions. However, Aghdasi teaches selecting one or more software functions from a software library (Aghdasi [0063-0064], [0066]) and linking the selected software functions (Aghdasi [0063-0064], [0066]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Wang and Aghdasi before him or her, to further modify Wang with Aghdasi's teaching that transfer learning can be used to enable a user to obtain a machine learning model that is fully trained for an intended inferencing task without having to train the model from scratch. One would have been motivated to do so for the benefit of providing a set of pre-trained neural networks, or other such models or networks useful for machine learning and artificial intelligence. A user or other entity can obtain one or more of these pre-trained models and further train them to be able to make inferences for one or more additional classes or types of input data. These models can be pruned and optimized for these specific inferencing tasks, enabling them to be highly accurate, relatively lightweight, and quick in inferencing. The ability to take already-trained models and adapt or further train them for a specific inferencing task can greatly simplify the training process for an end user or entity charged with providing machine learning for that task (Aghdasi, Abstract, [0020]).

Wang and Aghdasi do not teach generating a single runtime engine comprising the optimized inference runtime and the linked software functions.
However, Siravara teaches generating a single runtime engine comprising the optimized inference runtime and the linked software functions (Siravara [0011], [0020], [0040], [0044]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Wang, Aghdasi and Siravara before him or her, to further modify Wang with the teaching of Siravara's systems, devices, computer program products, apparatus, and methods used for verifying the integrity of machine learning models. One would have been motivated to do so for the benefit of providing an efficient way to verify the integrity of machine learning models (Siravara, Abstract, [0001], [0011]).

Regarding claim 8: Wang as modified teaches wherein the one or more software functions include: a cyclic redundancy check algorithm to determine an integrity of the machine-learning model (Siravara [0091-0092]).

Regarding claim 9: Wang as modified teaches further comprising: quantizing the machine-learning model; and inserting a watermark signature into the optimized inference runtime (Siravara [0028], [0040], [0091]).

Regarding claim 17: Wang teaches a system for generating a machine-learning runtime, the system comprising: one or more processors; and memory storing instructions that, when executed by the one or more processors (FIG. 5 is a flow diagram illustrating operations 500 of a method for optimizing convolutional neural network models for inference using integrated graphics processing units according to some embodiments.
Some or all of the operations 500 (or other processes described herein, or variations and/or combinations thereof) are performed under the control of one or more computer systems configured with executable instructions and are implemented as code (e.g., executable instructions, one or more computer programs, or one or more applications) executing collectively on one or more processors, by hardware, or combinations thereof. The code is stored on a computer-readable storage medium, for example, in the form of a computer program comprising instructions executable by one or more processors. The computer-readable storage medium is non-transitory (Wang, Col 12, lines 54-67)), cause the system to perform the claimed method. Although claim 17 is directed to a system, it is similar in scope to claim 7, and the method steps of claim 7 substantially encompass the system recited in claim 17. Therefore, claim 17 is rejected for at least the same reasons as claim 7 above.

Regarding claims 18-19: the method steps of claims 8-9 substantially encompass the systems recited in claims 18-19. Therefore, claims 18-19 are rejected for at least the same reasons as claims 8-9 above.

Claims 10, 13 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. (US Patent No. 11,979,876, hereafter 'Wang') in view of Aghdasi et al. (US Patent Publication No. 2021/0089921 A1, hereafter 'Aghdasi') and further in view of Liu et al. (Chinese Patent Publication No. CN 114365123 A, published 2020-08-26, hereafter 'Liu').

Regarding claim 10:
Acharya and Aghdasi do not teach wherein the watermark signature is selected based on: a bit precision of the quantized machine-learning model; and a latency of the machine-learning model when executed on a target hardware platform. However, Liu teaches wherein the watermark signature is selected based on a bit precision of the quantized machine-learning model and a latency of the machine-learning model when executed on a target hardware platform (Liu, page 15, lines 5-13 and 31-35). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Wang, Aghdasi, Siravara and Liu before him or her, to further modify Wang with the teaching of Liu's video up-sampling using one or more neural networks. One would have been motivated to do so for the benefit of training a neural network in accordance with various new techniques (Liu, Abstract; page 2, lines 20-22).

Regarding claim 13: Acharya as modified teaches wherein a given variant of the machine-learning model can be evaluated by executing the given variant on different combinations of embedded hardware devices (Liu, page 11, lines 2-25).

Regarding claim 20: Acharya as modified teaches wherein the watermark signature is selected based on: a bit precision of the quantized machine-learning model; and a latency of the machine-learning model when executed on a target hardware platform (Liu, page 15, lines 5-13 and 31-35).

Conclusion

The prior art made of record and not relied upon, if any, listed on form PTO-892, is considered pertinent to the applicant's disclosure.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HASANUL MOBIN, whose telephone number is (571) 270-1289. The examiner can normally be reached 9:00 AM to 6:00 PM EST, M-F.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Charles Rones, can be reached at (571) 272-4085. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR; status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/HASANUL MOBIN/
Primary Examiner, Art Unit 2168

Prosecution Timeline

Mar 17, 2023: Application Filed
Dec 04, 2025: Non-Final Rejection (§103, §112)
Mar 20, 2026: Interview Requested
Apr 06, 2026: Applicant Interview (Telephonic)
Apr 06, 2026: Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602398: SYNCHRONIZING STATE IN LARGE-SCALE DISTRIBUTION SYSTEMS (granted Apr 14, 2026; 2y 5m to grant)
Patent 12602390: DATA ANALYSIS SYSTEM AND METHOD (granted Apr 14, 2026; 2y 5m to grant)
Patent 12591542: DIRECTORY METADATA OPERATION METHOD AND APPARATUS, ELECTRONIC DEVICE, AND READABLE STORAGE MEDIUM (granted Mar 31, 2026; 2y 5m to grant)
Patent 12585668: EFFICIENT STATE SYNCHRONIZATION IN A CLUSTERED ENVIRONMENT USING COMPACTED KEY/TUPLE REPRESENTATIONS AND SNAPSHOT-BASED STATE RESTORATION (granted Mar 24, 2026; 2y 5m to grant)
Patent 12572504: DATA ORGANIZER OPTIMIZING RECONCILIATION SYSTEMS (granted Mar 10, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 75%
With Interview: 99% (+39.0%)
Median Time to Grant: 3y 5m
PTA Risk: Low
Based on 675 resolved cases by this examiner. Grant probability derived from career allow rate.
