Prosecution Insights
Last updated: April 19, 2026
Application No. 19/292,294

Method and System for Multi-Level Artificial Intelligence Supercomputer Design

Final Rejection §102
Filed: Aug 06, 2025
Examiner: PHAM, THIERRY L
Art Unit: 2654
Tech Center: 2600 (Communications)
Assignee: Vijay Madisetti
OA Round: 2 (Final)

Grant Probability: 81% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 9m
Grant Probability With Interview: 85%

Examiner Intelligence

Career Allow Rate: 81% (569 granted / 705 resolved), +18.7% vs TC avg (above average)
Interview Lift: +4.7% (minimal, roughly +5%), based on resolved cases with an interview
Typical Timeline: 2y 9m average prosecution; 12 applications currently pending
Career History: 717 total applications across all art units
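The headline figures above appear to follow directly from the raw counts. Below is a minimal arithmetic sketch of that reading; treating the interview lift as simply additive, and the rounding, are assumptions on my part rather than the dashboard's documented methodology.

```python
# Back-of-the-envelope derivation of the displayed examiner figures.
# Assumption: the interview lift is added directly to the career allow rate.
granted, resolved = 569, 705

career_allow_rate = granted / resolved                # 0.807 -> shown as 81%
interview_lift = 0.047                                # +4.7 points from interviewed cases
with_interview = career_allow_rate + interview_lift   # 0.854 -> shown as 85%

print(f"Career allow rate: {career_allow_rate:.1%}")  # 80.7%
print(f"With interview:    {with_interview:.1%}")     # 85.4%
```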

Statute-Specific Performance

§101: 11.7% (-28.3% vs TC avg)
§103: 40.0% (+0.0% vs TC avg)
§102: 29.4% (-10.6% vs TC avg)
§112: 8.2% (-31.8% vs TC avg)
Tech Center averages are estimates; figures are based on career data from 705 resolved cases.
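One way to sanity-check the deltas: subtracting each reported delta from the examiner's rate recovers the implied Tech Center baseline, which works out to roughly 40% for every statute shown. The sketch below is purely a check on the numbers above, not the dashboard's actual methodology.

```python
# Implied Tech Center baselines: examiner rate minus the reported delta.
examiner_rate = {"§101": 11.7, "§103": 40.0, "§102": 29.4, "§112": 8.2}
delta_vs_tc = {"§101": -28.3, "§103": 0.0, "§102": -10.6, "§112": -31.8}

for statute, rate in examiner_rate.items():
    implied_tc_avg = rate - delta_vs_tc[statute]   # 40.0 in every row
    print(f"{statute}: examiner {rate:.1f}% vs implied TC avg {implied_tc_avg:.1f}%")
```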

Office Action

§102
DETAILED ACTION

Notice of Pre-AIA or AIA Status
● The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
● This action is responsive to the following communication: an amendment filed on 2/12/2026.
● Claims 1-3, 5-8, 10-13, and 15 are currently pending; claims 4, 9, and 14 have been canceled.

Response to Arguments
● Applicant's arguments with respect to the claims have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-3, 5-8, 10-13, and 15 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Gray et al. (US 11769017).

Regarding claim 1, Gray discloses a method for in-memory processing of h-LLM data comprising: receiving an input data stream (receive a query, fig. 2); operating a data receiver operable to divide the input data stream into a plurality of data batches (divide/separate an inputted query into batches, figs. 2, 3, and process said batches in parallel for faster processing, pars. 121, 122) by aggregating the data over an aggregation period (duration of time, par. 55) tied to a fixed schedule for batch processing; processing the plurality of data batches using a processing layer (processing batches using a different family of LLM architectures/layers/models, pars. 121-122), the processing layer comprising a plurality of h-LLMs operating at least partially in volatile memory (e.g., Random Access Memory, par. 131), each h-LLM of the plurality of h-LLMs being trained with a different training set (training data, col. 3, lines 20-35), the processing layer being configured to process the plurality of data batches in parallel (parallel processing, pars. 121-122) using the plurality of h-LLMs; and producing a plurality of processed data batches (outputting processed data from batches, figs. 2-5, pars. 121-122) from an output of the processing layer.

Regarding claim 2, Gray further discloses the method of claim 1 wherein the volatile memory comprises at least one of random-access memory (RAM) devices (RAM 830, par. 131), static random-access memory (SRAM) devices, dynamic random-access memory (DRAM) devices, magnetoresistive random-access memory (MRAM) devices, and non-volatile random-access memory (NVRAM) devices.

Regarding claim 3, Gray further discloses the method of claim 1 wherein the volatile memory consists of one of random-access memory (RAM) devices (RAM 830, par. 131), static random-access memory (SRAM) devices, dynamic random-access memory (DRAM) devices, magnetoresistive random-access memory (MRAM) devices, and non-volatile random-access memory (NVRAM) devices.

Regarding claim 5, Gray further discloses the method of claim 1 wherein the processing is performed entirely within the volatile memory (RAM 830, par. 131).

Claims 6-8, 10-13, and 15 recite limitations that are similar to, and within the same scope of invention as, those in claims 1-3 and 5 above; therefore, claims 6-8, 10-13, and 15 are rejected on the same rationale/basis as described for claims 1-3 and 5.

Response to Arguments
● Applicant's arguments filed 2/12/2026 have been fully considered but they are not persuasive.
● Regarding claim 1, the applicant argued that the cited prior art of record fails to teach and/or suggest "each h-LLM of the plurality of h-LLM being trained with a different training set". In response, the examiner fully disagrees. The examiner notes that the claim's recitation of "being trained with different training set" is not the same as "being trained with different training dataset". Gray teaches that each LLM of the plurality of LLMs is trained with a different training set. See below for details.

(122) In some versions of those implementations, one or more of the LLMs that are utilized in parallel can be truly different from other of the LLM(s). For example, a first of the LLMs can be trained and/or fine-tuned differently than a second of the LLMs. Also, for example, each of the LLMs can be trained and/or fine-tuned differently than all other of the LLMs. As another example, a first of the LLMs can have a first architecture that differs from a second architecture of a second of the LLMs. In some additional or alternative versions of those implementations, two or more (e.g., all) of the LLMs that are utilized in parallel are the same (e.g., architecturally, training, and/or fine-tuning wise), but different content is processed among the two or more LLMs. For example, first search result document(s) can be processed using a first LLM and second search result document(s) can be processed using a second LLM. As another example, a first subset of content from first search result document(s) can be processed using a first LLM and a second subset of content from the first search result document(s) can be processed using a second LLM. As yet another example, a first prompt can be processed (along with additional content) using a first LLM and a second prompt can be processed (optionally along with the same additional content) using a second LLM. Utilizing multiple LLMs in parallel for a given query, while optionally selecting a candidate NL based summary from only one, can mitigate occurrences of the selected candidate NL based summary being difficult to parse, inaccurate, or otherwise not resonating with a user. Put another way, running multiple LLMs in parallel can leverage that different LLMs may perform better in some situations than others, and enables utilizing output from the LLM that is best suited for the current situation.

Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to THIERRY L PHAM, whose telephone number is (571) 272-7439. The examiner can normally be reached M-F, 11-6. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Hai Phan, can be reached at 571-272-6338. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/THIERRY L PHAM/
Primary Examiner, Art Unit 2654
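For orientation, the pipeline claim 1 recites, as the examiner characterizes it, is: aggregate an input stream into batches on a fixed schedule, run the batches in parallel through several h-LLMs held in volatile memory, each trained on a different training set, and emit the processed batches. The sketch below illustrates that flow only; every name (HLLM, aggregate_into_batches, processing_layer) is a hypothetical stand-in, not the applicant's claimed implementation or Gray's disclosed one.

```python
# Hypothetical illustration of the claim 1 pipeline as characterized in the rejection.
from concurrent.futures import ThreadPoolExecutor
from typing import Iterable, List


class HLLM:
    """Stand-in for one h-LLM, nominally trained on its own training set."""

    def __init__(self, training_set_id: str):
        self.training_set_id = training_set_id  # "different training set" per model

    def process(self, batch: List[str]) -> List[str]:
        # Placeholder for RAM-resident inference ("at least partially in volatile memory").
        return [f"{self.training_set_id}:{item}" for item in batch]


def aggregate_into_batches(stream: Iterable[str], batch_size: int) -> List[List[str]]:
    """Divide the input stream into batches over a fixed aggregation period/size."""
    items = list(stream)
    return [items[i:i + batch_size] for i in range(0, len(items), batch_size)]


def processing_layer(batches: List[List[str]], models: List[HLLM]) -> List[List[str]]:
    """Process the batches in parallel, assigning h-LLMs round-robin."""
    with ThreadPoolExecutor(max_workers=max(1, len(models))) as pool:
        futures = [pool.submit(models[i % len(models)].process, batch)
                   for i, batch in enumerate(batches)]
        return [f.result() for f in futures]


if __name__ == "__main__":
    models = [HLLM("set-A"), HLLM("set-B"), HLLM("set-C")]
    batches = aggregate_into_batches((f"record-{n}" for n in range(9)), batch_size=3)
    print(processing_layer(batches, models))
```

On this reading, the dispute in the Response to Arguments turns on the phrase "different training set" (versus different training data or fine-tuning, which Gray's paragraph 122 describes) rather than on the shape of the pipeline itself.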

Prosecution Timeline

Aug 06, 2025: Application Filed
Nov 25, 2025: Non-Final Rejection (§102)
Jan 14, 2026: Interview Requested
Feb 03, 2026: Applicant Interview (Telephonic)
Feb 04, 2026: Examiner Interview Summary
Feb 12, 2026: Response Filed
Mar 18, 2026: Final Rejection (§102), current

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586585: SPEECH RECOGNITION APPARATUS, CONTROL METHOD, AND NON-TRANSITORY STORAGE MEDIUM
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12585891: NATURAL LANGUAGE GENERATION USING KNOWLEDGE GRAPH INCORPORATING TEXTUAL SUMMARIES
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12579376: LABEL PROPAGATION USING CONTRASTIVE LEARNING PROJECTIONS
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12554941: PROCESSING EVENT DATA AND/OR TABULAR DATA FOR INPUT TO ONE OR MORE MACHINE LEARNING MODELS
Granted Feb 17, 2026 (2y 5m to grant)
Patent 12547648: LANGUAGE MODEL DECODING FOR SEARCH QUERY COMPLETION
Granted Feb 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner, based on these 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 81%
With Interview: 85% (+4.7%)
Median Time to Grant: 2y 9m
PTA Risk: Moderate
Based on 705 resolved cases by this examiner; grant probability derived from the career allow rate.
