Prosecution Insights
Last updated: April 19, 2026
Application No. 18/969,597

QUERY ANSWERING METHOD BASED ON LARGE MODEL, ELECTRONIC DEVICE, STORAGE MEDIUM, AND INTELLIGENT AGENT

Non-Final OA — §101, §103
Filed: Dec 05, 2024
Examiner: HOANG, SON T
Art Unit: 2169
Tech Center: 2100 — Computer Architecture & Software
Assignee: BEIJING BAIDU NETCOM SCIENCE TECHNOLOGY CO., LTD.
OA Round: 1 (Non-Final)
Grant Probability: 83% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 1m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 83% — above average (754 granted / 905 resolved; +28.3% vs TC avg)
Interview Lift: +35.0% (allowance lift among resolved cases with an interview vs without)
Typical Timeline: 3y 1m avg prosecution; 21 applications currently pending
Career History: 926 total applications across all art units
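The headline numbers in this panel are simple ratios, so they can be sanity-checked directly. A minimal Python sketch (assuming, as the panel implies, that the allow rate is granted/resolved and that the "+28.3% vs TC avg" delta is expressed in percentage points):

```python
# Sanity-check the examiner panel's headline figures from its raw counts.
def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

rate = allow_rate(754, 905)               # 754 granted / 905 resolved
print(f"Career allow rate: {rate:.1f}%")  # 83.3% — matches the panel

# If the "+28.3% vs TC avg" delta is in percentage points, the implied
# Tech Center baseline is:
tc_avg = rate - 28.3
print(f"Implied TC average: {tc_avg:.1f}%")  # ~55.0%
```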

Statute-Specific Performance

§101: 19.7% (-20.3% vs TC avg)
§103: 48.2% (+8.2% vs TC avg)
§102: 11.7% (-28.3% vs TC avg)
§112: 5.8% (-34.2% vs TC avg)
Tech Center averages are estimates. Based on career data from 905 resolved cases.
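The per-statute rates and their "vs TC avg" deltas should all imply the same Tech Center baseline if they were measured against one chart. A quick consistency check (figures copied from the panel above; percentage points assumed):

```python
# Each statute's (rate, delta vs TC average) pair, in percentage
# points, as shown in the panel above.
stats = {
    "101": (19.7, -20.3),
    "103": (48.2, +8.2),
    "102": (11.7, -28.3),
    "112": (5.8, -34.2),
}

# rate - delta recovers the Tech Center average that each delta was
# measured against; all four statutes should agree.
implied_tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
print(implied_tc_avg)  # every statute implies the same 40.0% baseline
```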

Office Action

Rejections: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status

Claims 1-20 of the instant application No. 18/969,597 are pending.

Priority / Filing Date

Applicant's claim for priority of foreign application No. CN202411132084.4 (filed on August 16, 2024) is acknowledged. The effective filing date for this application is August 16, 2024.

Abstract

The abstract of the disclosure is objected to due to the use of implied language. In the abstract, the language should be clear and concise and should not repeat information given in the title. It should avoid phrases which can be implied, such as "The disclosure concerns," "The disclosure defined by this invention," "The disclosure describes," etc. See MPEP § 608.01(b). In the abstract, Applicant recites "A query answering method, an electronic device, a storage medium, and an intelligent agent are provided, which relate…" on line 1. This recitation uses implied language and repeats the title. Revision and/or correction are required. One example is as follows: "A method relates to a field…"

Drawings

The drawings filed on December 5, 2024 are acceptable for examination purposes.

Information Disclosure Statement

As required by MPEP § 609(C), Applicant's submissions of the Information Disclosure Statements filed on August 4, 2025 and November 13, 2025 are acknowledged by the Examiner, and the cited references have been considered in the examination of the claims now pending. As required by MPEP § 609(C)(2), a copy of the PTOL-1449 initialed and dated by the Examiner is attached to the instant Office action.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim 20 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. Regarding claim 20, an intelligent agent is recited in the claim. However, this claimed component can be interpreted by a person of ordinary skill in the art as a software module that carries out the claimed functions. Furthermore, per Applicant's specification ([0207]-[0208]), the component can be implemented by software consisting of data structures and computer programs, which impart functionality when employed as a computer component. As such, the claim is not limited to statutory subject matter and is therefore non-statutory.

The claimed invention in claims 1-19 is directed to a judicial exception (i.e., an abstract idea) without significantly more. Claims 1-19 pass step 1 of the 35 U.S.C. 101 analysis since each claim is directed to a method, an electronic device comprising at least one processor and a memory (i.e., hardware components per [0207]-[0209] of the instant specification and as known in the art), or a non-transitory computer-readable storage medium. Claims 1, 18, and 19 recite, in part, elements that are directed to an abstract idea ("Courts have examined claims that required the use of a computer and still found that the underlying, patent-ineligible invention could be performed via pen and paper or in a person's mind." Versata Dev. Group v. SAP Am., Inc., 793 F.3d 1306, 1335, 115 USPQ2d 1681, 1702 (Fed. Cir. 2015)). Each claim recites the limitations of processing…a current text…to obtain a processed text…based on a task execution order…; and obtaining…an answer to the query based on the processed text.
The limitations, as drafted, recite a process that, under its broadest reasonable interpretation, covers performance of the limitations in the mind but for the recitation of generic computer components (e.g., mentally processing the current text based on a task execution order; and mentally retrieving an answer based on the processed text). That is, other than reciting generic components (e.g., a large model, processor, memory, and/or executable instructions), nothing in the claim precludes the limitations from being performed in the human mind per step 2A, prong 1, of the abstract idea analysis. Thus, the limitations are part of a mental process since they do not provide any technical details on the architecture of the large model itself. Simply reciting that an AI model performs the work does not make an abstract idea a technical improvement. The limitations describe what the system does, i.e., obtaining an answer, rather than a specific technical explanation of how that improves the computer's functionality.

Further, the claims recite the additional step of inputting…the retrieval content set and prompt information…into the large model, which is an extra-solution activity (per step 2A, prong 2, of the abstract idea analysis) that does not integrate the exception into a practical application (e.g., the elements recite trivial activity that occurred or would occur after the mental process). Each of the additional limitations is no more than mere instructions to apply the exception using generic computer components (e.g., AI model, processor, memory, and computer-executable instructions). Thus, per step 2A, prong 2, the claims are not integrated into a practical application. The claims are re-evaluated in step 2B to determine whether each limitation is more than well-understood, routine, conventional activity in the field.

The background of the limitations does not provide any indication that the computer components (e.g., AI model, computer processor executing instructions) are anything other than off-the-shelf computer components. The Symantec, TLI, and OIP Techs court decisions cited in MPEP 2106.05(d)(II) indicate that mere receiving, generating, storing, determining, identifying, and transmitting of data over a network are well-understood, routine, and conventional functions when claimed in a merely generic manner (as they are here). Accordingly, a conclusion that the claims recite well-understood, routine, conventional (WURC) activity is supported under Berkheimer Option 2. For these reasons, there is no inventive concept in each claim; thus, the claims are ineligible.

Claim 2 further recites steps of processing…the current text…; and performing…content processing…, which are implementable in a human mind and/or with the aid of pen and paper, similar to the above analysis (e.g., mentally processing the current text; and mentally performing content processing). Thus, the claim is ineligible.

Claim 3 further recites steps of performing…content processing…; and determining…the content-augmented processed text, which are implementable in a human mind and/or with the aid of pen and paper, similar to the above analysis (e.g., mentally performing content processing; and mentally determining the processed text). Thus, the claim is ineligible.

Claim 4 further recites steps of performing…content processing…; performing…noise reduction…; and performing content extraction…, which are implementable in a human mind and/or with the aid of pen and paper, similar to the above analysis (e.g., mentally performing content processing; mentally performing noise reduction; and mentally performing content extraction). Thus, the claim is ineligible.
Claim 5 further recites steps of rearranging…the plurality of text segments, and generating identification information…; and rearranging a plurality of sentences…, and generating identification information…, which are implementable in a human mind and/or with the aid of pen and paper, similar to the above analysis (e.g., mentally arranging the text segments and sentences; and mentally generating identification information). Thus, the claim is ineligible.

Claim 6 further recites steps of rearranging…the plurality of sentences…, and generating identification information…, which are implementable in a human mind and/or with the aid of pen and paper, similar to the above analysis (e.g., mentally arranging the sentences; and mentally generating identification information). Thus, the claim is ineligible.

Claim 7 further recites a step of performing…summary generation…, which is implementable in a human mind and/or with the aid of pen and paper, similar to the above analysis (e.g., mentally generating a summary based on the current text). Thus, the claim is ineligible.

Claim 8 further recites steps of processing…the current text…; and performing…structuring processing…, which are implementable in a human mind and/or with the aid of pen and paper, similar to the above analysis (e.g., mentally processing the current text; and mentally performing structuring processing). Thus, the claim is ineligible.

Claim 9 further recites steps of performing structured recognition on the current text…; and performing…format update…, which are implementable in a human mind and/or with the aid of pen and paper, similar to the above analysis (e.g., mentally performing structure recognition; and mentally performing format update). Thus, the claim is ineligible.

Claim 10 further recites a step of evaluating…the processed text…, which is implementable in a human mind and/or with the aid of pen and paper, similar to the above analysis (e.g., mentally evaluating the processed text to obtain an evaluation result). Thus, the claim is ineligible.

Claim 11 merely provides a definition for the evaluation information. Thus, the claim is ineligible.

Claim 12 further recites steps of filling the query and the retrieval content set into preset positions in the prompt…, and inputting the prompt information…into a large model…, which are extra-solution and WURC activities similar to the above analysis (e.g., typing the query and retrieval content as a prompt into the input GUI of an LLM). Thus, the claim is ineligible.

Claim 13 further recites steps of …performing…retrieval trigger analysis…; and rephrasing…the query to obtain a plurality of rephrased queries…, which are implementable in a human mind and/or with the aid of pen and paper, similar to the above analysis (e.g., mentally performing retrieval trigger analysis, and mentally rephrasing the query and writing down on paper a plurality of rephrased queries). Thus, the claim is ineligible.

Claim 14 further recites a step of generating…the answer to the query…, which is implementable in a human mind and/or with the aid of pen and paper, similar to the above analysis (e.g., mentally generating the answer). Thus, the claim is ineligible.

Claim 15 further recites a step of rephrasing, based on a rephrasing rule…, the query…, which is implementable in a human mind and/or with the aid of pen and paper, similar to the above analysis (e.g., mentally rephrasing the query based on a certain rule, and writing down on paper a plurality of rephrased queries). Thus, the claim is ineligible.

Claim 16 further recites steps of rephrasing…the query…; and obtaining…the plurality of rephrased queries…, which are implementable in a human mind and/or with the aid of pen and paper, similar to the above analysis (e.g., mentally rephrasing the query based on a certain rule, and writing down on paper a plurality of rephrased queries that better match a condition of the certain rule). Thus, the claim is ineligible.
Claim 17 further recites steps of filling the query and the retrieval content set into preset positions in the prompt…, and inputting the prompt information…into a large model…, which are extra-solution and WURC activities similar to the above analysis (e.g., typing the query and retrieval content as an ordered prompt into the input GUI of an LLM). Thus, the claim is ineligible.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 8-11, and 18-20 are rejected under AIA 35 U.S.C. 103 as being unpatentable over Gunjal et al. (Pub. No. US 2025/0335928, filed on April 29, 2024; hereinafter Gunjal) in view of Ehrlich et al. (Pub. No. US 2025/0373487, filed on May 30, 2024; hereinafter Ehrlich).
Regarding claims 1 and 18-20, Gunjal clearly shows and discloses a query answering method based on a large model (Abstract); an electronic device, comprising: at least one processor; and a memory communicatively coupled with the at least one processor, wherein the memory stores instructions executable by the at least one processor, and the instructions, when executed by the at least one processor, cause the at least one processor to implement the method; a non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are configured to cause the computer to implement the method; and an intelligent agent configured to implement the method (Figure 1), comprising:

inputting, in response to a retrieval content set retrieved based on a query, the query, the retrieval content set and prompt information for answer generation into the large model (the prompter module 229 can generate a prompt using the refined query and the search results. In some examples, the prompt can include conversational history context. For example, the conversation history updater module 234 can provide at least a portion of a historical conversation that the user 218 has had with the chatbot (e.g., the last Y refined queries and LLM responses), which can be used by the prompter module 229 as conversational history context for the prompt that is to be sent to the LLM. Here, the prompt can include at least a portion of the refined query, the search results, and the conversational history context, [0056]), so that the large model performs operations of:

processing, based on a current task to be executed in the prompt information for answer generation and the query, a current text corresponding to the retrieval content set to obtain a processed text (The LLM processes the prompt and provides output as a returned response to the prompt, [0057]); and

obtaining, in a case of determining that the processed text meets a preset condition, an answer to the query based on the processed text (the output verification module 142 mitigates potential for LLMs to produce erroneous and/or inaccurate outputs by checking LLM responses for factual accuracy and relevance. In this manner, the output verification module 142 functions as a quality control agent, ensuring that the information provided meets standards for reliability. For example, the output verification module 142 checks for the answer relevance, context relevance and groundedness of the output, [0037]. The prompter 229 provides the output to the output verification module 230, which verifies the output of the LLM. For example, and as described herein, the output verification module 230 verifies an accuracy of the output and/or that the output is absent misinformation, [0057]).

Ehrlich then discloses wherein the current task to be executed is determined based on a task execution order in the prompt information for answer generation (the prompt 700 of FIG. 7 requests an interpretation of each rule (e.g., "Can you please provide your interpretation of each of these 7 rules with respect to the order of the events in the antecedent occurring and why they predicted the consequent event?"). The response 800 addresses each of these rules, providing a concise interpretation, [0063]-[0064]).
It would have been obvious to a person of ordinary skill in the art at the time the invention was effectively filed to incorporate the teachings of Ehrlich with the teachings of Gunjal for the purpose of applying text-based discovery and knowledge-enhanced context capabilities of generative models to interpret and enhance the results returned for an information retrieval request.

Regarding claim 2, Ehrlich further discloses that the current task to be executed comprises a content arrangement task (Figure 7 shows a prompt comprising a request to interpret each of the attached rules with respect to the order of the events that previously occurred); and that processing, based on the current task to be executed for answer generation in the prompt information for answer generation, the current text corresponding to the retrieval content set to obtain the processed text (queries/prompts to a generative model may be "careful" prompts, e.g., in which sequential rule documentation is embedded within the prompt content, and/or in which one or more sequences may be similarly embedded, or accessed from a database of N sequences and added to the prompt content, [0026]), comprises: performing, based on the content arrangement task, content processing on the current text to obtain a content-augmented processed text (the prompt 700 of FIG. 7 requests an interpretation of each rule (e.g., "Can you please provide your interpretation of each of these 7 rules with respect to the order of the events in the antecedent occurring and why they predicted the consequent event?"). The response 800 addresses each of these rules, providing a concise interpretation, [0064]. Figure 8 shows a response based on the ordered analysis of sequential events and respective rules).
Regarding claim 8, Ehrlich further discloses that the current task to be executed comprises a structural arrangement task; and that processing, based on the current task to be executed, the current text corresponding to the retrieval content set to obtain the processed text, comprises: performing, based on the structural arrangement task, structuring processing on the current text to obtain a structurally augmented processed text (where the prompt 910 states "Can you interpret each of the 7 rules generated in terms of network topology and how network topology affected the sequence of failed events within the sequence? Can you please include, in your interpretation of each of these 7 sequential rules with respect to network topology, the actual rule that you are discussing," the generative model may retain prior context from the prompt 700, in particular the rules in question that are referenced in the additional prompt 910. In this case, the response 920 restates each rule followed by a concise interpretation of the respective rule, [0065]).

Regarding claim 9, Ehrlich further discloses that performing, based on the structural arrangement task, structuring processing on the current text to obtain the structurally augmented processed text, comprises: performing structured recognition on the current text to obtain structural recognition information for a plurality of text segments in the current text (queries/prompts to a generative model may be "careful" prompts, e.g., in which sequential rule documentation is embedded within the prompt content, and/or in which one or more sequences may be similarly embedded, or accessed from a database of N sequences and added to the prompt content, [0026]); and performing, based on a structural format in the structural arrangement task and the structural recognition information for the plurality of text segments, format update on the current text to obtain the structurally augmented processed text (the first two sequences 1 and 2 represent sequences of ordered NF failure events, separated by the delineator ",", observed in failed INITIAL_REGISTATION transactions. Similarly, sequences 3-9 represent sequences of ordered NF failure events, separated by the delineator ",", observed in failed PDU_SESSION_ESTABLISHMENT transactions. In addition, in the example of FIG. 3, each NF failure event in a sequence is represented as an "@" delineated 4-tuple. The first element of the 4-tuple represents the NF that initiated the request, the second element represents the NF receiving the request, the third element is the logical interface between the NFs, and the fourth element is the message returned by the recipient of the request (e.g., a failure event message/failure event message content), [0054]).
Regarding claim 10, Gunjal further discloses evaluating, based on evaluation information in the prompt information for answer generation, the processed text to obtain an evaluation result configured to indicate whether the processed text meets the preset condition (the output verification module 142 mitigates potential for LLMs to produce erroneous and/or inaccurate outputs by checking LLM responses for factual accuracy and relevance. In this manner, the output verification module 142 functions as a quality control agent, ensuring that the information provided meets standards for reliability. For example, the output verification module 142 checks for the answer relevance, context relevance and groundedness of the output, [0037]).

Regarding claim 11, Gunjal further discloses that the evaluation information comprises at least one of an evaluation indicator and reference information (The prompter 229 provides the output to the output verification module 230, which verifies the output of the LLM. For example, and as described herein, the output verification module 230 verifies an accuracy of the output and/or that the output is absent misinformation, [0057]).

Claims 4 and 6 are rejected under AIA 35 U.S.C. 103 as being unpatentable over Gunjal in view of Ehrlich, and further in view of Krishnamani (Pub. No. US 2025/0378283, filed on June 6, 2024).

Regarding claim 4, Krishnamani discloses that the content arrangement task comprises a content extraction task; and that performing, based on the content arrangement task, content processing on the current text to obtain the content-augmented processed text, comprises: performing, based on the content extraction task, noise reduction on the current text to obtain a noise-reduced text (the input processor 405 may perform various types of text cleaning to remove noise (e.g., special characters, punctuation, HTML tags, stopwords) from relevant textual content. In an example involving stopwords (common words that tend to carry little semantic meaning), the input processor 405 may remove stopwords to reduce noise and focus the generative LLM 430 on more meaningful content, [0076]); and performing content extraction on the noise-reduced text to obtain a plurality of hierarchically augmented text segments as the content-augmented processed text (The tokenizer 410 may segment the (e.g., processed) text into smaller units (tokens) for subsequent analysis and processing. The tokens may represent individual words, subwords, or characters, depending on the implementation. Word-based tokenization divides the text into individual words, treating each word as a separate token. Subword tokenization breaks down words into smaller meaningful units (e.g., prefixes, suffixes, stems), enabling the generative LLM 430 to understand morphological variations and handle out-of-vocabulary words more effectively, [0077]).

It would have been obvious to a person of ordinary skill in the art at the time of the effective filing date to incorporate the teachings of Krishnamani with the teachings of Gunjal, as modified by Ehrlich, for the purpose of enhancing generative AI with logical reasoning based on natural language statements by checking the consistency of a set of statements, such as natural language statements, to provide transparency in decision making.
Regarding claim 6, Krishnamani further discloses that performing content extraction on the noise-reduced text to obtain the plurality of hierarchically augmented text segments as the content-augmented processed text, comprises: rearranging, based on contextual relationships among a plurality of sentences obtained by splitting the noise-reduced text, the plurality of sentences, and generating identification information configured to identify respective contextual relationships of the plurality of sentences, so as to obtain the hierarchically augmented processed text (assume input text such as "Who discovered gravity" is tokenized (e.g., by the tokenizer 410 of FIG. 4A) into tokens such as words, and each token is encoded (e.g., by the embedding component 420 of FIG. 4A) into a corresponding embedding (e.g., of size 512). Since these token embeddings typically do not represent the position of the token in the input sequence, any known technique may be used to add a positional encoding to each token embedding to encode the sequential relationships and context of the tokens in the input sequence, [0081]-[0083]).

Claim 7 is rejected under AIA 35 U.S.C. 103 as being unpatentable over Gunjal in view of Ehrlich, and further in view of Mondlock et al. (Pub. No. US 2025/0131247, filed on October 24, 2023; hereinafter Mondlock).

Regarding claim 7, Mondlock discloses performing, based on a summary generation task in the prompt information for answer generation, summary generation processing on the retrieval content set to obtain a summary set as the current text (The augmented user query may be sent to the LLM service 170 and an answer may be received from the LLM service 170 by the LLM interface module 132. The augmented user query may be generated and sent by the combining the relevant information and answering the user query block 388 of the RAG pipeline 300. The augmented user query may include each of the relevant information responses, the user query, and a prompt causing the LLM to generate an answer. The prompt may cause the LLM to generate the answer by combining each of the plurality of relevant information responses into a relevant response block and summarizing the relevant response block into an answer, [0149]).

It would have been obvious to a person of ordinary skill in the art at the time of the effective filing date to incorporate the teachings of Mondlock with the teachings of Gunjal, as modified by Ehrlich, for the purpose of improving generative artificial intelligence systems through the use of generative AI pipelines that supply external information to pre-trained large language models for use in answering queries, wherein such queries may be modified and augmented with additional relevant information from decentralized data sources.

Claim 12 is rejected under AIA 35 U.S.C. 103 as being unpatentable over Gunjal in view of Ehrlich, and further in view of Vetzler et al. (Pub. No. US 2025/0371016, filed on May 30, 2024; hereinafter Vetzler).

Regarding claim 12, Vetzler discloses that inputting the query, the retrieval content set and the prompt information for answer generation into the large model comprises: filling the query and the retrieval content set into preset positions in the prompt information for answer generation, respectively (Figure 4 shows the input to the field LLM in the order of the user prompt + {Doc5, Doc1, Doc3, Doc2, Doc9}); and inputting the prompt information for answer generation into the large model, wherein the prompt information comprises a plurality of tasks added with order identifiers configured to indicate an order in which tasks are executed (Reference numeral 410 indicates the final form of the prompt sent to the field LLM by the retrieval optimization engine.
The prompt includes the original user prompt and the additional reference documents of the preferred document subset. The field LLM generates an answer, namely, "Regular exercise has numerous health benefits. It can help improve your cardiovascular health and prevent chronic diseases. It is found to be effective in controlling weight and boosting energy. Moreover, regular exercise can reduce depression and promote better sleep.", [0078]).

It would have been obvious to a person of ordinary skill in the art at the time of the effective filing date to incorporate the teachings of Vetzler with the teachings of Gunjal, as modified by Ehrlich, for the purpose of processing a user prompt and corresponding document subset by a field LLM to generate an answer based on a prompt template with ordered processing based on the positions of elements in the prompt.

Allowable Subject Matter

Claims 3, 5, and 13-17 are objected to as being dependent on a respective rejected base claim, but would be allowable over the prior art if rewritten in independent form to incorporate the limitations of the respective base claim and all intervening claims.

Relevant Prior Art

The following references are deemed relevant to the claims:

Zhou et al. (Pub. No. US 2025/0342218) teaches that a prompt generation component, through a contextual data accessing component, may provide contextual information to a language model in the prompt, in order for the language model to utilize the contextual information to generate a summary of the snippets of information of search results based on the input search query. Data regarding the search session, such as previous search queries, search results, and/or generated summaries, may be provided to the language model via the prompt generation component through the contextual data accessing component to provide additional context when generating the summary.

Roche et al. (Pub. No. US 2015/0074565) teaches that a user interface (UI) may be configured to display the data access steps needed to access additional data, and may further display the additional data once it is received alongside the existing query results. The UI can display a prompt for authentication credentials or other identifiers that would allow the user access to the additional data elements. If subsequent authentication, workflow or other steps are needed, those may also be displayed in the UI. The UI may be generated and/or updated dynamically on demand. Thus, if a user has entered a certain query and has received certain results, the UI may be dynamically generated to show those results. The dynamically generated UI may further generate prompts, buttons or other means for allowing the user to provide inputs in order to access any additional data elements indicated in the query results.

Contact Information

Any inquiry concerning this communication or earlier communications from the Examiner should be directed to Son Hoang, whose telephone number is (571) 270-1752. The Examiner can normally be reached Monday – Friday, 7:00 AM – 4:00 PM. If attempts to reach the Examiner by telephone are unsuccessful, the Examiner's supervisor, Sherief Badawi, can be reached at (571) 272-9782. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SON T HOANG/
Primary Examiner, Art Unit 2169
December 27, 2025

Prosecution Timeline

Dec 05, 2024
Application Filed
Dec 26, 2025
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591561 — ACCESSING A PRIMARY CLUSTERY KEY INDEX STRUCTURE DURING QUERY EXECUTION
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12566762 — Space Efficient Technique For Estimating Cardinality Using Probabilistic Data Structure
Granted Mar 03, 2026 (2y 5m to grant)

Patent 12561337 — SYSTEM AND METHOD FOR PATENT AND PRIOR ART ANALYSIS
Granted Feb 24, 2026 (2y 5m to grant)

Patent 12554720 — PREDICATE TRANSFER PRE-FILTERING ON MULTI-JOIN QUERIES
Granted Feb 17, 2026 (2y 5m to grant)

Patent 12554766 — ACCESS POINTS FOR MAPS
Granted Feb 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 83%
With Interview (+35.0%): 99%
Median Time to Grant: 3y 1m
PTA Risk: Low

Based on 905 resolved cases by this examiner. Grant probability derived from career allow rate.
