Prosecution Insights
Last updated: April 19, 2026
Application No. 18/406,782

Computer Program and Method for Annotated Document Processing Based on User-Defined Parameters

Non-Final OA: §101, §102, §103, §112
Filed
Jan 08, 2024
Examiner
ADESANYA, OLUJIMI A
Art Unit
2658
Tech Center
2600 — Communications
Assignee
Deep Water Point & Associates
OA Round
1 (Non-Final)
66%
Grant Probability
Favorable
1-2
OA Rounds
3y 6m
To Grant
91%
With Interview

Examiner Intelligence

Grants 66% — above average
66%
Career Allow Rate
430 granted / 655 resolved
+3.6% vs TC avg
Strong +26% interview lift
+25.5%
Interview Lift
allow rate among resolved cases with vs. without an interview
Typical timeline
3y 6m
Avg Prosecution
35 currently pending
Career history
690
Total Applications
across all art units
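The headline figures above follow from simple ratios on the examiner's record. A minimal sketch of the arithmetic as we read it (the dashboard's exact methodology is not disclosed; the additive treatment of the interview lift is our assumption):

```python
# Career allow rate: granted / resolved, per the figures shown above.
granted, resolved = 430, 655
allow_rate = granted / resolved                 # ~0.656, displayed as 66%

# Interview lift: the panel reports 91% with interview, i.e. the base
# rate plus a +25.5-point lift (assumed additive here for illustration).
interview_lift = 0.255
with_interview = allow_rate + interview_lift    # ~0.911, displayed as 91%

print(f"allow rate {allow_rate:.1%}, with interview {with_interview:.1%}")
```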

Statute-Specific Performance

§101
19.3%
-20.7% vs TC avg
§103
40.6%
+0.6% vs TC avg
§102
17.7%
-22.3% vs TC avg
§112
12.9%
-27.1% vs TC avg
Black line = Tech Center average estimate • Based on career data from 655 resolved cases
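The per-statute deltas let the Tech Center average be recovered by subtraction, which is a quick consistency check on the panel above (whatever the underlying metric measures, each rate minus its delta should land on the same TC baseline):

```python
# Each entry: (examiner's rate, delta vs Tech Center average), from the panel.
stats = {
    "101": (0.193, -0.207),
    "102": (0.177, -0.223),
    "103": (0.406, +0.006),
    "112": (0.129, -0.271),
}

# Implied Tech Center average per statute: rate - delta.
tc_avg = {s: rate - delta for s, (rate, delta) in stats.items()}
```

All four entries imply the same ~40% Tech Center baseline, so the deltas are internally consistent.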

Office Action

§101 §102 §103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-19 are rejected under 35 U.S.C. 101 because the claimed invention is directed to the abstract idea of query analysis without significantly more. Claims 1 and 14 recite steps of identifying the document, the taxonomy, and a question to be processed by a large language model (LLM) (i.e., a data analysis/evaluation step), applying at least one taxonomy augmented generation (TAG) tactic from a set of TAG tactics (i.e., a data analysis/evaluation step), generating an input prompt configured to be received as an input for the LLM, wherein the input prompt includes a document context derived from the document, a taxonomy context derived from the taxonomy, and the question to be processed by the LLM (i.e., a data analysis/evaluation step), providing the input prompt to the LLM (i.e., a data analysis/evaluation step), and receiving a response generated by the LLM (i.e., a post-solution activity step). These steps correspond to steps achievable by a human mentally/manually in analyzing gathered queries, performing analysis of the queries, and providing a response to the queries, and as such fall within the mental processes category of abstract ideas.
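For orientation only, the five recited steps map onto a short pipeline. The sketch below is our illustrative reconstruction; every name in it is hypothetical and nothing is drawn from the application's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Prompt:
    document_context: str   # derived from the document
    taxonomy_context: str   # derived from the taxonomy
    question: str           # the question to be processed by the LLM

def answer_with_tag(document, taxonomy, question, tactics, llm):
    """Illustrative sketch of the recited steps: identify the inputs, apply
    at least one TAG tactic, generate the input prompt, provide it to the
    LLM, and receive the response. All names here are hypothetical."""
    doc_ctx, tax_ctx = document, taxonomy            # identify inputs
    for tactic in tactics:                           # apply TAG tactic(s)
        doc_ctx, tax_ctx = tactic(doc_ctx, tax_ctx)
    prompt = Prompt(doc_ctx, str(tax_ctx), question) # generate input prompt
    return llm(prompt)                               # provide prompt; receive response
```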
This judicial exception is not integrated into a practical application because the claims are directed to an abstract idea with additional generic computer elements, where the generically recited computer elements (LLM, computer-implemented method, computer program product, medium, processor) do not add a meaningful limitation to the abstract idea because they amount to simply implementing the abstract idea on a computer. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the steps "providing the input prompt to the LLM" and "receiving a response generated by the LLM" correspond to the well-understood, routine, conventional computer functions of "gathering and analyzing information using conventional techniques and displaying the result" and "collecting information, analyzing it, and displaying certain results of the collection and analysis" as recognized by the court decisions listed in MPEP § 2106.05 and as provided by cited references Mukherjee and Krishnan (see PTO-892 form). The dependent claims are rejected based on their dependency.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph: The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 5-12 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claims 5-12 (all dependent from claim 1) recite the limitations "the attribute trimming", "the hierarchical diving", "the hierarchy flattening", "the singular item focusing" and "the borrowing alignment". There is insufficient antecedent basis for these limitations in claims 5-12. There is also insufficient antecedent basis for "the hierarchical dive" in claim 11. Claims 5-12 are interpreted as dependent from claim 2. "The hierarchical dive" as recited in claim 11 is interpreted as "the hierarchical diving". Appropriate correction is required.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

1. Claims 1-4, 8 and 10-19 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Mukherjee et al. US 2024/0354436 A1 ("Mukherjee").

Per claim 1, Mukherjee discloses a computer-implemented method for identifying one or more matching elements from a taxonomy present in a document, the method comprising: identifying the document, the taxonomy, and a question to be processed by a large language model (LLM) (para. [0006]; the system may search and identify (e.g., through a document search model) relevant data permissioned to a user in response to receiving a user query from the user …, para. [0036]; For example, the system may chunk documents into a plurality of words, sentences, paragraphs, and/or the like. The text chunks (e.g., the plurality of portions of the set of documents) may be stored in an ontology, or based on an ontology, which may define document/data types and associated properties, and relationships among documents/data types, properties, and/or the like …, para. [0037]; para. [0038]; para. [0043]-[0044]); applying at least one taxonomy augmented generation (TAG) tactic from a set of TAG tactics (para. [0052]; para. [0085]; generating a prompt for the LLM 130 based on the context in addition to the user input and portions of the set of documents similar to the user input can assist the LLM 130 …, para. [0092]; the prompt generation module 114 may condense the context when generating the prompt for the LLM 130 …, para. [0093]-[0095]; para. [0099], condensing/reducing size of context/ontology as TAG tactic); generating an input prompt configured to be received as an input for the LLM, wherein the input prompt includes a document context derived from the document, a taxonomy context derived from the taxonomy, and the question to be processed by the LLM (para. [0037]; para. [0041]; para. [0043]-[0044]; generating a prompt for the LLM 130 based on the context in addition to the user input and portions of the set of documents similar to the user input can assist the LLM 130 … To utilize context associated with the user input to generate the prompt for the LLM 130, the context module 112 may capture … an ontology stored in the database module 108, para. [0092], prompt for the LLM as including context (i.e., ontology), the user input (i.e., question) and portions of the set of documents (i.e., document context)); providing the input prompt to the LLM (The system may then transmit the prompt to the LLM …, para. [0044]; para. [0092]); and receiving a response generated by the LLM (The system may then transmit the prompt to the LLM for the LLM to generate an output …, para. [0044]; para. [0092]).

Per claim 2, Mukherjee discloses the computer-implemented method of claim 1, wherein the set of TAG tactics includes: attribute trimming (the prompt generation module 114 may condense the context when generating the prompt for the LLM 130. Specifically, the prompt generation module 114 may condense the context and/or the prompt such that a size of the prompt generated by the prompt generation module 114 for the LLM 130 does not exceed or overflow a size limit on the prompt for the LLM 130 …, para. [0093]), hierarchical diving (fig. 2A; para. [0094]-[0095]; para. [0099]), hierarchy flattening (para. [0052]; The document search module 106 may further vectorize the text chunks to generate a plurality of vectors, where each of the plurality of vectors corresponds to a chunked portion/segment (e.g., a word, a sentence, a paragraph, or the like) of the set of documents. Each text chunk and vector may be associated with a reference identification number (ID) and each text chunk and vector as well as an associated reference ID may be stored in the ontology of the document search system 102, where the ontology may be within the database module 108 …, para. [0085]), singular item focusing (the system may query the ontology using the reference IDs of the vectors corresponding to the n most similar portions of the set of documents returned by the similarity search to retrieve/obtain portions of the set of documents similar to the user query …, para. [0043]; para. [0092]), and borrowing alignment (The ontology may include stored information providing a data model for storage of data in the database. The ontology may be defined by one or more object types …, para. [0052]; identify ontology 205 that the LLM 130 may traverse in fulfilling a user input 240, para. [0099]).

Per claim 3, Mukherjee discloses the computer-implemented method of claim 1, wherein each tactic of the set of TAG tactics is configured to reduce the size of an input prompt for the LLM (the context module 112 may capture … an ontology stored in the database module 108, para. [0092]; the prompt generation module 114 may condense the context when generating the prompt for the LLM 130 …, para. [0093]).

Per claim 4, Mukherjee discloses the computer-implemented method of claim 1, wherein the total combined size of the input prompt is less than the size of a maximum token constraint for the LLM (para. [0041]; para. [0044]; para. [0093]).

Per claim 8, Mukherjee discloses the computer-implemented method of claim 1, wherein applying the hierarchy flattening TAG tactic assigns an ID-tag that retains hierarchy information for each taxonomy item included in the taxonomy context (para. [0052]; para. [0085]).

Per claim 10, Mukherjee discloses the computer-implemented method of claim 1, wherein applying the borrowing alignment TAG tactic includes: instructing the LLM to act as an expert for a specific domain of knowledge for which the LLM has previously received training, wherein the specific domain of knowledge encompasses one or more scopes of the taxonomy associated with the question (para. [0052]); determining whether or not the question for the LLM requires returning any taxonomy-related items (para. [0079]; para. [0097]-[0099]); in response to determining that taxonomy-related items are required, extracting the taxonomy-related items from the response for each item (para. [0097]-[0099]); performing a semantic search of the taxonomy for each item (para. [0097]-[0099]); and mapping the taxonomy-related items extracted from the response to pre-existing knowledge in the LLM obtained from previous training (para. [0097]).

Per claim 11, Mukherjee discloses the computer-implemented method of claim 1, wherein the attribute trimming TAG tactic and the hierarchical dive TAG tactic can both be applied to the document context before the input prompt is generated (fig. 2A; the system may condense the context when generating the prompt for the LLM. Specifically, the system may condense the context and/or the prompt such that a size of the prompt generated by the system for the LLM does not exceed or overflow a size limit on the prompt for the LLM …, para. [0047]; para. [0094]-[0095]; para. [0099]).

Per claim 12, Mukherjee discloses the computer-implemented method of claim 1, wherein the borrowing alignment TAG tactic cannot be applied in conjunction with any of the other TAG tactics (para. [0054]; An LLM may be of any type, including a Question Answer ("QA") LLM that may be optimized for generating answers from a context, a multimodal LLM/model, and/or the like …, para. [0060]; the LLM may be fine-tuned or trained on appropriate training data (e.g., annotated data showing correct or incorrect pairings of sample natural language queries and responses). After receiving a user input from the user 150, the document search system 102 may generate and provide a prompt to a LLM 130a, which may include one or more large language models trained to fulfill a modeling objective, such as question and answer …, para. [0079]).
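Claim 8's hierarchy flattening, an ID-tag that retains hierarchy information for each taxonomy item, admits a compact illustration. A hedged sketch assuming dotted path IDs (the application's actual encoding is not disclosed in the excerpts above):

```python
def flatten_taxonomy(node, path=""):
    """Flatten a nested taxonomy into (id_tag, name) pairs whose dotted
    ID-tags preserve each item's position in the hierarchy. The dotted-path
    scheme is our assumption, not the application's disclosed encoding."""
    items = []
    for i, (name, children) in enumerate(node.items(), start=1):
        tag = f"{path}.{i}" if path else str(i)  # e.g. "1", "1.2", "1.2.3"
        items.append((tag, name))
        items.extend(flatten_taxonomy(children, tag))
    return items
```

For example, `flatten_taxonomy({"Animals": {"Mammals": {}, "Birds": {}}, "Plants": {}})` yields a flat list in which "1.1" still identifies Mammals as a child of item 1.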
Per claim 13, Mukherjee discloses the computer-implemented method of claim 1, wherein the input prompt is provided to the LLM via at least one application programming interface (API) (para. [0054]).

Per claim 14, Mukherjee discloses the computer-implemented method of claim 1, wherein the taxonomy, the question, and any other instructions provided to the LLM are customizable by an end-user (para. [0041]; para. [0044]).

Per claim 15, Mukherjee discloses a computer program product residing on a non-transitory computer-readable medium having a plurality of instructions stored thereon which, when executed by a processor, cause the processor to perform operations comprising: identifying the document, the taxonomy, and a question to be processed by a large language model (LLM); applying at least one taxonomy augmented generation (TAG) tactic from a set of TAG tactics; generating an input prompt configured to be received as an input for the LLM, wherein the input prompt includes a document context derived from the document, a taxonomy context derived from the taxonomy, and the question to be processed by the LLM; providing the input prompt to the LLM; and receiving a response generated by the LLM (same citations as for claim 1 above: para. [0006]; para. [0036]-[0038]; para. [0041]; para. [0043]-[0044]; para. [0052]; para. [0085]; para. [0092]-[0095]; para. [0099]).

Per claim 16, Mukherjee discloses the computer program product of claim 15, wherein the set of TAG tactics includes: attribute trimming, hierarchical diving, hierarchy flattening, singular item focusing, and borrowing alignment (same citations as for claim 2 above: fig. 2A; para. [0043]; para. [0052]; para. [0085]; para. [0093]-[0095]; para. [0099]).

Per claim 17, Mukherjee discloses the computer program product of claim 15, wherein each tactic of the set of TAG tactics is configured to reduce the size of an input prompt for the LLM (same citations as for claim 3 above: para. [0092]; para. [0093]).

Per claim 18, Mukherjee discloses the computer program product of claim 15, wherein the total combined size of the input prompt is less than the size of a maximum token constraint for the LLM (para. [0041]; para. [0044]; para. [0093]).
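Claims 4 and 18 require the combined prompt to fit under the LLM's maximum token constraint, and the condensing behavior Mukherjee is cited for serves the same end. A minimal sketch, assuming a whitespace token count and a trim-the-document-context policy (both are our assumptions; real systems use the model's own tokenizer):

```python
MAX_TOKENS = 50  # stand-in for the model's maximum token constraint

def count_tokens(text):
    # Crude whitespace tokenizer, for illustration only.
    return len(text.split())

def build_prompt(doc_ctx, tax_ctx, question):
    """Trim the document context until the combined prompt fits the token
    budget, keeping the taxonomy context and question intact. One plausible
    trimming policy among many; not the application's disclosed method."""
    words = doc_ctx.split()
    while words and count_tokens(" ".join(words) + " " + tax_ctx + " " + question) > MAX_TOKENS:
        words.pop()  # drop trailing document words until the prompt fits
    return " ".join(words + [tax_ctx, question])
```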
Per claim 19, Mukherjee discloses the computer program product of claim 15, wherein the taxonomy, the question, and any other instructions provided to the LLM are customizable by an end-user (para. [0041]; para. [0044]).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

2. Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Mukherjee in view of Krishnan et al. US 2025/0005050 A1 ("Krishnan").

Per claim 5, Mukherjee discloses the computer-implemented method of claim 1. Mukherjee does not explicitly disclose wherein applying the attribute trimming TAG tactic removes one or more user-designated attributes from the taxonomy context before the input prompt is generated. However, this feature is taught by Krishnan (Examples of contextual resources shown in FIG. 1A include entity graph 103, knowledge graph 105 …, para. [0055]; one or more constraints, such as a specific limit on the length of the search query to be generated, specific filters and/or facets to include in or exclude from the generated search query, specific terms and/or operators that the large language model should include in the search query, one or more constraints on the amount of information used to generate the search query (e.g., "extract at least 10 specific terms from the context data …, para. [0101]; para. [0238]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the teachings of Krishnan with the method of Mukherjee in arriving at the missing features of Mukherjee, because such a combination would have resulted in avoiding AI hallucination while achieving operational efficiencies (Krishnan, para. [0035]).

3. Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Mukherjee in view of Shen et al. US 2025/0150476 A1 ("Shen").

Per claim 9, Mukherjee discloses the computer-implemented method of claim 1, wherein the singular item focusing TAG tactic further includes: identifying N distinct taxonomy elements within the taxonomy context (the system may query the ontology using the reference IDs of the vectors corresponding to the n most similar portions of the set of documents returned by the similarity search to retrieve/obtain portions of the set of documents similar to the user query …, para. [0043]); and generating a corresponding input prompt for each of the N-identified taxonomy elements (generating a prompt for the LLM 130 based on the context in addition to the user input and portions of the set of documents similar to the user input can assist the LLM 130 in generating output …, para. [0092]). Mukherjee does not explicitly disclose providing each of the N-generated input prompts into the LLM one at a time, receiving N-distinct responses generated by the LLM one at a time, or merging all N-distinct responses into a final result. However, these features are taught by Shen: providing each of the N-generated input prompts into the LLM one at a time (fig. 6, elements 606, 612; para. [0056]); receiving N-distinct responses generated by the LLM one at a time (fig. 6, elements 606, 614); and merging all N-distinct responses into a final result (fig. 6, elements 606, 614). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the teachings of Shen with the method of Mukherjee in arriving at the missing features of Mukherjee, because such a combination would have resulted in improving accuracy and reliability of entity classification (Shen, para. [0022]).

Allowable Subject Matter

Claims 6 and 7 (claim 6 interpreted as dependent from claim 2) are objected to as being dependent upon a rejected base claim, but would be allowable (pending Applicant addressing the 35 U.S.C. 101 rejection) if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See PTO-892 form.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to OLUJIMI A ADESANYA, whose telephone number is (571) 270-3307. The examiner can normally be reached Monday-Friday, 8:30 am-5:00 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Richemond Dorvil, can be reached at (571) 272-7602. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/OLUJIMI A ADESANYA/
Primary Examiner, Art Unit 2658
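The claim 9 flow at issue (one prompt per identified taxonomy element, submitted one at a time, with the N distinct responses merged into a final result) is easy to see in miniature. A sketch under assumed names; the cited references are described only at the level of the paragraphs quoted in the action:

```python
def singular_item_focus(elements, question, llm):
    """One prompt per taxonomy element, sent sequentially; the N distinct
    responses are then merged into a single final result. The prompt format
    and the join-based merge are illustrative assumptions."""
    responses = []
    for element in elements:                  # one prompt at a time
        prompt = f"{question} [focus: {element}]"
        responses.append(llm(prompt))         # one response at a time
    return " | ".join(responses)              # merge into a final result
```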

Prosecution Timeline

Jan 08, 2024
Application Filed
Feb 20, 2026
Non-Final Rejection — §101, §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591739
METHOD AND SYSTEM FOR DIACRITIZING ARABIC TEXT
2y 5m to grant · Granted Mar 31, 2026
Patent 12585686
EVENT DETECTION AND CLASSIFICATION METHOD, APPARATUS, AND DEVICE
2y 5m to grant · Granted Mar 24, 2026
Patent 12585481
METHOD AND ELECTRONIC DEVICE FOR PERFORMING TRANSLATION
2y 5m to grant · Granted Mar 24, 2026
Patent 12578779
Multiple Stage Network Microphone Device with Reduced Power Consumption and Processing Load
2y 5m to grant · Granted Mar 17, 2026
Patent 12579181
Synchronization of Sensor Network with Organization Ontology Hierarchy
2y 5m to grant · Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
66%
Grant Probability
91%
With Interview (+25.5%)
3y 6m
Median Time to Grant
Low
PTA Risk
Based on 655 resolved cases by this examiner. Grant probability derived from career allow rate.
