Prosecution Insights
Last updated: April 19, 2026
Application No. 18/828,733


Final Rejection — §103

- Filed: Sep 09, 2024
- Examiner: MAMILLAPALLI, PAVAN
- Art Unit: 2159
- Tech Center: 2100 — Computer Architecture & Software
- Assignee: Elasticsearch B.V.
- OA Round: 2 (Final)
- Grant Probability: 80% (Favorable)
- Expected OA Rounds: 3-4
- Time to Grant: 3y 3m
- Grant Probability With Interview: 98%

Examiner Intelligence

- Career Allow Rate: 80% (above average; 597 granted / 743 resolved; +25.3% vs Tech Center average)
- Interview Lift: +17.2% on resolved cases with an interview
- Typical Timeline: 3y 3m average prosecution; 21 applications currently pending
- Career History: 764 total applications across all art units

Statute-Specific Performance

- §101: 24.1% (-15.9% vs TC avg)
- §103: 51.7% (+11.7% vs TC avg)
- §102: 8.0% (-32.0% vs TC avg)
- §112: 8.5% (-31.5% vs TC avg)

Based on career data from 743 resolved cases; Tech Center averages are estimates.

Office Action — §103
DETAILED ACTION

This Office Action is in response to Applicant's amendments and arguments submitted on August 26, 2025 for Application No. 18/828,733, filed on September 09, 2024, in which claims 1-20 are presented for examination.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Priority

Acknowledgment is made of Applicant's claim for priority under 35 U.S.C. 119(a)-(d). The provisional application was filed as Application No. 63/581,195 on September 07, 2023.

Status of Claims

Claims 1-3 and 5-21 are pending, all of which are rejected under 35 U.S.C. 103. Claims 1, 3, 11, and 17 are amended. Claim 4 is canceled. Claim 21 is newly added.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3 and 5-21 are rejected under 35 U.S.C. 103 as being unpatentable over Swaminathan Chandrasekaran, US 2024/0320476 A1 (hereinafter "Chandrasekaran"), in view of Luzhnica et al., US 11,516,158 B1 (hereinafter "Luzhnica"), as applied, and further in view of Parker Hill, US 2023/0244678 A1 (hereinafter "Hill").

As per claim 1, Chandrasekaran discloses a method comprising:

- retrieving, from a context datastore, context data that is responsive to a user prompt (Chandrasekaran, paragraph 0011: receiving input prompts ("user prompt") and, based on an analysis of the prompt attributes, identifying one or more relevant concepts ("context") in the ontology model);
- transmitting an augmented prompt to a language model (Chandrasekaran, paragraph 0012: applying the ontology prompt to a generative language model), the augmented prompt including the user prompt and the textual representation of the context data (Chandrasekaran, paragraph 0012: applying the ontology prompt ("prompt with input prompt and concept"); paragraph 0052: associated textual information ("representation") as part of prompt enrichment);
- receiving, from the language model (Chandrasekaran, paragraph 0012), a model response with textual data that responds to the user prompt, the textual data being generated by the language model using the context data (Chandrasekaran, paragraph 0012: converting input prompts and one or more of the plurality of enriched prompts by applying a generative language model);
- computing a plurality of attribute scores about the context data based on at least one of the context data or the model response (Chandrasekaran, paragraph 0013: generating an output similarity ("attribute") score and a truthfulness score; paragraph 0014: a polarity score and a toxicity score);
- computing a significance of context value about an effectiveness of the context data for generating the model response based on the plurality of attribute scores (Chandrasekaran, paragraph 0013: generating ("computing") an aggregated truthfulness ("significance") score based on the truthfulness score generated by the prompt language filtering unit); and
- executing a computer action in response to the significance of context value not satisfying a threshold level (Chandrasekaran, paragraph 0075: filtering out ("computer action") enriched prompts with a low trustfulness score ("not satisfying a threshold level") due to issues such as copyright violations, plagiarism, and the presence of propaganda, toxicity, or inappropriate polarity; the action of filtering prompts is also discussed with respect to the secondary art below).

It is noted, however, that Chandrasekaran does not specifically detail executing a computer action in response to the significance of context value not satisfying a threshold level, as recited in claim 1. Luzhnica, on the other hand, achieves this limitation by providing mechanisms for such a computer action (Luzhnica, Col. 79, lines 40-42: taking action by filtering messages determined to exhibit low semantic similarity to instructional prompts). The motivation for doing so would have been to generate linguistically complex messages efficiently and effectively (Luzhnica, Col. 1, lines 8-10).

It is further noted that neither Chandrasekaran nor Luzhnica specifically details converting the context data from a vector representation into a textual representation using an embedding model, as recited in claim 1. Hill, on the other hand, achieves this limitation (Hill, paragraph 0040: each of the one or more distinct spans of tokens may be converted to embeddings or numerical representations of the spans of tokens, which may preferably be used to predict either or both of a categorical parameter value and a sub-categorical parameter value of a context nexus; the Examiner argues that this teaching can be implemented to convert a vector representation into a textual representation). The motivation for doing so would have been to create new and useful systems and methods for implementing machine-learning-based context generation and intelligent context-informed query search and response (Hill, paragraph 0005).

Chandrasekaran, Hill, and Luzhnica are analogous art because they are from the same field of endeavor and the same problem-solving area, namely enhancing prompt systems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the systems of Chandrasekaran, Hill, and Luzhnica because all are directed to enhancing prompt systems. The skilled person would regard it as a normal option to include the restriction features of Luzhnica and Hill with the method described by Chandrasekaran in order to solve the problem posed. Therefore, it would have been obvious to combine Luzhnica and Hill with Chandrasekaran to obtain the invention as specified in instant claim 1.

As per claim 2, most of the limitations of this claim have been noted in the rejection of claim 1 above.
In addition, Chandrasekaran discloses wherein the plurality of attribute scores include two or more of: a relevance score representing a level of semantic similarity between the user prompt and the context data (Chandrasekaran, paragraph 0016: a best-match score comparing input and enriched prompt attributes); a timeliness score representing a level of recentness of the context data; a continuity score representing a level of continuity of the context data; and an accuracy level representing a level of accuracy of the model response (Chandrasekaran, paragraph 0061: generating different scores based on the context for an enriched prompt; the Examiner argues that this teaching of generating scores can be applied to the scores recited in the limitation).

As per claim 3, most of the limitations of this claim have been noted in the rejection of claim 1 above. In addition, Chandrasekaran discloses wherein computing the significance of context value includes applying a sigmoid function to a weighted combination of the plurality of attribute scores (Chandrasekaran, paragraph 0066: adjusting scoring weights using the output scores ("algorithm")). It is noted, however, that Chandrasekaran does not specifically detail applying a sigmoid function, as recited in claim 3. Luzhnica, on the other hand, achieves this limitation (Luzhnica, Col. 67, lines 6-7: the function is applied to neural networks; the Examiner notes that the sigmoid function is one such application, so the taught function can be substituted with a sigmoid function).

As per claim 5, most of the limitations of this claim have been noted in the rejection of claim 1 above. In addition, Chandrasekaran discloses wherein the plurality of attribute scores include a relevance score, and wherein executing the computer action includes: determining that the relevance score does not achieve a threshold level or is lowest among the plurality of attribute scores (Chandrasekaran, paragraph 0075: filtering out ("computer action") enriched prompts with a low trustfulness score ("not satisfying a threshold level") due to issues such as copyright violations, plagiarism, and the presence of propaganda, toxicity, or inappropriate polarity); generating a revised query based on a template or a query domain-specific language (Chandrasekaran, paragraph 0040: domain-specific knowledge assistants); and retrieving the context data using the revised query (Chandrasekaran, paragraph 0012: converting input prompts and one or more of the plurality of enriched prompts by applying a generative language model).

As per claim 6, most of the limitations of this claim have been noted in the rejection of claim 1 above. In addition, Chandrasekaran discloses wherein the context data includes search results retrieved from the context datastore, wherein the plurality of attribute scores include a relevance score, and wherein executing the computer action includes: determining that the relevance score does not achieve a threshold level or is lowest among the plurality of attribute scores (Chandrasekaran, paragraph 0075, as cited for claim 5); generating ranked search results by ranking the search results (Chandrasekaran, paragraph 0073: ranking a plurality of the enriched prompts); and including the ranked search results in the augmented prompt (Chandrasekaran, paragraph 0073).

As per claim 7, most of the limitations of this claim have been noted in the rejection of claim 1 above. In addition, Chandrasekaran discloses wherein the plurality of attribute scores include a timeliness score, and wherein executing the computer action includes: determining that the timeliness score does not achieve a threshold level or is lowest among the plurality of attribute scores (Chandrasekaran, paragraph 0075, as cited for claim 5); and updating an index structure of the context datastore (Chandrasekaran, paragraph 0047: automatically updating the ontology models).

As per claim 8, most of the limitations of this claim have been noted in the rejection of claim 1 above. In addition, Chandrasekaran discloses wherein the context data includes a first context data portion and a second context data portion, wherein the plurality of attribute scores include a continuity score, and wherein executing the computer action includes: determining that the continuity score does not achieve a threshold level or is lowest among the plurality of attribute scores (Chandrasekaran, paragraph 0075, as cited for claim 5); generating, by an inference model, a third context data portion based on the first and second context data portions; and including the first, second, and third context data portions in the augmented prompt (Chandrasekaran, paragraph 0034: a machine-learning model to draw inferences from patterns in the data).
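For orientation, the timeliness and continuity attribute scores discussed in the rejections of claims 7, 8, 14, and 15 can be sketched in code. This is an illustrative reading of the claim language only, not the Applicant's actual implementation or any cited reference's teaching; the function names, the exponential-decay choice for timeliness, and the cosine-similarity choice for continuity are all assumptions, since the claims do not fix a formula.

```python
import math

def timeliness_score(prompt_ts: float, context_ts: float,
                     half_life: float = 86_400.0) -> float:
    """Recency of context data from the temporal distance between the
    user-prompt timestamp and the context-data timestamp (cf. claim 14).
    Exponential decay is one plausible choice; timestamps are in seconds."""
    distance = abs(prompt_ts - context_ts)
    return math.exp(-distance / half_life)  # 1.0 when fresh, decays toward 0

def continuity_score(vec_a: list[float], vec_b: list[float]) -> float:
    """Cosine similarity between embedding vectors of two context-data
    portions (cf. claim 15: first and second embedding vectors)."""
    dot = sum(a * b for a, b in zip(vec_a, vec_b))
    norm = (math.sqrt(sum(a * a for a in vec_a))
            * math.sqrt(sum(b * b for b in vec_b)))
    return dot / norm if norm else 0.0
```

Either score falling below a threshold would then trigger the claimed computer action, e.g. updating the datastore's index structure (claim 7) or generating a bridging third context portion (claim 8).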
As per claim 9, most of the limitations of this claim have been noted in the rejection of claim 1 above. It is noted, however, that Chandrasekaran does not specifically detail wherein executing the computer action includes: determining that an iteration count exceeds a threshold level, the iteration count representing a number of regenerations associated with the user prompt; and updating an algorithm that is used to compute the significance of context value, as recited in claim 9. Luzhnica, on the other hand, achieves these limitations: determining that an iteration count exceeds a threshold level (Luzhnica, Col. 81, lines 4-7: a sufficient number of iterations on the content of messages generated by the system); and updating an algorithm used to compute the significance of context value (Luzhnica, Col. 69, lines 35-43: evolving ("updating") with training/learning that occurs through use and increases the overall training set).

As per claim 10, most of the limitations of this claim have been noted in the rejection of claim 1 above. It is noted, however, that Chandrasekaran does not specifically detail wherein executing the computer action includes: retrieving feedback data provided by a user with respect to the model response; and updating an algorithm that is used to compute the significance of context value based on the feedback data, as recited in claim 10. Luzhnica, on the other hand, achieves these limitations: retrieving feedback data provided by a user with respect to the model response (Luzhnica, Col. 101, line 65 - Col. 102, line 2: automatically applying a context feedback); and updating an algorithm used to compute the significance of context value based on the feedback data (Luzhnica, Col. 69, lines 35-43: evolving ("updating") with training/learning that occurs through use and increases the overall training set).

As per claim 11, Chandrasekaran discloses an apparatus comprising (Chandrasekaran, paragraph 0039: an electronic device can include computers): at least one processor (Chandrasekaran, paragraph 0039: processors); and a non-transitory computer-readable medium storing executable instructions that, when executed by the at least one processor, cause the at least one processor to execute operations (Chandrasekaran, paragraph 0087: a non-transitory computer-readable storage medium). The remaining limitations of claim 11 are similar to the limitations of claims 1 and 2 and are rejected under the same rationale.

As per claim 12, the limitations are similar to claims 3 and 4 and are rejected under the same rationale. As per claim 13, the limitations are similar to claim 2 and are rejected under the same rationale.

As per claim 14, most of the limitations of this claim have been noted in the rejection of claim 11 above. In addition, Chandrasekaran discloses computing the timeliness score based on a temporal distance between a first timestamp associated with the user prompt and a second timestamp associated with the context data (Chandrasekaran, paragraph 0061: generating different scores based on the context for an enriched prompt; the Examiner argues that this teaching of generating scores can be applied to the score recited in the limitation).

As per claim 15, most of the limitations of this claim have been noted in the rejection of claim 11 above. In addition, Chandrasekaran discloses computing the continuity score (Chandrasekaran, paragraph 0016: a best-match score comparing input and enriched prompt attributes), including: generating a first embedding vector representing a first portion of the context data (Chandrasekaran, paragraph 0061, as cited for claim 14); generating a second embedding vector representing a second portion of the context data; and computing a similarity between the first and second embedding vectors (Chandrasekaran, paragraph 0042: domain embeddings can refer to vector representations of specific knowledge domains or subject areas).

As per claim 16, most of the limitations of this claim have been noted in the rejection of claim 11 above. In addition, Chandrasekaran discloses computing the accuracy score based on an iteration count achieving a threshold level (Chandrasekaran, paragraph 0061, as cited for claim 14). It is noted, however, that Chandrasekaran does not specifically detail the iteration count representing a number of regenerations associated with the user prompt, as recited in claim 16. Luzhnica, on the other hand, achieves this limitation (Luzhnica, Col. 81, lines 4-7: a sufficient number of iterations on the content of messages generated by the system).

As per claim 17, Chandrasekaran discloses a non-transitory computer-readable medium storing executable instructions that cause at least one processor to execute operations (Chandrasekaran, paragraph 0087). The remaining limitations of claim 17 are similar to the limitations of claim 1 and are rejected under the same rationale.

As per claim 18, the limitations are similar to claims 5 and 6 and are rejected under the same rationale. As per claim 19, the limitations are similar to claim 7 and are rejected under the same rationale. As per claim 20, the limitations are similar to claim 8 and are rejected under the same rationale.

As per claim 21, most of the limitations of this claim have been noted in the rejection of claim 1 above.
In addition, Chandrasekaran discloses wherein executing the computer action includes: determining that at least one of the attribute scores does not satisfy a threshold condition; generating a revised query based on a domain-specific query language; and retrieving revised context data using the revised query (Chandrasekaran, paragraph 0046: adding or modifying one or more attributes associated with the prompts; the prompt enrichment unit can be configured to enrich the prompt by adding additional context, information, or details to make the prompt more specific, relevant, and/or useful for generating a higher-quality response from a generative language model. The Examiner argues that this teaching can implement the cited limitation, as the context data is enriched because the current query does not meet the threshold).

Response to Arguments

Applicant's arguments with respect to claims 1-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Conclusion

The prior art made of record and not relied upon is considered pertinent to Applicant's disclosure:

- US 2024/0249077 A1, "SYSTEMS AND METHODS FOR IN-CONTEXT LEARNING USING SMALL-SCALE LANGUAGE MODELS"
- US 2024/0220727 A1, "INFORMATION EXTRACTION AND IMAGE RE-ORDERING USING PROMPT LEARNING AND MACHINE-READING COMPREHENSION"

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.

In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PAVAN MAMILLAPALLI, whose telephone number is (571) 270-3836. The examiner can normally be reached M-F, 8am-4pm EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, Applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ann J. Lo, can be reached at (571) 272-9767. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR; status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/PAVAN MAMILLAPALLI/
Primary Examiner, Art Unit 2159
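Claims 1 and 3, as characterized in the rejection above, describe aggregating the attribute scores into a significance-of-context value by applying a sigmoid function to a weighted combination, then executing a computer action when that value fails a threshold. The following is a minimal sketch of that pipeline; the weights, the threshold value, the function names, and the choice of "generate a revised query" as the fallback action (cf. claim 5) are illustrative assumptions, not details drawn from the application or the cited art.

```python
import math

def sigmoid(x: float) -> float:
    """Standard logistic function, mapping any real value into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def significance_of_context(scores: dict[str, float],
                            weights: dict[str, float]) -> float:
    """Claim 3 reading: sigmoid of a weighted combination of attribute
    scores (e.g. relevance, timeliness, continuity, accuracy)."""
    weighted_sum = sum(weights[name] * score for name, score in scores.items())
    return sigmoid(weighted_sum)

def maybe_take_action(scores: dict[str, float], weights: dict[str, float],
                      threshold: float = 0.5) -> tuple[float, str]:
    """Claim 1 reading: execute a computer action when the significance of
    context value does not satisfy the threshold; here the 'action' is
    merely flagging that the query should be revised and re-run."""
    value = significance_of_context(scores, weights)
    if value < threshold:
        return value, "regenerate_query"
    return value, "use_context"
```

With strong attribute scores the context is used as-is; with uniformly weak scores and a stricter threshold, the fallback branch fires and a revised retrieval would be triggered.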

Prosecution Timeline

- Sep 09, 2024: Application Filed
- May 28, 2025: Non-Final Rejection — §103
- Jul 28, 2025: Interview Requested
- Aug 05, 2025: Applicant Interview (Telephonic)
- Aug 09, 2025: Examiner Interview Summary
- Aug 26, 2025: Response Filed
- Dec 16, 2025: Final Rejection — §103
- Mar 24, 2026: Interview Requested
- Mar 30, 2026: Applicant Interview (Telephonic)
- Apr 03, 2026: Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology:

- Patent 12602389: "RECOMMENDATION WORD DETERMINATION METHOD AND APPARATUS, AND ELECTRONIC DEVICE AND STORAGE MEDIUM" (granted Apr 14, 2026; 2y 5m to grant)
- Patent 12603155: "METHODS FOR COMPRESSION OF MOLECULAR TAGGED NUCLEIC ACID SEQUENCE DATA" (granted Apr 14, 2026; 2y 5m to grant)
- Patent 12601597: "GENERATING, FROM DATA OF FIRST LOCATION ON SURFACE, DATA FOR ALTERNATE BUT EQUIVALENT SECOND LOCATION ON THE SURFACE" (granted Apr 14, 2026; 2y 5m to grant)
- Patent 12602503: "GENERATING, FROM DATA OF FIRST LOCATION ON SURFACE, DATA FOR ALTERNATE BUT EQUIVALENT SECOND LOCATION ON THE SURFACE" (granted Apr 14, 2026; 2y 5m to grant)
- Patent 12591580: "CONFIDENCE FABRIC ENHANCED PRIVACY-PRESERVING DATA AGGREGATION" (granted Mar 31, 2026; 2y 5m to grant)

Study what changed in these five most recent grants to get past this examiner.


Prosecution Projections

- Expected OA Rounds: 3-4
- Grant Probability: 80% (98% with interview, +17.2%)
- Median Time to Grant: 3y 3m
- PTA Risk: Moderate

Based on 743 resolved cases by this examiner. Grant probability is derived from the career allow rate.
