Prosecution Insights
Last updated: April 19, 2026
Application No. 18/987,120

Method and System for an Intelligent Search Engine

Final Rejection §101
Filed
Dec 19, 2024
Examiner
MOSER, BRUCE M
Art Unit
2154
Tech Center
2100 — Computer Architecture & Software
Assignee
Elevance Health Inc.
OA Round
2 (Final)
85%
Grant Probability
Favorable
3-4
OA Rounds
2y 10m
To Grant
99%
With Interview

Examiner Intelligence

Grants 85% — above average
85%
Career Allow Rate
631 granted / 745 resolved
+29.7% vs TC avg
Strong +20% interview lift
+20.4%
Interview Lift
resolved cases with interview
Typical timeline
2y 10m
Avg Prosecution
47 currently pending
Career history
792
Total Applications
across all art units

Statute-Specific Performance

§101
10.9%
-29.1% vs TC avg
§103
33.4%
-6.6% vs TC avg
§102
31.1%
-8.9% vs TC avg
§112
6.3%
-33.7% vs TC avg
Black line = Tech Center average estimate • Based on career data from 745 resolved cases
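As a sanity check on the dashboard figures, the headline 85% career allow rate follows directly from the stated counts (631 granted of 745 resolved), and subtracting the stated +29.7% delta recovers the implied Tech Center average. A minimal sketch; the rounding behavior is an assumption about how the dashboard displays these numbers:

```python
# Reproduce the dashboard's headline numbers from the raw counts shown above.
granted, resolved = 631, 745

allow_rate = granted / resolved * 100   # career allow rate (~84.7%, shown as 85%)
tc_delta = 29.7                         # stated "+29.7% vs TC avg"
implied_tc_avg = allow_rate - tc_delta  # implied Tech Center average (~55%)

print(f"allow rate: {allow_rate:.1f}%")
print(f"implied TC average: {implied_tc_avg:.1f}%")
```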

Office Action

§101
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Detailed Action

In amendments dated 1/21/26, Applicant amended claims 1, 7, 9 and 16, canceled no claims, and added no new claims. Claims 1-20 are presented for examination.

Rejections under 35 U.S.C. 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to mental processes without significantly more.

Independent claims 1, 7, and 14 each recites: identifying an intent associated with the user input, using a machine-learning trained natural language processing model on the user input, and wherein the trained natural language processing model further considers at least a portion of prior user inputs of the user to the search interface as an input used for identifying the intent; identifying a domain associated with the user input based on the intent by: traversing a graph data structure comprising nodes representing domains and edges representing relationships between the domains, wherein each domain represents a category of business-specific data selected from patient records, insurance claims, and business metrics, and selecting a domain node from the graph data structure based on the identified intent; expanding abbreviations used in the user input by replacing the abbreviations with their corresponding definitions from the accessed set of abbreviation definitions; identifying at least two keywords contained in the user input based on the expanded abbreviations; searching the graph data structure using an A* search algorithm to: identify nodes containing potential answers associated with the selected domain node and the at least two keywords, and determine a shortest path to each identified answer node; and generating a natural language summary that combines the potential answers identified from the answer nodes into a single response.

Identifying an intent is evaluating and a mental process, and identifying using a machine-learning trained natural language processing model is merely applying said model and is a mental process per Recentive Analytics v. Fox Broadcasting Corp. (134 F.4th 1205, 2025 U.S.P.Q.2d 628). Identifying a domain, selecting a domain node, identifying at least two keywords, and identifying nodes are each evaluating and are mental processes. Traversing and searching a graph data structure, expanding abbreviations in user input, determining a shortest path to each identified node, and generating a natural language summary are each recited broadly and are mental processes accomplishable in the human mind or on paper.

Each claim recites additional elements of hosting a user search interface for a user over a computer network; and receiving the user input via the user search interface, wherein the user input contains a natural language request for information, which are each input steps and insignificant extra-solution activity; accessing, from a database storing domain-specific abbreviation mappings, a set of abbreviation definitions associated with the selected domain node; and updating the graph data structure in response to detecting changes in stored relationships between domain nodes, abbreviation mappings, and answer nodes, which are each insignificant extra-solution activity; and outputting the natural language summary to the search interface for display to the user, which is an output step and also insignificant extra-solution activity.
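The claimed search step (A* over a graph of domain and answer nodes, returning a shortest path to each answer node) can be illustrated with a minimal sketch. The graph, node names, and unit edge costs below are hypothetical and not taken from the application; with the heuristic fixed at zero, A* reduces to Dijkstra's algorithm:

```python
import heapq

# Hypothetical domain/answer graph; node names and edge costs are illustrative
# only and do not come from the application.
GRAPH = {
    "insurance_claims": {"claim_status": 1, "denial_codes": 2},
    "claim_status": {"answer_eob": 1},
    "denial_codes": {"answer_eob": 3},
    "answer_eob": {},
}

def a_star(graph, start, goal, h=lambda n: 0):
    """A* shortest path; with h fixed at 0 it behaves like Dijkstra's algorithm."""
    frontier = [(h(start), 0, start, [start])]  # (priority, cost, node, path)
    best = {}                                   # lowest cost at which a node was expanded
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if best.get(node, float("inf")) <= cost:
            continue                            # already expanded more cheaply
        best[node] = cost
        for nbr, w in graph[node].items():
            heapq.heappush(frontier, (cost + w + h(nbr), cost + w, nbr, path + [nbr]))
    return None

cost, path = a_star(GRAPH, "insurance_claims", "answer_eob")
# cost == 2 via insurance_claims -> claim_status -> answer_eob
```

With an admissible heuristic supplied for `h`, the same function prioritizes nodes estimated to be closer to the goal, which is the distinguishing feature of A* over plain uniform-cost search.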
Claim 7 recites a computer readable medium and claim 14 recites a non-transitory memory and a processor communicatively coupled to the non-transitory memory, which are each generic components of a computer.

Examiner notes specification paragraph 0003 states “a difficult part of running a chatbot is the interpretation of a user's intent based on the utterance,” “another difficult part of running a chatbot is programming the artificial intelligence to choose an appropriate response to the utterance based on the intent,” and also “[for] a chatbot serving a broad population, with a broad base of areas where users may be expected to inquire, it may be difficult to make accurate predictions and expectations of the user's intent, and/or to locate the correct answer a wide ranging database of answers.” Paragraph 0003 also states “a chatbot that can predict an intent more accurately, and base keyword searching on intent, may be desirable” and “a chatbot that is aware of who the user is and/or the user's previous activity using the chatbot may also be desirable.” Examiner did not find any other paragraphs in the specification that described drawbacks in the art or how the invention may address said drawbacks. Examiner notes the claims do not predict any intents and that the claim steps do not recite a particular improvement in any technology or function of a computer per MPEP 2106.04(d) and do not recite any unconventional steps in the invention per MPEP 2106.05(a). Therefore, the recited mental processes are not integrated into a practical application.

Taking the claims as a whole, hosting a user search interface, receiving user input, and outputting a natural language summary are each recited broadly and amount to sending and receiving data across a network per specification paragraph 0022 and figure 1 network 111, which are routine and conventional activities per the list of such activities in MPEP 2106.05(d) part II. Accessing abbreviation mappings from a database and updating the graph data structure are retrieving and storing data from a memory, which are also routine and conventional activities per the list of such activities in MPEP 2106.05(d) part II. The computer readable medium, non-transitory memory, and a processor communicatively coupled to the non-transitory memory are each still generic components of a computer. Thus the claims do not include additional elements that are sufficient to amount to significantly more than the recited mental processes.

Claims 2, 10, and 17 each recites wherein the step of receiving the user input includes using the trained natural language processing model to perform an automatic completion of the user input before the user has finished entering the user input (using a trained natural language processing model is a mental process per Recentive Analytics v. Fox Broadcasting Corp. (134 F.4th 1205, 2025 U.S.P.Q.2d 628) and autocompleting text is recited broadly and is a mental process accomplishable in the human mind or on paper).

Claims 3, 11, and 18 each recites wherein the step of receiving the user input includes using the trained natural language processing model to generate one or more recommendations for user input based on having received a portion of the user input (using a trained natural language processing model is a mental process per Recentive Analytics v. Fox Broadcasting Corp. (134 F.4th 1205, 2025 U.S.P.Q.2d 628) and generating recommendations is recited broadly and is a mental process accomplishable in the human mind or on paper).

Claims 4, 12, and 19 each recites saving the user input, associated with data relating to the user, in a user response database, wherein the trained natural language processing model queries the user response database as a part of the step of considering at least a portion of the user’s prior user inputs to the search interface (saving data to a memory is routine and conventional activity per the list of such activities in MPEP 2106.05(d) part II).

Claims 5, 13, and 20 each recites requesting a second input from the user to indicate whether the natural language summary provided a satisfactory response to the user input (requesting input is a prompt for information per specification paragraph 0057 and is recited broadly and amounts to sending and receiving data across a network per specification paragraph 0022 and figure 1 network 111, which are routine and conventional activities per the list of such activities in MPEP 2106.05(d) part II).

Claims 6, 8, and 15 each recites wherein the trained natural language processing model comprises a transformer model (applying a trained natural language processing model is a mental process per Recentive Analytics v. Fox Broadcasting Corp. (134 F.4th 1205, 2025 U.S.P.Q.2d 628)).

Claims 9 and 16 each recites wherein the step of identifying potential answers that are associated with the key word comprises identifying potential answers that are associated with both of the at least two key words (identifying potential answers is recited broadly and is a mental process accomplishable in the human mind or on paper).

Relevant Prior Art

During his search for prior art, Examiner found the following references to be relevant to Applicant's claimed invention.
Each reference is listed on the Notice of References form included in this office action:

Chamua et al (US 20240311437) teaches providing autosuggestions to multi-intent search queries; it does not teach a graph structure of domains, identifying a domain associated with user input by traversing a graph structure, searching a graph structure using an A* search algorithm, or generating a natural language summary of potential answers from nodes (paragraphs 0016, 0055-0064, figure 5).

Sadikov, Eldar et al (“Clustering Query Refinements by User Intent”) teaches determining user intent in a search query by clustering query refinements into related queries, and using those related queries to place more frequently used related queries on a search page to gauge user intent; it teaches using a Markov graph for determining the query refinement clusters but does not teach a graph structure of domains, identifying a domain associated with user input by traversing a graph structure, searching a graph structure using an A* search algorithm, or generating a natural language summary of potential answers from nodes (section 1 Introduction, sections 2-3, pages 841-846).

Responses to Applicant’s Remarks

Regarding objections to claims 9 and 16 for antecedent basis of “identifying at least two keywords,” in view of amendments deleting this language from these claims, these objections are withdrawn.

Regarding rejections of claims 6, 8, and 15 under 35 U.S.C. 112(d) for the duplicate language “wherein the trained natural language processing model comprises a transformer model” also in independent claim 1, in view of amendments removing said language from claim 1, these rejections are withdrawn. Examiner acknowledges the rejections to claims 8 and 15 were improper as said claim language is not recited in independent claims 7 and 14.

Regarding rejections to claims 7-13 under 35 U.S.C. 101 for reciting possibly non-statutory subject matter of a computer readable medium, in view of amendments to claim 7 reciting a non-transitory computer readable medium, these rejections are withdrawn.

Regarding rejections to claims 1-20 under 35 U.S.C. 101 for reciting mental processes without significantly more, Applicant’s arguments have been considered but are not persuasive.

On page 12 of his Remarks Applicant discusses Prong 1 and asserts claims 1-20 are within the four categories of patentable subject matter, and Examiner agrees.

On pages 12-13 of his Remarks Applicant discusses Step 2A Prong One of the Eligibility Analysis in MPEP 2106.03 through 2106.05 and asserts the limitation “identifying an intent associated with the user input, using a machine-learning trained natural language processing model on the user input, and wherein the trained natural language processing model further considers at least a portion of prior user inputs of the user to the search interface as an input used for identifying the intent” does not contain limitations that can practically be performed in the human mind. Applicant points out that specification 0038 describes a domain, “as that term is used herein, may relate to a category or subcategory of business specific data, such as patient records, insurance claims, internal business metrics, or the like,” which is not the recited user input. Examiner notes the preceding limitation to identifying an intent recites receiving a user input that contains a natural language request for information. Examiner also notes MPEP 2106.04(a)(2)(III) states "The courts do not distinguish between mental processes that are performed entirely in the human mind and mental processes that require a human to use a physical aid (e.g., pen and paper or a slide rule) to perform the claim limitation."
This “identifying” limitation merely applies a machine-learning model which, while said machine-learning model also considers prior user inputs as input along with the received user input, is a mental process per Recentive Analytics v. Fox Broadcasting Corp. as noted in the rejection above. This “identifying” limitation uses a computer as a tool, and the BRI of this limitation includes using judgment to determine an intent from the considered user input, which is a mental process.

As for the USPTO Memorandum "Reminders on evaluating subject matter eligibility of claims under 35 U.S.C. 101," Examiner notes the “identifying an intent” limitation does not encompass AI in a way other than simply applying it and thus can be a mental process as described above.

On page 13 Applicant asserts that the "traversing a graph data structure…;” “expanding abbreviations in the user input…;” “determin[ing] a shortest path to each identified answer node;” and “generating a natural language summary" limitations also cannot practically be performed in the human mind or on paper. Examiner disagrees and notes each of these limitations is recited broadly and uses a computer as a tool, and that the BRI of each limitation includes either a form of judgment or evaluation (traversing a graph data structure, which is data; determining a shortest path) or a physical aid such as a pen and paper (expanding abbreviations, generating a summary), and thus each of these limitations may also be a mental process accomplishable in the human mind or on paper.

On pages 14-15 of his Remarks Applicant discusses Step 2A Prong 2 and asserts the “identifying an intent” limitation recites an improvement to the difficulties discussed in specification paragraph 0003 (“interpretation of the user’s intent based on the utterance,” “programming artificial intelligence to choose an appropriate response to the utterance,” “make accurate predictions and expectations of the user’s intent”).

Examiner disagrees and notes identifying an intent using a machine-learning model is exactly one of the described difficulties (“interpretation of the user’s intent based on the utterance”) and, in addition to merely applying a machine-learning model not being an improvement, does not reflect an improvement on said difficulty.

Regarding specification paragraph 0040 describing how the combination of identifying an intent and identifying a domain represents an improvement, Examiner notes paragraph 0040 actually says the improvement is in “associating a query with a domain, with domain specific abbreviations, allows intent recognition model 218 and keyword module 220 to include expanded definitions of abbreviations that may be used in the query, which may be specific to the identified domain. This is an improvement over known search engines …” The claims broadly recite identifying a domain “associated with the user input” and “based on the identified intent,” which is not specifically associating a query with a domain and does not recite details showing how a domain is identified using an intent. Likewise, the recited database storing domain-specific mappings, the domain node selected only “based on” the identified intent, and the accessing of abbreviation definitions only “associated with” a selected domain node are each recited broadly and are not specific to an identified domain.
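The mechanism specification paragraph 0040 points to (expanding abbreviations using definitions specific to the identified domain) can be sketched in a few lines. The domain names, abbreviations, and definitions below are hypothetical examples, chosen only to show why the same token can expand differently depending on the selected domain:

```python
import re

# Hypothetical domain-specific abbreviation maps; tokens and definitions are
# illustrative only and do not come from the application.
ABBREVIATIONS = {
    "insurance_claims": {"EOB": "explanation of benefits",
                         "COB": "coordination of benefits"},
    "patient_records":  {"EOB": "edge of bed"},  # same token, different meaning
}

def expand(query: str, domain: str) -> str:
    """Replace whole-word uppercase abbreviations with the definitions
    mapped for the given domain; unknown tokens pass through unchanged."""
    mapping = ABBREVIATIONS.get(domain, {})
    return re.sub(r"\b[A-Z]{2,}\b",
                  lambda m: mapping.get(m.group(0), m.group(0)),
                  query)

expand("show my EOB for COB review", "insurance_claims")
# -> "show my explanation of benefits for coordination of benefits review"
```

In this sketch the expansion is only "specific to the identified domain" because the lookup table is keyed by domain first, which is the coupling the examiner finds missing from the broadly recited claim language.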
On page 16 Applicant discusses Step 2B and asserts “amended claim 1 recites elements that improve on conventional information processing systems and interpersonal game play.” Examiner found no reference to interpersonal game play in the specification and, per MPEP 2106.05(a) (“An indication that the claimed invention provides an improvement can include a discussion in the specification that identifies a technical problem and explains the details of an unconventional technical solution expressed in the claim”), Examiner notes the above assertion was not accompanied by a showing of details in the claims of an improvement supported by the specification.

Examiner believes the combination of additional elements (hosting a user search interface, receiving the user input via the user search interface, accessing abbreviation definitions from a database, updating a graph data structure in response to detecting changes in relationships between nodes, and outputting a natural language summary) and mental processes (identifying an intent using a machine-learning model, identifying a domain by traversing a graph data structure and selecting a domain node, expanding abbreviations in the user input, identifying at least two keywords in the user input, searching the graph data structure, identifying nodes, determining a shortest path to each identified node, and generating a natural language summary) is conventional and does not recite an improvement or an inventive concept.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Inquiry

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BRUCE M MOSER, whose telephone number is (571) 270-1718. The examiner can normally be reached M-F 9a-5p. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Boris Gorney, can be reached at (571) 270-5626. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/BRUCE M MOSER/
Primary Examiner, Art Unit 2154
3/26/26

Prosecution Timeline

Dec 19, 2024
Application Filed
Oct 15, 2025
Non-Final Rejection — §101
Jan 21, 2026
Response Filed
Mar 26, 2026
Final Rejection — §101 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602403
SCALABLE PARALLEL CONSTRUCTION OF BOUNDING VOLUME HIERARCHIES
2y 5m to grant Granted Apr 14, 2026
Patent 12585717
System and Method for Recommending Users Based on Shared Digital Experiences
2y 5m to grant Granted Mar 24, 2026
Patent 12579198
TEXT STRING COMPARISON FOR DUPLICATE OR NEAR-DUPLICATE TEXT DOCUMENTS IDENTIFIED USING AUTOMATED NEAR-DUPLICATE DETECTION FOR TEXT DOCUMENTS
2y 5m to grant Granted Mar 17, 2026
Patent 12554783
USING DISCOVERED UNIFORM RESOURCE IDENTIFIER INFORMATION TO PERFORM EXPLOITATION TESTING
2y 5m to grant Granted Feb 17, 2026
Patent 12530419
DATA MANAGEMENT APPARATUS, DATA MANAGEMENT METHOD, AND NON-TRANSITORY RECORDING MEDIUM
2y 5m to grant Granted Jan 20, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
85%
Grant Probability
99%
With Interview (+20.4%)
2y 10m
Median Time to Grant
Moderate
PTA Risk
Based on 745 resolved cases by this examiner. Grant probability derived from career allow rate.
