Prosecution Insights
Last updated: April 19, 2026
Application No. 18/496,794

Systems and Methods for Textual Classification Using Natural Language Understanding Machine Learning Models for Automating Business Processes

Final Rejection: §101, §103
Filed: Oct 27, 2023
Examiner: PHILLIPS, III, ALBERT M
Art Unit: 2159
Tech Center: 2100 — Computer Architecture & Software
Assignee: Auditoria.AI, Inc.
OA Round: 2 (Final)
Grant Probability: 82% (Favorable)
OA Rounds: 3-4
To Grant: 3y 1m
With Interview: 95%

Examiner Intelligence

Grants 82% — above average
Career Allow Rate: 82% (583 granted / 712 resolved; +26.9% vs TC avg)
Interview Lift: +12.9% (moderate lift, measured over resolved cases with interview)
Avg Prosecution: 3y 1m (typical timeline); 18 applications currently pending
Total Applications: 730 across all art units (career history)

Statute-Specific Performance

§101: 17.8% (-22.2% vs TC avg)
§103: 37.4% (-2.6% vs TC avg)
§102: 19.8% (-20.2% vs TC avg)
§112: 15.3% (-24.7% vs TC avg)

Allowance-after-rejection rates vs Tech Center average estimate • Based on career data from 712 resolved cases

Office Action

§101 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections – 35 U.S.C. § 101

35 U.S.C. § 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. § 101 as being directed to an abstract idea without significantly more. Claim 1 recites:

1. A method for detecting intent of a textual message, the method comprising: receiving a request message at an integration platform system; extracting text and metadata from the request message using the integration platform system; executing using the integration platform system a plurality of semantic queries to determine an intent of the request message, by, for each semantic query: where the semantic query specifies at least one general machine learning language model to be used, and what text and metadata from the message and what textual prompt to provide to each at least one general machine learning language model, and a formatting template specifying how an expected answer from each at least one general machine learning language model should be formatted; providing at least some of the extracted text and metadata and a textual prompt to each at least one general machine learning language model as specified in the semantic query; receiving an answer from each at least one general machine learning language model that includes an indication of an intent classification; and performing a corresponding business action in response to the indicated intent classification.

Examiner finds that the emphasized portions of claim 1 recite an abstract idea—namely, mental processes. See MPEP 2106.04(a)(2)(III).
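The claimed semantic-query flow can be sketched as follows. This is a minimal illustration of the claim language only; `SemanticQuery`, `stub_model`, and `detect_intent` are hypothetical names, and the keyword-rule "model" is a stand-in for an actual LLM call, not the applicant's implementation.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class SemanticQuery:
    model: Callable[[str], str]  # the general ML language model to be used
    prompt: str                  # textual prompt sent along with the message text
    template: str                # formatting template for the expected answer

def stub_model(text: str) -> str:
    # Stand-in for an LLM call: a trivial keyword rule, for illustration only.
    return "invoice_inquiry" if "invoice" in text.lower() else "unknown"

def detect_intent(message_text: str, queries: List[SemanticQuery]) -> str:
    # Execute each semantic query: provide the specified prompt and text to the
    # specified model, then format the returned answer per the query's template.
    for q in queries:
        answer = q.model(q.prompt + "\n" + message_text)
        intent = q.template.format(answer=answer)
        if intent != "unknown":
            return intent  # a corresponding business action would fire here
    return "unknown"
```

For example, `detect_intent("Where is my invoice #123?", [SemanticQuery(stub_model, "Classify the intent:", "{answer}")])` returns `"invoice_inquiry"` under this stub.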
When read as a whole, the recited limitations are directed to using mental processes to observe, evaluate, and make judgments about data. See id. (“Accordingly, the ‘mental processes’ abstract idea grouping is defined as concepts performed in the human mind, and examples of mental processes include observations, evaluations, judgments, and opinions”). Turning to each limitation individually, the element “detecting intent of a textual message” merely requires observation and evaluation of a textual message and a judgment as to the intent of the message. The element “extracting text and metadata from the request message” merely requires observation and evaluation of the message and a judgment as to what constitutes text and metadata. The element “to determine an intent of the request message, by, for each semantic query” merely requires observation and evaluation of a textual message and a judgment as to the intent of the message. The element “where the semantic query specifies at least one general machine learning language model to be used, and what text and metadata from the message and what textual prompt to provide to each at least one general machine learning language model,” merely requires a judgment and/or opinion on the part of a human and can be accomplished with the aid of pen and paper. The element “and a formatting template specifying how an expected answer from each at least one general machine learning language model should be formatted” merely requires human judgment as to the model’s expected answer and a judgment as to how to format the expected answer. This task can be practically performed in the human mind with the aid of pen and paper. The element “performing a corresponding business action in response to the indicated intent classification” is not limited to actions that cannot be practically performed in the human mind with or without the aid of pen and paper.
Turning to the additional elements, “A method …the method comprising” recites mere instructions to implement the abstract idea on a computer and thus fails to integrate the exception. See MPEP 2106.05(f). The additional element “receiving a request message at an integration platform system” recites insignificant extra solution activity in the form of mere data gathering. See MPEP 2106.05(g). As such, it does not integrate the exception. The additional element “executing using the integration platform system a plurality of semantic queries” uses a computer as a tool to perform an existing process and thus recites mere instructions to apply the exception and thus does not integrate the exception. See MPEP 2106.05(f). The language “using the integration platform system” generally links the abstract idea to a technological environment and thus does not integrate the exception. See MPEP 2106.05(h). This additional element also generally links the abstract idea to a technological environment (LLMs, for example) and thus does not integrate the exception. See MPEP 2106.05(h). The additional element “providing at least some of the extracted text and metadata and a textual prompt to each at least one general machine learning language model as specified in the semantic query” generally links the abstract idea to a technological environment (LLMs, for example) and thus does not integrate the exception. See MPEP 2106.05(h). The additional element “receiving an answer from each at least one general machine learning language model that includes an indication of an intent classification” generally links the abstract idea to a technological environment (LLMs, for example) and thus does not integrate the exception. See MPEP 2106.05(h). It also recites mere instructions to apply the exception because, among other things, it recites “only the idea of a solution or outcome i.e., [it] fails to recite details of how a solution to a problem is accomplished. . .” and “. . .
invokes computers or other machinery merely as a tool to perform an existing process.” See MPEP 2106.05(f). As such, it does not integrate the exception. There is nothing that, when the additional elements are considered as an ordered combination, causes the abstract idea to be integrated into a practical application. Turning to the additional elements and inventive concept, “A method …the method comprising” recites mere instructions to implement the abstract idea on a computer and thus fails to provide an inventive concept. See MPEP 2106.05(f). The additional element “receiving a request message” recites “receiving or transmitting data over a network” and thus recites a well-understood, routine, and conventional computer function and thus does not recite an inventive concept. See MPEP 2106.05(d). The additional element “executing a plurality of semantic queries” uses a computer or other machinery merely as a tool to perform an existing process and thus recites mere instructions to apply the exception and thus does not provide an inventive concept. See MPEP 2106.05(f). This additional element also generally links the abstract idea to a technological environment (LLMs, for example) and thus does not provide an inventive concept. See MPEP 2106.05(h). The additional element “providing at least some of the extracted text and metadata and a textual prompt to each at least one general machine learning language model as specified in the semantic query” generally links the abstract idea to a technological environment (LLMs, for example) and thus does not provide an inventive concept. See MPEP 2106.05(h). The additional element “receiving an answer from each at least one general machine learning language model that includes an indication of an intent classification” generally links the abstract idea to a technological environment (LLMs, for example) and thus does not provide an inventive concept. See MPEP 2106.05(h).
It also recites mere instructions to apply the exception because, among other things, it recites “only the idea of a solution or outcome i.e., the claim fails to recite details of how a solution to a problem is accomplished. . .” and “. . . invokes computers or other machinery merely as a tool to perform an existing process.” See MPEP 2106.05(f). As such, it fails to provide an inventive concept. See id. There is nothing that, when the additional elements are considered as an ordered combination, causes the claim to recite an inventive concept. There is nothing that, when the claim is considered as a whole, recites significantly more than the abstract idea. With respect to “at an integration platform system. . .” and “. . . using the integration platform system. . .” Examiner finds these elements do nothing more than generally link the abstract idea to a technological environment or field of use (i.e. an integration platform system). As such, these additional elements fail to integrate and fail to recite an inventive concept. See MPEP 2106.05(h). Thus, for the reasons above, claim 1 recites an abstract idea without significantly more. Claim 2 recites “2. The method of claim 1 wherein the answer from the machine learning model further comprises a confidence level that the intent classification is accurate.” This element merely requires a judgment as to how to characterize the confidence level. The phrase “The method of claim 1 wherein the answer from the machine learning model further comprises” generally links the abstract idea to a technological environment and thus does not integrate the exception nor provide an inventive concept. Claim 3 recites “3.
The method of claim 1 wherein the answer from the machine learning model further comprises a summary of the extracted text.” This element merely requires observation and evaluation of the text and a judgment as to what constitutes the “summary.” The phrase “The method of claim 1 wherein the answer from the machine learning model further comprises” generally links the abstract idea to a technological environment and thus does not integrate the exception nor provide an inventive concept. Claim 4 recites “4. The method of claim 1, wherein the semantic query further comprises criteria for finding and extracting an identification of a business record from the extracted text.” This element merely describes how to perform an evaluation and judgment of a business record and can be practically performed in the human mind. Claim 5 recites “5. The method of claim 4, wherein the taking an action in response to the indicated intent classification further comprises: identifying a portion of the extracted text that identifies a business record in response to the indicated intent classification and retrieving the identified business record.” This element merely requires observation and evaluation of the extracted text and a judgment as to how to identify a business record. Claim 6 recites “6. The method of claim 1, where one of the at least one general machine learning language model is a large language model (LLM) and the semantic query further comprises: a prompt that includes at least one true/false (Boolean) question.” This element generally links the abstract idea to a technological environment (i.e. LLMs). As such, it fails to integrate the exception and fails to provide an inventive concept. Claim 7 recites “7.
The method of claim 1, where: one of the at least one general machine learning language model is an entailment model.” This element generally links the abstract idea to a technological environment and thus does not integrate the exception or provide an inventive concept. The element “where a premise statement is matched to an input and an associated hypothesis statement defines the intent classification” generally links the abstract idea to a technological environment (i.e. LLMs) and thus fails to integrate the exception and fails to provide an inventive concept. Examiner finds hypothesis statements and intent classifications are fundamental features of training a machine learning model and thus this is why Examiner finds it merely links it to LLM technology. The element “and the semantic query further comprises at least one hypothesis statement for matching with a portion of the extracted text and an associated threshold for returning a positive match for that hypothesis statement” generally links the abstract idea to a technological environment (i.e. LLMs) and thus fails to integrate the exception and fails to provide an inventive concept. Examiner finds “hypothesis statement for matching with a portion of the extracted text and an associated threshold for returning a positive match for that hypothesis statement” is a fundamental feature of training a machine learning model and thus this is why Examiner finds it merely links it to LLM and machine learning technology. With respect to claim 8, the element “8. The method of claim 1, where: one of the at least one general machine learning language model is a semantic context model” generally links the abstract idea to a technological environment and thus does not integrate the exception nor provide an inventive concept.
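Claim 7's structure of matching hypothesis statements against extracted text, with a per-hypothesis threshold for a positive match, can be sketched as below. The scoring function here is a word-overlap stub standing in for a real entailment model, and all names (`entailment_score`, `match_intents`) are illustrative assumptions.

```python
def entailment_score(premise: str, hypothesis: str) -> float:
    # Stub score: fraction of hypothesis words present in the premise text.
    # A real entailment model would score premise/hypothesis pairs instead.
    premise_words = set(premise.lower().split())
    hyp_words = hypothesis.lower().split()
    return sum(w in premise_words for w in hyp_words) / len(hyp_words)

def match_intents(text: str, hypotheses: dict) -> list:
    # Return the hypothesis statements (intent labels) whose score against
    # the extracted text clears that hypothesis's associated threshold.
    return [hyp for hyp, threshold in hypotheses.items()
            if entailment_score(text, hyp) >= threshold]
```

Here the threshold lets each hypothesis trade precision against recall independently, which is the role the claim assigns to the "associated threshold for returning a positive match."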
The element “where sentences that define a semantic context for a given target intent classification are represented as numerical embedding vectors in vector space to encode the semantic meaning; and the semantic query further comprises converting a portion of the extracted text to a point in vector space and calculating a distance from the point to at least one numerical embedding vector” generally links the abstract idea to a technological environment/field of use (i.e. training a machine learning model) and thus does not integrate the exception nor provide an inventive concept. With respect to claim 9, the element “9. The method of claim 8, where: calculating a distance utilizes cosine similarity as a metric for comparison” generally links the abstract idea to a technological environment/field of use (i.e. training a machine learning model) and thus does not integrate the exception nor provide an inventive concept. With respect to claim 10, the element “10. The method of claim 8, where: calculating a distance utilizes vector dot product similarity as a metric for comparison” generally links the abstract idea to a technological environment/field of use (i.e. training a machine learning model) and thus does not integrate the exception nor provide an inventive concept. Claim 11 recites: 11. 
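The distance calculations recited in claims 9 and 10 (cosine similarity and vector dot product similarity over numerical embedding vectors) are standard metrics and can be written out directly; the function names below are illustrative.

```python
import math

def dot_product_similarity(u, v):
    # Claim 10's metric: the raw dot product of two embedding vectors.
    return sum(a * b for a, b in zip(u, v))

def cosine_similarity(u, v):
    # Claim 9's metric: the dot product normalized by both vector magnitudes,
    # so only the angle between the embeddings matters, not their lengths.
    return dot_product_similarity(u, v) / (
        math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))
```

Converting "a portion of the extracted text to a point in vector space," as claim 8 recites, would be done by an embedding model; these functions then compare that point to the stored context vectors.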
A method for executing an autonomous business records data process, the method comprising: receiving a request at an integration platform system from a user account for executing a business records data process, where the business records data process comprises a state model specifying types of input data, execution tasks, and output data while maintaining a current state; creating an execution instance of the state model for the requested business records data process and allocating information fields by the integration platform system; executing the state model of the business records data process by the integration platform system; receiving a request message while executing the state model by the integration platform system; extracting text and metadata from the request message by the integration platform system; executing using the integration platform system a plurality of semantic queries to determine an intent of the request message, by, for each semantic query: where the semantic query specifies at least one general machine language model to be used, and what text and metadata from the message and what textual prompt to provide to each at least one general machine learning language model, and a formatting template specifying how an expected answer from each at least one general machine learning language model should be formatted; providing at least some of the extracted text and metadata and a textual prompt to each at least one general machine learning language model as specified in the semantic query; receiving an answer from each at least one general machine learning language model that includes an indication of an intent classification; and performing a corresponding business action in response to the indicated intent classification.

Examiner finds that the emphasized portions of claim 11 recite an abstract idea—namely, mental processes. See MPEP 2106.04(a)(2)(III).
When read as a whole, the recited limitations are directed to using mental steps to observe, evaluate, and make judgments about data. See id. (“Accordingly, the ‘mental processes’ abstract idea grouping is defined as concepts performed in the human mind, and examples of mental processes include observations, evaluations, judgments, and opinions”). Turning to the elements individually, “extracting text and metadata from the request message” merely requires observation and evaluation of the message and a judgment as to what constitutes text and metadata. The element “to determine an intent of the request message, by, for each semantic query” merely requires observation and evaluation of a textual message and a judgment as to the intent of the message. The element “where the semantic query specifies at least one general machine language model to be used, and what text and metadata from the message and what textual prompt to provide to each at least one general machine learning language model” merely requires a judgment and/or opinion on the part of a human and can be accomplished with the aid of pen and paper. The element “and a formatting template specifying how an expected answer from each at least one general machine learning language model should be formatted” merely requires human judgment as to the model’s expected answer and a judgment as to how to format the expected answer. This task can be practically performed in the human mind with the aid of pen and paper. Turning to the additional elements, “11. A method for executing an autonomous business records data process, the method comprising:” recites mere instructions to implement the abstract idea on a computer and thus fails to integrate the exception. See MPEP 2106.05(f).
The element “receiving a request from a user account for executing a business records data process, where the business records data process comprises a state model specifying types of input data, execution tasks, and output data while maintaining a current state” recites insignificant extra solution activity in the form of mere data gathering and selecting a particular data source or type of data to be manipulated and thus fails to integrate the exception. See MPEP 2106.05(g) (“obtaining information. . .requiring a request from a user. . .”). The element “creating an execution instance of the state model for the requested business records data process and allocating information fields” recites mere instructions to apply the exception and also generally links the abstract idea to a technological field. As such, it does not integrate the exception. The element “executing the state model of the business records data process” recites mere instructions to apply the exception and also generally links the abstract idea to a technological field. As such, it does not integrate the exception. The element “receiving a request message while executing the state model” recites mere instructions to apply the exception and also generally links the abstract idea to a technological field. As such, it does not integrate the exception. The element “executing a plurality of semantic queries” recites mere instructions to apply the exception and also generally links the abstract idea to a technological field. As such, it does not integrate the exception. The element “providing at least some of the extracted text and metadata and a textual prompt to each at least one general machine learning language model as specified in the semantic query” recites mere instructions to apply the exception and also generally links the abstract idea to a technological field. As such, it does not integrate the exception. 
The element “receiving an answer from each at least one general machine learning language model that includes an indication of an intent classification” recites mere instructions to apply the exception and also generally links the abstract idea to a technological field. As such, it does not integrate the exception. There is nothing that, when the additional elements are considered as an ordered combination, causes the abstract idea to be integrated into a practical application. Turning to the additional elements and whether they recite an inventive concept, “11. A method for executing an autonomous business records data process, the method comprising:” recites mere instructions to implement the abstract idea on a computer and thus does not recite an inventive concept. See MPEP 2106.05(f). The element “receiving a request from a user account for executing a business records data process, where the business records data process comprises a state model specifying types of input data, execution tasks, and output data while maintaining a current state” recites “receiving and transmitting data over a network” and thus recites a well-understood, routine, and conventional computer function and thus fails to provide an inventive concept. See MPEP 2106.05(d). The element “creating an execution instance of the state model for the requested business records data process and allocating information fields” recites mere instructions to apply the exception and also generally links the abstract idea to a technological field. As such, it does not provide an inventive concept. See MPEP 2106.05(f). The element “executing the state model of the business records data process” recites mere instructions to apply the exception and also generally links the abstract idea to a technological field. As such, it does not provide an inventive concept. See MPEP 2106.05(f).
The element “receiving a request message while executing the state model” recites mere instructions to apply the exception and also generally links the abstract idea to a technological field. As such, it does not recite an inventive concept. See MPEP 2106.05(f). The element “executing a plurality of semantic queries” recites mere instructions to apply the exception and also generally links the abstract idea to a technological field. As such, it does not recite an inventive concept. See MPEP 2106.05(f). The element “providing at least some of the extracted text and metadata and a textual prompt to each at least one general machine learning language model as specified in the semantic query” recites mere instructions to apply the exception and also generally links the abstract idea to a technological field. As such, it does not recite an inventive concept. See MPEP 2106.05(h). The element “receiving an answer from each at least one general machine learning language model that includes an indication of an intent classification” recites mere instructions to apply the exception and also generally links the abstract idea to a technological field. As such, it does not provide an inventive concept. With respect to “at an integration platform system. . . by the integration platform system. . .” and “. . . using the integration platform system. . . “ Examiner finds these elements do nothing more than generally link the abstract idea to a technological environment or field of use (i.e. an integration platform system). As such, these additional elements fail to integrate and fail to recite an inventive concept. See MPEP 2106.05(h). Claim 12 recites “12. The method of claim 11, further comprising retrieving information to fill the information fields from one or more business records databases.” This additional element recites insignificant extra solution activity in the form of mere data gathering. See MPEP 2106.05(g). As such, it does not integrate the exception. 
This element recites “receiving or transmitting data over a network” and thus recites a well-understood, routine, and conventional computer function and thus does not recite an inventive concept. See MPEP 2106.05(d). Claim 13 recites “13. The method of claim 12, wherein information fields include a client identification.” This element generally links the abstract idea to a technological environment (databases, for example) and thus does not integrate the exception or provide an inventive concept. Claim 14 recites “14. The method of claim 11, wherein performing a corresponding business action comprises constructing and sending an email requesting additional information for at least one of the information fields.” Construction of an email can be practically performed in the human mind with the aid of pen and paper. Sending an email requesting additional information for at least one of the information fields recites insignificant extra solution activity in the form of mere data gathering. As such, it fails to integrate the exception. Sending an email requesting additional information for at least one of the information fields also recites “receiving or transmitting data over a network” and thus recites a well-understood, routine, and conventional computer function and thus does not recite an inventive concept. See MPEP 2106.05(d). Claim 15 recites “15. The method of claim 11, wherein performing a corresponding business action comprises obtaining and storing additional information for at least one of the information fields and storing the information to a business records database” recites insignificant extra solution activity in the form of mere data gathering. As such, it fails to integrate the exception.
This claim also recites “Storing and retrieving information in memory” and thus fails to recite an inventive concept. See MPEP 2106.05(d). Claim 16 recites “16. The method of claim 11, wherein the intent classification is inquiring status of a payment, and performing a corresponding business action comprises check whether the user account has authorization, querying a business records database for an invoice number, and providing to the user account information indicative of the processing status of the invoice number.” This element recites insignificant extra solution activity in the form of “mere data gathering” and “selecting a particular data source or type of data to be manipulated” and thus fails to integrate the exception. See MPEP 2106.05(g) (“Obtaining information about transactions. . . Consulting and updating an activity log. . . selecting information, based on types of information and availability of information. . . for collection, analysis and display.”). This element recites storing and retrieving information in memory and thus fails to provide an inventive concept. See MPEP 2106.05(d). Claim 17 recites “17. The method of claim 11, wherein the intent classification is updating a vendor record and performing a corresponding business action comprises receiving and storing additional information related to the vendor record in a business records database.” This element recites insignificant extra solution activity in the form of “mere data gathering” and “selecting a particular data source or type of data to be manipulated” and thus fails to integrate the exception. See MPEP 2106.05(g) (“Obtaining information about transactions. . . Consulting and updating an activity log. . . selecting information, based on types of information and availability of information. . . for collection, analysis and display.”). This element recites storing and retrieving information in memory and thus fails to provide an inventive concept. See MPEP 2106.05(d). Claim 18 recites “18.
The method of claim 11, wherein the intent classification is receiving a new invoice and performing a corresponding business action comprises detecting the request message includes a new invoice, processing the invoice, and updating a business records database to include the new invoice.” This element recites insignificant extra solution activity in the form of “mere data gathering” and “selecting a particular data source or type of data to be manipulated” and thus fails to integrate the exception. See MPEP 2106.05(g) (“Obtaining information about transactions. . . Consulting and updating an activity log. . . selecting information, based on types of information and availability of information. . . for collection, analysis and display.”). This element recites storing and retrieving information in memory and thus fails to provide an inventive concept. See MPEP 2106.05(d). Claim 19 recites “19. The method of claim 11, wherein the intent classification is requesting copy of a document, and performing a corresponding business action comprises identifying the requested document, retrieving the requested document from a business records database, and providing a copy of the requested document in response to the request message.” This element recites insignificant extra solution activity in the form of “mere data gathering” and “selecting a particular data source or type of data to be manipulated” and thus fails to integrate the exception. See MPEP 2106.05(g) (“Obtaining information about transactions. . . Consulting and updating an activity log. . . selecting information, based on types of information and availability of information. . . for collection, analysis and display.”). This element recites storing and retrieving information in memory and thus fails to provide an inventive concept. See MPEP 2106.05(d). Claim 20 recites: 20.
A method for detecting intent of a textual message, the method comprising: receiving a request message; extracting text and metadata from the request message; executing a plurality of semantic queries to determine an intent of the request message, by, for each semantic query: where the semantic query specifies a large language model (LLM) machine learning language model to be used, what text and metadata from the message to provide to the machine learning language model, a textual prompt to provide to the machine learning language model that includes at least one true/false (Boolean) question, a formatting template specifying how an expected answer from the machine learning language model should be formatted, and criteria for finding and extracting an identification of a business record from the extracted text; providing at least some of the extracted text and metadata and a textual prompt to each machine learning language model as specified in the semantic query; receiving an answer from each at least one general machine learning language model that includes an indication of an intent classification and a confidence level that the intent classification is accurate; and performing a corresponding business action in response to the indicated intent classification.

Examiner finds that the emphasized portions of claim 20 recite an abstract idea—namely, mental processes. See MPEP 2106.04(a)(2)(III). When read as a whole, the recited limitations are directed to using mental steps to observe, evaluate, and make judgments about data. See id. (“Accordingly, the ‘mental processes’ abstract idea grouping is defined as concepts performed in the human mind, and examples of mental processes include observations, evaluations, judgments, and opinions”). Turning to each limitation individually, the element “detecting intent of a textual message” merely requires observation and evaluation of a textual message and a judgment as to the intent of the message.
The element “extracting text and metadata from the request message” merely requires observation and evaluation of the message and a judgment as to what constitutes text and metadata. The element “to determine an intent of the request message, by, for each semantic query” merely requires observation and evaluation of a textual message and a judgment as to the intent of the message. The element “where the semantic query specifies a large language model (LLM) machine learning language model to be used, what text and metadata from the message to provide to the machine learning language model, a textual prompt to provide to the machine learning language model that includes at least one true/false (Boolean) question” merely requires a judgment and/or opinion on the part of a human and can be accomplished with the aid of pen and paper. The element “a formatting template specifying how an expected answer from the machine learning language model should be formatted, and criteria for finding and extracting an identification of a business record from the extracted text” merely requires a judgment and/or opinion on the part of a human and can be accomplished with the aid of pen and paper. The element “performing a corresponding business action in response to the indicated intent classification” is not limited to actions that cannot be practically performed in the human mind with or without the aid of pen and paper.

Turning to the additional elements and whether they integrate the exception, the element “A method …the method comprising” recites mere instructions to implement the abstract idea on a computer and thus fails to integrate the exception. See MPEP 2106.05(f). The additional element “providing at least some of the extracted text and metadata and a textual prompt to each machine learning language model as specified in the semantic query;” generally links the abstract idea to a technological environment (LLMs, for example) and thus does not integrate the exception.
See MPEP 2106.05(h). The additional element “receiving an answer from each at least one general machine learning language model that includes an indication of an intent classification and a confidence level that the intent classification is accurate” generally links the abstract idea to a technological environment (LLMs, for example) and thus does not integrate the exception. See MPEP 2106.05(h). It also recites mere instructions to apply the exception because, among other things, it recites “only the idea of a solution or outcome, i.e., the claim fails to recite details of how a solution to a problem is accomplished. . .” and “. . . invokes computers or other machinery merely as a tool to perform an existing process.” See MPEP 2106.05(f). As such, it does not integrate the exception.

Turning to the additional elements and whether they provide an inventive concept, the element “A method …the method comprising” recites mere instructions to implement the abstract idea on a computer and thus fails to provide an inventive concept. See MPEP 2106.05(f). The additional element “providing at least some of the extracted text and metadata and a textual prompt to each machine learning language model as specified in the semantic query;” generally links the abstract idea to a technological environment (LLMs, for example) and thus does not provide an inventive concept. See MPEP 2106.05(h). The additional element “receiving an answer from each at least one general machine learning language model that includes an indication of an intent classification and a confidence level that the intent classification is accurate” generally links the abstract idea to a technological environment (LLMs, for example) and thus does not provide an inventive concept. See MPEP 2106.05(h). It also recites mere instructions to apply the exception because, among other things, it recites “only the idea of a solution or outcome, i.e., the claim fails to recite details of how a solution to a problem is accomplished. .
.” and “. . . invokes computers or other machinery merely as a tool to perform an existing process.” See MPEP 2106.05(f). As such, it does not provide an inventive concept. With respect to “at an integration platform system . . .” and “. . . using the integration platform system . . .” Examiner finds these elements do nothing more than generally link the abstract idea to a technological environment or field of use (i.e., an integration platform system). As such, these additional elements fail to integrate the exception and fail to recite an inventive concept. See MPEP 2106.05(h).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1, 4-5, 11-15, 17, and 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Bobbarjung US 11,394,667 in view of Shanmugam US 20190311036 A1.

With respect to claim 1, Bobbarjung (US 11,394,667) teaches “A method for detecting intent of a textual message, the method comprising: receiving a request message” in col. 5:35-43 (“An intent identification module 226 determines an intent associated with, for example, a received message. . .”); “extracting text” in col. 5:15-20 (Examiner finds processing text includes extracting text); “and metadata from the request message” in col. 6:63-col. 7:10 (entity and attributes are metadata); col. 13:29-36; col.
13:45-65 (metadata such as city name and state code extracted); “executing a plurality of semantic queries to determine an intent of the request message, by, for each semantic query: where the semantic query specifies at least one general machine learning language model to be used” in col. 16:41-77 (each model (custom chatbot) is configured with a set of intents; Examiner finds the custom chatbot teaches a general machine learning language model because it is not trained on domain-specific data such as customer data; see Applicant’s specification at para. 44); “and what text and metadata from the message and what textual prompt to provide to each at least one general machine learning language model” in col. 16:41-47 (each custom chatbot (model) recognizes particular text and metadata—see col. 17:14-30; metadata includes a specific type of store, for example—“CoolKidsClothes”—see col. 17:52-65; textual prompt is “CoolKidsClothes”—see col. 18:54-67); “and a formatting template specifying how an expected answer from each at least one general machine learning language model should be formatted” in col. 8:62-65; col. 18:19-31 (chatbot skill is the formatting template that specifies how an expected answer should be formatted from the custom chatbot—see col. 18:65-col. 19:4); “providing at least some of the extracted text and metadata and a textual prompt to each at least one general machine learning language model as specified in the semantic query” in col. 18:54-67 (“Where is the nearest CoolKidsClothes store” is provided to custom model (custom chatbot); this includes the prompt, “CoolKidsClothes”); “receiving an answer from each at least one general machine learning language model that includes an indication of an intent classification” in col. 11:24-50 (“matching intent will be added to the result. .
.”; proxy score is intent classification; intent classification also performed via machine learning); “and performing a corresponding business action in response to the indicated intent classification” in col. 18:65-col. 19:4 (business action is store location, for example; business actions also include the following: col. 6:19-25 (contact representative); col. 8:36-56 (render information, render data, sending email, etc.)).

It appears Bobbarjung fails to explicitly teach “at an integration platform system” and “using the integration platform system.” However, Shanmugam teaches “at an integration platform system” and “using the integration platform system” in Fig. 1 item 112 and para. 33. Shanmugam and Bobbarjung are analogous art because they are from the same field of endeavor as the claimed invention. It would have been obvious to one skilled in the art before the effective filing date of the invention to modify “receiving a request message” in Bobbarjung to include “at an integration platform system” as taught by Shanmugam; to modify “extracting text and metadata from the request message” to include “using the integration platform system” as taught by Shanmugam; and to modify “executing” in Bobbarjung to include “using the integration platform system” as taught by Shanmugam. The motivation would have been to “provide . . . continuous self-improvement to reduce fallout and improve accuracy, and also provide efficient allocation and hand-off of tasking between chatbots and live agents.” Shanmugam at para. 6.

With respect to claim 2, Bobbarjung teaches “The method of claim 1 wherein the answer from the machine learning model further comprises a confidence level that the intent classification is accurate” in col. 11:35-44 (“. . . reasonable confidence that this match is good quality. . .”).

With respect to claim 4, Bobbarjung teaches “4.
The method of claim 1, wherein the semantic query further comprises criteria for finding and extracting an identification of a business record from the extracted text” in col. 17:23-30 (Examiner finds customer database objects are business records).

With respect to claim 5, Bobbarjung teaches “5. The method of claim 4, wherein the taking an action in response to the indicated intent classification further comprises: identifying a portion of the extracted text that identifies a business record in response to the indicated intent classification and retrieving the identified business record” in col. 17:23-30 (Examiner finds customer database objects are business records).

With respect to claim 11, Bobbarjung teaches “11. A method for executing an autonomous business records data process” in col. 5:35-43 (“An intent identification module 226 determines an intent associated with, for example, a received message. . .”); “the method comprising: receiving a request from a client device for executing a business records data process” in col. 5:35-43 (“An intent identification module 226 determines an intent associated with, for example, a received message. . .”); col. 18:65-col. 19:4 (business record process is finding a store location, for example; business record data processes also include the following: col. 6:19-25 (contact representative); col. 8:36-56 (render information, render data, sending email, etc.)); “where the business records data process comprises a state model specifying types of input data, execution tasks, and output data while maintaining a current state” in Fig. 6 (state model is Bot name, for example); Fig. 7 (input data includes “Greetings” such as “hi,” “hello,” and “hola”; output data includes “Hello stranger how can I help” and “What is your favorite color”); col. 7:54-58; “creating an execution instance of the state model for the requested business records data process and allocating information fields” in Figs. 6 and 7 (all the fields in Figs.
6-7 are information fields; at least Figs. 6-7 show the creation of a bot (instance of state model)); col. 7:54-63 (information fields include intents, for example); “executing the state model of the business records data process” in col. 7:54-63 (deploying teaches executing); “receiving a request message while executing the state model” in col. 5:35-43 (“An intent identification module 226 determines an intent associated with, for example, a received message. . .”); col. 7:54-63 (deploying teaches executing); (Figs. 7-9 in lower right hand corner give examples of execution of a bot and receiving a message); “extracting text and metadata from the request message” in col. 5:15-20 (Examiner finds processing text includes extracting text); col. 6:63-col. 7:10 (entity and attributes are metadata); col. 13:29-36; col. 13:45-65 (metadata such as city name and state code extracted); “executing a plurality of semantic queries to determine an intent of the request message, by, for each semantic query: where the semantic query specifies at least one general machine language model to be used” in col. 16:41-77 (each model (custom chatbot) is configured with a set of intents; custom chatbots teach a general machine language model—see analysis in claim 1 above); “and what text and metadata from the message and what textual prompt to provide to each at least one general machine learning language model,” in col. 16:41-47 (each custom chatbot (model) recognizes particular text and metadata—see col. 17:14-30; metadata includes a specific type of store, for example—“CoolKidsClothes”—see col. 17:52-65; textual prompt is “CoolKidsClothes”—see col. 18:54-67); “and a formatting template specifying how an expected answer from each at least one general machine learning language model should be formatted” in col. 18:19-31 (chatbot skill is the formatting template that specifies how an expected answer should be formatted from the custom chatbot—see col. 18:65-col.
19:4); “providing at least some of the extracted text and metadata and a textual prompt to each at least one general machine learning language model as specified in the semantic query” in col. 18:54-67 (“Where is the nearest CoolKidsClothes store” is provided to custom model (custom chatbot); this includes the prompt, “CoolKidsClothes”); “receiving an answer from each at least one general machine learning language model that includes an indication of an intent classification;” in col. 11:24-50 (“matching intent will be added to the result. . .”; proxy score is intent classification; intent classification also performed via machine learning); “and performing a corresponding business action in response to the indicated intent classification” in col. 18:65-col. 19:4 (business action is store location, for example; business actions also include the following: col. 6:19-25 (contact representative); col. 8:36-56 (render information, render data, sending email, etc.)).

It appears Bobbarjung fails to explicitly teach “at an integration platform system” and “by the integration platform system” and “using the integration platform system.” However, Shanmugam teaches “at an integration platform system” and “by the integration platform system” and “using the integration platform system” in Fig. 1 item 112 and para. 33. Shanmugam and Bobbarjung are analogous art because they are from the same field of endeavor as the claimed invention. It would have been obvious to one skilled in the art before the effective filing date of the invention to modify “receiving a request” in Bobbarjung to include “at an integration platform system” as taught by Shanmugam; to modify “allocating information fields” and “executing the state model. . .
.process” and “receiving a request message while executing the state model” and “extracting text and metadata from the request message” in Bobbarjung to include “by the integration platform system” as taught by Shanmugam; and to modify “executing” in Bobbarjung to include “using the integration platform system” as taught by Shanmugam. The motivation would have been to “provide . . . continuous self-improvement to reduce fallout and improve accuracy, and also provide efficient allocation and hand-off of tasking between chatbots and live agents.” Shanmugam at para. 6.

With respect to claim 12, Bobbarjung teaches “12. The method of claim 11, further comprising retrieving information to fill the information fields from one or more business records databases” in col. 13:29-41 (business records include products); col. 17:23-41 (data sources such as customer databases are business records databases; information is returned and filled in (“filtered”)); col. 18:39-40.

With respect to claim 13, Bobbarjung teaches “13. The method of claim 12, wherein information fields include a client identification” in col. 8:51-52.

With respect to claim 14, Bobbarjung teaches “14. The method of claim 11, wherein performing a corresponding business action comprises constructing and sending an email requesting additional information for at least one of the information fields” in col. 8:55-56.

With respect to claim 15, Bobbarjung teaches “15. The method of claim 11, wherein performing a corresponding business action comprises obtaining and storing additional information for at least one of the information fields and storing the information to a business records database” in col. 8:51-55 (knowledge bases include business records database—see col. 17:23-30).

With respect to claim 17, Bobbarjung teaches “17.
The method of claim 11, wherein the intent classification is updating and performing a corresponding business action comprises receiving and storing additional information related to the vendor record in a business records database” in col. 19:5-13 (restaurant record is vendor record; additional information about restaurant is the additional information received and stored; reservation system is the business database that stores the data; reservation system is updated with new reservation once customer completes reservation).

With respect to claim 19, Bobbarjung teaches “19. The method of claim 11, wherein the intent classification is requesting copy of a document” in col. 17:23-30 (Examiner finds customer database objects are documents; requesting said objects must involve at least a copy in the computer’s memory); col. 15:5-10 (request a document requires the document to be copied into computer memory); “retrieving the requested document from a business records database” in col. 17:23-30 (Examiner finds customer database is business records database); col. 15:5-10 (Examiner finds knowledge base is a business records database); “and prov
Read full office action

Prosecution Timeline

Oct 27, 2023
Application Filed
Dec 13, 2023
Response after Non-Final Action
Feb 11, 2025
Non-Final Rejection — §101, §103
Jul 14, 2025
Response Filed
Oct 09, 2025
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596919
NEURAL NETWORK ACCELERATOR WITH A CONFIGURABLE PIPELINE
2y 5m to grant Granted Apr 07, 2026
Patent 12585918
ML MODEL DRIFT DETECTION USING MODIFIED GAN
2y 5m to grant Granted Mar 24, 2026
Patent 12585646
INFORMATION PROVISION DEVICE
2y 5m to grant Granted Mar 24, 2026
Patent 12579154
SYSTEM AND METHOD OF INFORMATION EXTRACTION, SEARCH AND SUMMARIZATION FOR SERVICE ACTION RECOMMENDATION
2y 5m to grant Granted Mar 17, 2026
Patent 12572810
System and Method For Generating Improved Prescriptors
2y 5m to grant Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
82%
Grant Probability
95%
With Interview (+12.9%)
3y 1m
Median Time to Grant
Moderate
PTA Risk
Based on 712 resolved cases by this examiner. Grant probability derived from career allow rate.
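As a quick sanity check on how the projection figures relate, the displayed percentages appear consistent with the underlying counts. The sketch below is my own back-of-the-envelope arithmetic, and it assumes (the tool's methodology is not documented here) that the interview lift is simply added to the career allow rate:

```python
# Back-of-the-envelope check of the dashboard figures.
# Assumption: the +12.9% interview lift is additive to the career allow rate.
granted, resolved = 583, 712
base_rate = granted / resolved          # career allow rate
interview_lift = 0.129                  # reported lift in cases with an interview

print(round(base_rate * 100))                     # 82 -> "82% Grant Probability"
print(round((base_rate + interview_lift) * 100))  # 95 -> "95% With Interview"
```

So 583/712 ≈ 81.9% rounds to the displayed 82%, and adding the 12.9-point lift lands on the displayed 95%.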
