DETAILED ACTION
Acknowledgement
This final office action is in response to the amendment filed on 10/17/2025.
Status of Claims
Claims 8 and 18 have been canceled.
Claims 1, 3-5, 7, 11, 13-15, and 17 have been amended.
Claims 21 and 22 have been added.
Claims 1-7, 9-17, and 19-22 are now pending.
Response to Arguments
Applicant's arguments filed on 10/17/2025 regarding the 35 U.S.C. 101, 102, and 103 rejections of the claims have been fully considered. The Applicant argues the following.
(1) As per the 101 rejection, the Applicant argues, in summary, that (i) the claims recite operations that do not fall within any of the enumerated groupings of abstract ideas. The "performing a graph search over the one or more data sources, and wherein a data source of the one or more data sources comprises a knowledge graph...and displaying on an electronic display...." limitations recite operations that the human mind is not equipped to perform; (ii) the claims integrate any alleged abstract idea into a practical application. The additional elements identified at Step 2A prong one, in combination with the alleged abstract idea(s), provide a solution to a technical problem encountered by document management and retrieval systems. The claims recite a method and system that improves upon conventional document management systems; and (iii) claims 1 and 11 recite significantly more than the alleged abstract idea. The Applicant asserts that at least the "large language model, performing a graph search over the one or more data sources" and "wherein a data source of the one or more data sources comprises a knowledge graph comprising...." limitations are additional elements that do not recite well-understood, routine, or conventional activity. The Office Action does not provide a statement, court decision, or publication indicating that the limitations (e.g. "large language model" and "performing a graph search") are widely prevalent or in common use.
The Examiner respectfully disagrees with all arguments. As per argument (i), the Examiner submits that the claims are directed to the abstract groupings of Mental Processes and Certain Methods of Organizing Human Activity because the claims describe a process of analyzing and extracting matter information in order to generate a query (i.e. request) to search for and display relevant documents, which can practically be performed in the human mind via evaluation and observation. The human mind can practically search knowledge graph documents (e.g. org charts/workflow charts) via observation and evaluation. Mental Processes include claims directed to collecting information, analyzing it, and displaying certain results of the collection and analysis even if they are claimed as being performed on a computer. These claims also reflect workflow instructions of fulfilling a search request for documents that can be performed by a human without a computer (i.e. certain methods of organizing human activity). Certain Methods of Organizing Human Activity can encompass managing personal behavior (e.g. assisting in document selection/retrieval), following rules or instructions (e.g. workflow), and certain activity between a person and a computer (e.g. initiating a request for documents). Per MPEP 2106.04(a), a claim recites a judicial exception when the judicial exception is “set forth” or “described” in the claim.
As per argument (ii), the Examiner submits that the additional elements recited in the claims and listed in Step 2A prong 2 do not integrate the abstract idea into a practical application because the additional elements do not improve the functioning of a computer or improve another technology or technical field. Document management and retrieval is an abstract process, and the improvement of an abstract process is not an improvement in technology (see MPEP 2106.05). The additional elements in the claims improve upon document retrieval to help a user efficiently complete various tasks, which is not technical in nature. There is no direct improvement to the functioning of the computer or other technological components or systems. The claims reflect the use of specific technology (e.g. ML models and a computer) to automate an abstract process of document retrieval and completion.
As per argument (iii), the Examiner submits that the additional elements recited in the claims and listed in Step 2B do not reflect significantly more, based on the reasoning in Step 2A prong 2. The Examiner did not state that the additional elements were well-understood, routine, or conventional; the Step 2B conclusion instead rests on the additional elements being mere instructions to apply the abstract idea on a computer. Accordingly, the Examiner was not required to comply with the Berkheimer Memorandum under the 2019 PEG, and this argument is considered moot.
The 35 U.S.C. 101 rejection is maintained for the reasons presented above.
(2) As per the 102 and 103 rejections, the Applicant argues, in summary, that Claims 1 and 11, as amended, are neither anticipated by Tomasic nor rendered obvious by the combination of Tomasic and Pendyala because the references fail to teach "wherein searching the one or more data sources comprises performing a graph search over the one or more data sources, and wherein a data source of the one or more data sources comprises a knowledge graph comprising documents, people, and matters".
The Examiner finds the Applicant’s arguments persuasive. Therefore, the previous 102 and 103 rejections have been withdrawn. However, upon further search and consideration, a new ground of 103 rejection for claims 1 and 11 is made. See details below.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-7, 9-17, and 19-22 are rejected under 35 U.S.C. 101 because the claimed invention, “Systems and Methods for Automatic Electronic Document Search and Recommendation”, is directed to an abstract idea, specifically Mental Processes and Certain Methods of Organizing Human Activity, without significantly more. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements individually or in combination provide mere instructions to implement the abstract idea on a computer.
Step 1: Claims 1-7, 9-17, and 19-22 are directed to a statutory category, namely a process (claims 1-7, 9-10, and 21) and a machine (claims 11-17, 19-20, and 22).
Step 2A (1): Independent claims 1 and 11 are directed to an abstract idea of Mental Processes and Certain Methods of Organizing Human Activity, based on the following claim limitations: “receiving/receive matter information relating to the matter; extracting/extract…key phrases from the matter information; generating/generate a query from the key phrases; searching/search one or more data sources for… documents using the query, wherein searching the one or more data sources comprises performing a graph search over the one or more data sources, and wherein a data source of the one or more data sources comprises a knowledge graph comprising documents, people, and matters; and displaying/display,…, one or more…documents relevant to the query.” These claims describe a process of analyzing and extracting matter information in order to generate a query (i.e. request) to search for and display relevant documents, which can be practically performed in the human mind with pen and paper (i.e. mental processes). These claims also reflect workflow instructions of fulfilling a search request for documents that can be performed by a human without a computer (i.e. certain methods of organizing human activity).
Dependent claims 2-3, 6-7, 9-10, 12-13, 16-17, and 19-22 further describe the analysis and extraction of matter information for the searching, displaying, and transmitting of relevant forms/documents to a user with the limitations of “wherein the matter information is provided in a matter intake form (claims 2 and 12); wherein the matter intake form is automatically generated by: receiving input data from one or more sources, the input data relating to a matter; classifying,…, a matter classification based at least in part on the input data; selecting one or more matter forms based on the matter classification; generating,…, field-field data pairs based on the input data and the one or more matter forms; and automatically populating fields of the one or more matter forms using the field-field data pairs (claims 3 and 13); further comprising converting/convert the key phrases into one or more vectors, and the searching the one or more data sources comprises finding similarity between the one or more vectors of the key phrases with vectors of the electronic documents stored in the one or more data sources (claims 6 and 16); further comprising classifying/classify, by a document classifier, the matter information into one or more classifications, wherein the query is based at least in part on the one or more classifications (claims 7 and 17); further comprising displaying/display,… a matter title field and a matter description field both operable to receive the matter information from a user (claims 9 and 19); further comprising retrieving/retrieve historical user data, wherein the query is based at least in part on the historical user data (claims 10 and 20); further comprising transmitting/transmit the intake matter form to an entity configured to autonomously perform a task according to the intake matter form (claims 21 and 22).”
Therefore, these limitations, under the broadest reasonable interpretation, fall within the abstract groupings of Mental Processes, which include concepts performed in the human mind such as observations, evaluations, judgments, and opinions, and Certain Methods of Organizing Human Activity, which encompasses managing personal behavior or relationships or interactions between people including social activities, teaching, and following rules or instructions. Mental Processes include claims directed to collecting information, analyzing it, and displaying certain results of the collection and analysis even if they are claimed as being performed on a computer. The courts have found claims requiring a generic computer or nominally reciting a generic computer may still recite a mental process even though the claim limitations are not performed entirely in the human mind. Certain Methods of Organizing Human Activity can encompass the activity of a single person (e.g. a person following a set of instructions), activities that involve multiple people (e.g. a commercial interaction), and certain activity between a person and a computer (e.g. a method of anonymous loan shopping). Therefore, claims 1-7, 9-17, and 19-22 are directed to an abstract idea and are not patent eligible.
Step 2A (2): This judicial exception is not integrated into a practical application. In particular, claims 1, 3-6, 9, 11, 13-16, and 19 recite additional elements of “electronic documents; a large language model; electronic display (claims 1 and 11); a matter classifier; a second large language model (claims 3 and 13); matter classifier comprises one of a support vector machine, K-nearest neighbor classifier, Naive Bayes classifier, or a logistic regression classifier (claims 4 and 14); one or more sources comprises a chat bot interface, and wherein the chat bot interface utilizes a chat bot model trained to generate one or more questions to elicit the input data from the user (claims 5 and 15); a user interface (claims 9 and 19); a system…comprises: one or more processors; and a memory storing instructions that, when executed by the processor, configure the one or more processors to… (claim 11)”. These additional elements do not integrate the abstract idea into a practical application because the claims do not recite (a) an improvement to another technology or technical field, (b) an improvement to the functioning of the computer itself, (c) implementing the abstract idea with or by use of a particular machine, (d) effecting a particular transformation or reduction of an article, or (e) applying the judicial exception in some other meaningful way beyond generally linking the use of an abstract idea to a particular technological environment. These additional elements, evaluated individually and in combination, are viewed as computing components that are used to perform the abstract process of analyzing and extracting matter information in order to search for, display, and transmit relevant documents to a user. Limitations that recite mere instructions to implement an abstract idea on a computer, or that merely use a computer as a tool to perform an abstract idea, are not indicative of integration into a practical application (see MPEP 2106.05(f)).
Therefore, claims 1-7, 9-17, and 19-22 do not include individual or a combination of additional elements that integrate the judicial exception into a practical application and thus are not patent eligible.
Step 2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. Claims 1, 3-6, 9, 11, 13-16, and 19 recite additional elements of “electronic documents; a large language model; electronic display (claims 1 and 11); a matter classifier; a second large language model (claims 3 and 13); matter classifier comprises one of a support vector machine, K-nearest neighbor classifier, Naive Bayes classifier, or a logistic regression classifier (claims 4 and 14); one or more sources comprises a chat bot interface, and wherein the chat bot interface utilizes a chat bot model trained to generate one or more questions to elicit the input data from the user (claims 5 and 15); a user interface (claims 9 and 19); a system…comprises: one or more processors; and a memory storing instructions that, when executed by the processor, configure the one or more processors to… (claim 11)”. These additional elements, evaluated individually and in combination, are viewed as mere instructions to apply or implement the abstract idea on a computer. The use of trained machine learning models is considered an instruction to apply or implement a model on a computer. Applying an abstract idea on a computer does not integrate a judicial exception into a practical application or provide an inventive concept (see MPEP 2106.05(f)). Therefore, claims 1-7, 9-17, and 19-22 do not include individual or a combination of additional elements that are sufficient to amount to significantly more than the judicial exception and thus are not patent eligible.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-4, 7, 9-14, 17, and 19-22 are rejected under 35 U.S.C. 103 as being unpatentable over Tomasic et al. (US 2006/0235691 A1) in view of Crabtree et al. (US 2024/0386015 A1).
As per claims 1 and 11 (Currently Amended): Tomasic teaches a method of displaying electronic documents relevant to a matter, the method comprising (Tomasic e.g. A method for processing a request made by a requester is provided. The method may include receiving at least one request from at least one requester, the request being in the form of natural language; analyzing the request with an agent; selecting at least one form based on analyzing the request… executing at least one update based on at least one form;…(Abstract). With reference to FIG. 11, a screen display 1102 is illustrated which includes a list 1104 of different forms 1106, 1108, 1110, 1112, 1114 that potentially may address the user's 102 request (which is shown in field 1116) [0057].); and Tomasic teaches a system for recommending electronic documents comprising: one or more processors; and a memory storing instructions that, when executed by the processor, configure the one or more processors to (Tomasic e.g. FIGS. 1 through 3, exemplary process and system embodiments of the invention are provided [0032].):
Tomasic teaches receiving/receive matter information relating to the matter; (Tomasic e.g. At step 2, as shown, a user 102 communicates input data through a communication medium or media 106 to an intermediary agent 108. The input data may be supplied by the user 102 in natural language or free text form, for example, or may be in a constrained text format, such as if the intermediary agent 108 has been trained to recognize a certain format or syntax for the input data. In various embodiments of the invention, the input data supplied by the user 102 includes a request that can be addressed by a function to be performed by the intermediary agent 108. The input data may include one or more of the following kinds of communications, for example and without limitation: an e-mail, an instant message (IM), a pager message, a message recorded on a phone, a web page, a hyperlink, a short message service (SMS) message, a document, a publishing syndication field, a blog, a speech-to-text generated message, and/or any other form of communication (Figs. 1-3 and [0032].)
Tomasic teaches extracting/extract, by a machine learning model, key phrases from the matter information; (Tomasic e.g. In general, the intermediary agent 108 is configured to perform one or more functions with the input data in step 6 including… filling the form or forms with information responsive to the input data (step 6C); (Fig. 2 and [0034]). The intermediary agent 108 may also include an analysis module 108B, a planner module 108C, an extractor module 108D, and/or one or more other modules 108E that analyze the input data to select and fill an appropriate form, for example, responsive to the user 102 request, among other functions [0035]. The intermediary agent 108 may also include a repository 108F or other appropriate data storage medium for storing, among other things: rules that the agent 108 employs to analyze the input data; information extracted from forms or the input data; and/or one or more forms that have been assembled for selection by the agent 108 to respond to requests received from the user 102 (Fig. 3 and [0036]). Enhanced natural language analysis is provided by applying information extraction technology to accomplish a form of semantic parsing [0046]. The present invention may employ one or more extractor modules 108D [0049]. FIG. 12 includes a combined system architecture/process flow schematic in accordance with various embodiments of the invention [0017]. With reference to FIG. 12, an example of a process and system architecture including a machine learning computer agent or service agent assistant 1202 is shown [0102]. The agent 1202 may include multiple components for facilitating and/or processing requests 1208, such as the analysis module 1202A, the learning module 1202C, and the execution module 1202B [0109]. The analysis module 1202A may also perform extraction by using a machine-learning based extractor model that can be executed for every field or control element of various forms.
The analysis module 1202A may also perform entity recognition: each relation may have a designated set of key attributes, and each value generated by an extractor for these attributes may be used to attempt to look up the associated instance of the relation. A matched instance can be used as the suggested value for an associated form [0109].)
Tomasic teaches generating/generate a query from the key phrases; (Tomasic e.g. The present invention may employ one or more extractor modules 108D [0049]. For each extracted attribute value, a database can be probed to determine if that value occurs in the appropriate attribute. Probes help the classifier recognize cases where a user 102 references an attribute's actual value. These "probes" into the database may use exact matching; or, alternatively, soft matching may be employed [0050]. The analysis module 108B of the agent 108 executes a library of information extraction modules on the request and extracts data for the employee. These extractions are looked up in the database or the repository 108F to determine if they already exist [0054]. The analysis module 1202A may also perform entity recognition: each relation may have a designated set of key attributes, and each value generated by an extractor for these attributes may be used to attempt to look up the associated instance of the relation. A matched instance can be used as the suggested value for an associated form (Fig. 12 and [0109]).)
Tomasic teaches searching/search one or more data sources for electronic documents using the query,…; and (Tomasic e.g. For each extracted attribute value, a database can be probed to determine if that value occurs in the appropriate attribute. Probes help the classifier recognize cases where a user 102 references an attribute's actual value. These "probes" into the database may use exact matching; or, alternatively, soft matching may be employed [0050]. The analysis module 108B of the agent 108 executes a library of information extraction modules on the request and extracts data for the employee. These extractions are looked up in the database or the repository 108F to determine if they already exist [0054]. The analysis module 1202A may also perform entity recognition: each relation may have a designated set of key attributes, and each value generated by an extractor for these attributes may be used to attempt to look up the associated instance of the relation. A matched instance can be used as the suggested value for an associated form (Fig. 12 and [0109]).),
Tomasic teaches displaying/display, on an electronic display, one or more electronic documents relevant to the query (Tomasic e.g. Referring now to FIGS. 2 and 3, in step 8, based on the analysis performed by the intermediary agent 108, at least one form or list responsive to the input data may be presented for viewing by the user 102 [0038]. Multiple forms may be presented to the user in step 8. Such multiple forms may be prioritized in a list, for example, presented to the user 102 based on assessment and selection by the intermediary agent 108 of the form that most closely corresponds to the user 102 request, the next best form that corresponds to the user 102 request, and so on [0039]. With reference to FIG. 11, a screen display 1102 is illustrated which includes a list 1104 of different forms 1106, 1108, 1110, 1112, 1114 that potentially may address the user's 102 request (which is shown in field 1116) [0057].).
Tomasic does not explicitly teach, however, Crabtree teaches the following:
Crabtree teaches extracting by a large language model (Crabtree e.g. A semantic search system integrates with an AI platform to provide advanced search capabilities by leveraging automatically generated ontologies and knowledge graphs. The system employs natural language processing, machine learning, and large language models to create, update, and align ontologies from diverse data sources (Abstract). According to a preferred embodiment, a computing system for semantic search employing an advanced reasoning platform, the computing system comprising: one or more hardware processors configured for: ...storing and managing knowledge corpora that integrates information from ontologies, semantic indices, and external sources; utilizing user context and preferences to guide semantic search and enable context-aware query interpretation and result personalization; orchestrating semantic search and reasoning workflows by integrating ontology extraction, indexing, search, knowledge graph, and context processing components; and optimizing performance and efficiency of the semantic search system based on workload characteristics and service-level objectives [0030].)
Crabtree teaches wherein searching the one or more data sources comprises performing a graph search over the one or more data sources, and wherein a data source of the one or more data sources comprises a knowledge graph comprising documents, people, and matters; (Crabtree e.g. The present invention is in the field of large-scale distributed computing, and more particularly to programmatically or declaratively constructed distributed graph-based computing platforms for artificial intelligence based search, knowledge curation, decision-making, and automation systems including those employing simulations, machine learning models, and artificial intelligence (AI) applications including large language models, generative AI, and associated AI-related services across or amongst heterogeneous cloud, large scale automation and control systems, managed data center, edge devices, and wearable/mobile devices [0026]. Accordingly, the inventor has conceived and reduced to practice, a contextual semantic search and reasoning system which integrates with an AI platform to provide advanced search capabilities by leveraging automatically generated ontologies and knowledge graphs and RAGs and knowledge graph RAGs. The system employs natural language processing, machine learning, and artificial intelligence models (e.g., large language models, diffusion models, variational autoencoders) to create, update, and align or harmonize ontologies from diverse data sources as well as maintain and select specialized narrowly tailored models, knowledge corpora, RAGs, and expert feedback for specialized queries or recommendations when prudent [0029]. According to an aspect of an embodiment, the knowledge graph supports curated symbolic representation of information which supports more complex reasoning and inference tasks [0037]. FIG. 20 is a flow diagram illustrating an exemplary method for using a distributed computation graph system for creating structured representations or knowledge graphs from various data sources and setting up a pipeline for continuous processing and monitoring of that data, according to an embodiment [0064]. The typical knowledge graph comprises nodes representing entities, concepts, and relationships, and edges representing the connections between them. The system employs various reasoning and inference techniques, such as logical reasoning, rule-based inference, and graph pattern matching, to derive new knowledge and insights from the knowledge graph [0091]. Combined with models, logic (e.g., Datalog) the knowledge graph enables complex reasoning tasks such as entity disambiguation, question answering, and recommendation [0092]. FIG. 21 is a block diagram illustrating an exemplary system architecture for a distributed, composite symbolic and non-symbolic AI platform for advanced reasoning 2120, according to an embodiment [0138]. Platform 2120 may further comprise various databases for storing a plurality of data sets, models 2127, vectors/embeddings 2128, and knowledge graphs 2129 [0139]. According to the embodiment, a knowledge graph database 2129 is present comprising symbolic facts, entities, and relations. Platform may use an ontology or schema for the knowledge graph that defines the types of entities, relationships, and attributes relevant to the given AI system's domain. Platform populates the knowledge graph with data from structured sources (e.g., databases) and unstructured sources (e.g., text documents) using information extraction techniques like named entity recognition, relation extraction, and/or co-reference resolution. Knowledge graph database 2129 may comprise a plurality of knowledge graphs, wherein knowledge graphs may be associated with a specific domain [0145].)
The Examiner submits that, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine Tomasic’s automated classification and extraction system with Crabtree’s semantic search system and AI platform, which uses large language models and knowledge graphs, in order to enhance the platform's overall reasoning, decision-making, and problem-solving capabilities, empowering users with intelligent and intuitive search experiences across various domains and applications (Crabtree e.g. [0029]).
As per claims 2 and 12 (Original), Tomasic in view of Crabtree teach the method of claim 1 and the system of claim 11. Tomasic teaches wherein the matter information is provided in a matter intake form (Tomasic e.g. In various embodiments of the invention, the input data supplied by the user 102 includes a request that can be addressed by a function to be performed by the intermediary agent 108. The input data may include one or more of the following kinds of communications, for example and without limitation: an e-mail, an instant message (IM), a pager message, a message recorded on a phone, a web page, a hyperlink, a short message service (SMS) message, a document, a publishing syndication field, a blog, a speech-to-text generated message, and/or any other form of communication (Figs. 1-3 and [0032].)
As per claims 3 and 13 (Currently Amended), Tomasic in view of Crabtree teach the method of claim 2 and the system of claim 12, wherein the matter intake form is automatically generated by:
Tomasic teaches receiving input data from one or more sources, the input data relating to a matter; (Tomasic e.g. At step 2, as shown, a user 102 communicates input data through a communication medium or media 106 to an intermediary agent 108. The input data may be supplied by the user 102 in natural language or free text form, for example, or may be in a constrained text format, such as if the intermediary agent 108 has been trained to recognize a certain format or syntax for the input data. In various embodiments of the invention, the input data supplied by the user 102 includes a request that can be addressed by a function to be performed by the intermediary agent 108. The input data may include one or more of the following kinds of communications, for example and without limitation: an e-mail, an instant message (IM), a pager message, a message recorded on a phone, a web page, a hyperlink, a short message service (SMS) message, a document, a publishing syndication field, a blog, a speech-to-text generated message, and/or any other form of communication (Figs. 1-3 and [0032].)
Tomasic teaches classifying, by a matter classifier, a matter classification based at least in part on the input data; (Tomasic e.g. In general, the intermediary agent 108 is configured to perform one or more functions with the input data in step 6 including, for example, analyzing the input data (step 6A) [0034]. The base workflow classification algorithm or workflow selection model (including, e.g., form selection) uses a k-way boosted decision tree classifier [0050].)
Tomasic teaches selecting one or more matter forms based on the matter classification; (Tomasic e.g. In general, the intermediary agent 108 is configured to perform one or more functions with the input data in step 6 including…selecting a form or forms responsive to the user 102 request represented by the input data (step 6B) (Fig. 2 and [0034]).)
Tomasic teaches automatically generating, by a second … model, field-field data pairs based on the input data and the one or more matter forms; and (Tomasic e.g. The planner 108C may use a backend model that consists of attribute & set of values pairs that result from extraction (that is, any extractor associated with an microform field may generate a set of extracted values from the input), classifiers that recognize references to attributes, a classifier that selects the correct workflow, and microforms with associated business logic to execute the workflow [0053].)
Tomasic teaches automatically populating fields of the one or more matter forms using the field-field data pairs. (Tomasic e.g. In general, the intermediary agent 108 is configured to perform one or more functions with the input data in step 6 including… filling the form or forms with information responsive to the input data (step 6C); (Fig. 2 and [0034]). In the embodiment of the invention shown in FIG. 2, the intermediary agent 108 may include a computer system 108A such as a conventional server, for example, configured for receiving input data from a user 102 or user interface 104 through one or more of the communication media 106. The intermediary agent 108 may also include an analysis module 108B, a planner module 108C, an extractor module 108D, and/or one or more other modules 108E that analyze the input data to select and fill an appropriate form, for example, responsive to the user 102 request, among other functions [0035].)
Tomasic does not explicitly teach this limitation; however, Crabtree teaches generating data using a large language model (Crabtree e.g. The system employs natural language processing, machine learning, and artificial intelligence models (e.g., large language models, diffusion models, variational autoencoders) to create, update, and align or harmonize ontologies from diverse data sources as well as maintain and select specialized narrowly tailored models, knowledge corpora, RAGs, and expert feedback for specialized queries or recommendations when prudent [0029]. A large language model (LLM) is a type of artificial intelligence algorithm that utilizes deep learning techniques and massively large datasets to understand, summarize, generate and predict new content [0183]. All language models are first trained on a set of data, and then they make use of various techniques to infer relationships and then generate new content based on the trained data. Language models are commonly used in natural language processing (NLP) applications where a user inputs a query in natural language to generate a result [0183].)
The Examiner submits that before the effective filing date, it would have been obvious to one of ordinary skill in the art to combine Tomasic's automated classification and extraction system with Crabtree's semantic search system and AI platform that uses large language models in order to enhance the platform's overall reasoning, decision-making, and problem-solving capabilities, empowering users with intelligent and intuitive search experiences across various domains and applications (Crabtree e.g. [0029]).
As per claims 4 and 14 (Currently Amended), Tomasic in view of Crabtree teach the method of claim 3 and the system of claim 13, wherein the matter classifier comprises one of a support vector machine, K-nearest neighbor classifier, Naive Bayes classifier, or a logistic regression classifier.
Tomasic teaches a matter classifier (Tomasic e.g. The learning system 108G may analyze the entire interaction and improve performance of the modules 108B, 108C, 108D, 108E by providing additional training examples [0048]. The base workflow classification algorithm or workflow selection model (including, e.g., form selection) uses a k-way boosted decision tree classifier [0050]. The planner 108C may use a backend model that consists of attribute & set of values pairs that result from extraction (that is, any extractor associated with an microform field may generate a set of extracted values from the input), classifiers that recognize references to attributes, a classifier that selects the correct workflow, and microforms with associated business logic to execute the workflow [0053].)
Tomasic does not explicitly teach that the matter classifier comprises one of a support vector machine, K-nearest neighbor classifier, Naive Bayes classifier, or a logistic regression classifier.
However, Crabtree teaches a classifier of one of a support vector machine, K-nearest neighbor classifier, Naive Bayes classifier, or a logistic regression (Crabtree e.g. Semantic search computing system 3300 can leverage model blending computing system 2125 of the platform 2120 to combine multiple models for better prediction of user intent from queries. This may comprise the development of multiple intent classification models using different machine learning algorithms, such as logistic regression, support vector machines (SVM), or deep learning architectures (e.g., convolutional neural networks, recurrent neural networks) [0328].)
The Examiner submits that before the effective filing date, it would have been obvious to one of ordinary skill in the art to combine the classifier of Tomasic's automated classification and extraction system with Crabtree's semantic search system and AI platform's use of different classification models, such as logistic regression, support vector machines (SVM), or deep learning architectures, for better prediction of user intent from queries (Crabtree e.g. [0328]).
As per claims 7 and 17 (Currently Amended), Tomasic in view of Crabtree teach the method of claim 1 and the system of claim 11, further comprising classifying, by a document classifier, the matter information into one or more classifications, wherein the query is based at least in part on the one or more classifications. (Tomasic e.g. Enhanced natural language analysis is provided by applying information extraction technology to accomplish a form of semantic parsing. Typically, every natural language system has as its core a lexicon, a grammar, and semantic rules. The semantic rules consist of a classifier and the business logic attached to the microforms [0046]. The base workflow classification algorithm or workflow selection model (including, e.g., form selection) uses a k-way boosted decision tree classifier [0050]. The planner 108C may use a backend model that consists of attribute & set of values pairs that result from extraction (that is, any extractor associated with an microform field may generate a set of extracted values from the input), classifiers that recognize references to attributes, a classifier that selects the correct workflow, and microforms with associated business logic to execute the workflow [0053]. The following is a walkthrough of an example interaction with the agent 108. As shown in FIG. 6, the interaction begins when the user 102 sends the agent 108 a request including input data 602: “on Vehicle Stability People page, change name ‘Walt Evenson’ to ‘Walter Evenson’.” The analysis module 108B of the agent 108 executes a library of information extraction modules on the request and extracts data for the employee. These extractions are looked up in the database or the repository 108F to determine if they already exist. All of this information, plus the original request, is given to a classifier that selects the “Modify: Employee” workflow [0054].
In a first example, a user composes and submits a message 1902 to a search engine system or agent, as shown in the exemplary HTML page excerpt of FIG. 19. The system analysis component or module analyzes the message, perhaps based on static code, or on code based on machine learning with gold labels, to perform one or more of the following steps: extraction of strings from the message 1902, classification of the message 1902 to a form, and/or named entity recognition [0116].)
As per claims 9 and 19 (Original), Tomasic in view of Crabtree teach the method of claim 1 and the system of claim 11, Tomasic teaches further comprising displaying, on the electronic display, a user interface comprising a matter title field and a matter description field both operable to receive the matter information from a user (Tomasic e.g. The input data may include one or more of the following kinds of communications, for example and without limitation: an e-mail, an instant message (IM), a pager message, a message recorded on a phone, a web page, a hyperlink, a short message service (SMS) message, a document, a publishing syndication field, a blog, a speech-to-text generated message, and/or any other form of communication (Figs. 1-3 and [0032]). In the embodiment of the invention shown in FIG. 2, the intermediary agent 108 may include a computer system 108A such as a conventional server, for example, configured for receiving input data from a user 102 or user interface 104 through one or more of the communication media 106 [0035]. FIG. 4 includes a sample screen display illustrating various operational aspects in accordance with embodiments of the invention [0014]. FIGS. 6 through 11 include sample screen displays illustrating various operational aspects in accordance with embodiments of the invention [0016].)
As per claims 10 and 20 (Original), Tomasic in view of Crabtree teach the method of claim 1 and the system of claim 11, Tomasic teaches further comprising retrieving historical user data, wherein the query is based at least in part on the historical user data (Tomasic e.g. The input data may be supplied by the user 102 in natural language or free text form, for example, or may be in a constrained text format, such as if the intermediary agent 108 has been trained to recognize a certain format or syntax for the input data [0032]. The intermediary agent 108 may also include a learning system 108G that employs examples derived from historical performance of the agent 108 with regard to processing user 102 requests, such as historical form selection performance, for example [0036]. The system may be implemented using machine learning algorithms (e.g., learning system 108G) [0045]. Various embodiments of the invention demonstrate a form of interaction design wherein user 102 correction of agent 108 inferences for free text input is handled by using interaction with a form (or forms) that may already be part of the user 102 experience. In addition, enhanced natural language analysis is provided by applying information extraction technology to accomplish a form of semantic parsing. Typically, every natural language system has as its core a lexicon, a grammar, and semantic rules. In general, the implementation lexicon is composed of dictionaries, a log of past user 102 input data, and database strings [0046].).
As per claim 21 (New), Tomasic in view of Crabtree teach the method of claim 3, Tomasic teaches further comprising transmitting the intake matter form to an entity configured to autonomously perform a task according to the intake matter form. (Tomasic e.g. The intermediary agent 108 may also be configured to execute or process updates (e.g., database updates) in accordance with receiving the input data or other communications [0036]. The agent 108 can be configured to accept a natural language input description of the intent of the user 102, analyze its contents with the analysis module 108B using semantic parsing, generate a representation of the intent, and then pass this information to the planner 108C. The planner 108C assembles a workflow that implements the intent by combining microforms into a dynamic form that is presented to the user 102 for approval [0044]. Finally, the user 102 may approve the workflow (see step 14) for execution of one or more updates (see step 6E). Since business logic can be implemented as microforms, users 102 are effectively executing complex workflow on the underlying database or other data structures when they approve a workflow (e.g., see step 14) that has been assembled by the planner 108C from microforms (Figs. 2-3 and [0044]).)
As per claim 22 (New), Tomasic in view of Crabtree teach the system of claim 13, Tomasic teaches wherein the intake matter form is computer-readable, and wherein the instructions further configure the one or more processors to transmit the intake matter form to an entity configured to autonomously perform a task according to the intake matter form. (Tomasic e.g. The intermediary agent 108 may also be configured to execute or process updates (e.g., database updates) in accordance with receiving the input data or other communications [0036]. The agent 108 can be configured to accept a natural language input description of the intent of the user 102, analyze its contents with the analysis module 108B using semantic parsing, generate a representation of the intent, and then pass this information to the planner 108C. The planner 108C assembles a workflow that implements the intent by combining microforms into a dynamic form that is presented to the user 102 for approval [0044]. Finally, the user 102 may approve the workflow (see step 14) for execution of one or more updates (see step 6E). Since business logic can be implemented as microforms, users 102 are effectively executing complex workflow on the underlying database or other data structures when they approve a workflow (e.g., see step 14) that has been assembled by the planner 108C from microforms (Figs. 2-3 and [0044]).)
Claims 5 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Tomasic et al. (US 2006/0235691 A1) in view of Crabtree et al. (US 2024/0386015 A1) and in further view of Agarwal et al. (US 2023/0333867 A1).
As per claims 5 and 15 (Currently Amended), Tomasic in view of Crabtree teach the method of claim 3 and the system of claim 13. Tomasic in view of Crabtree and Agarwal teach wherein the one or more sources comprises a chat bot interface, and wherein the chat bot interface utilizes a chat bot model trained to generate one or more questions to elicit the input data from the user.
Tomasic teaches wherein the one or more sources comprises a user interface (Tomasic e.g. As applied herein, the term "user" may include any entity or multiple entities capable of communicating input data to a device such as a computer system configured for processing text input. For example, a "user" may be a human user, a computer system or an interface between a human user and a device or system configured to receive input data, such as text input. One example of a "user interface" is a conventional speech-to-text conversion module capable of receiving verbal communication from a human user and converting the verbal communication into text input [0020]. Communication of the input data by the user 102 may occur through a variety of user interfaces 104 including, for example and without limitation, a computer system 104A, a personal data assistant (PDA) 104B, a notebook 104C, a wireless or wireline variety of telephone 104D, and/or a speech-to-text conversion module 104E. It can be appreciated that any user interface 104 capable of providing suitable input data from the user 102 to the intermediary agent 108 may be employed within the scope of the invention (Fig. 3 and [0033]).)
Neither Tomasic nor Crabtree explicitly teaches this limitation; however, Agarwal teaches a chat bot interface, and wherein the chat bot interface utilizes a chat bot model trained to generate one or more questions to elicit the input data from the user (Agarwal e.g. In accordance with various embodiments, a system and method for providing a natural language interface or conversation/chat interface for interacting with an automated software assistant (AA) and/or a human assistant (HA) when completing an electronic form is disclosed. For example, the electronic form may be a mortgage application soliciting input from the user in the form of questions wishing to obtain a mortgage loan. However, the presently described embodiments may be implemented in any electronic form used to obtain data [0043]. In one embodiment, the method is configured to present the user with a chat interface configured to guide the user through the fields of the form by asking them questions corresponding to the data fields of the electronic form [0044]. In particular, the chat interface provides user with AA configured to use a conversation (i.e., natural language questions and answers) format to elicit user responses required to complete the form [0045]. The AA may use machine learning algorithms trained on existing HA and user interaction data to predict how the user is likely to feel about any particular topic or line of questioning [0050]. In some embodiments, the AA may utilize a machine learning classifier that is trained based at least in part on pairs of user statements in different parts of the workflow, expressing sentiment and participant responses to those statements of sentiment extracted from the prior message exchange threads [0054].)
The Examiner submits that before the effective filing date, it would have been obvious to one of ordinary skill in the art to modify Tomasic in view of Crabtree's automated classification and extraction system to include a source comprising a chat bot interface as taught by Agarwal in order to provide users with meaningful assistance as they attempt to complete an electronic form, improve customer experience, and successfully complete the task of submitting a form (Agarwal e.g. [0040] and [0050]).
Claims 6 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Tomasic et al. (US 2006/0235691 A1) in view of Crabtree et al. (US 2024/0386015 A1) and in further view of Sahni et al. (US 2022/0277242 A1).
As per claims 6 and 16 (Original), Tomasic in view of Crabtree teach the method of claim 1 and the system of claim 11. Neither Tomasic nor Crabtree explicitly teaches, however, Sahni teaches further comprising converting the key phrases into one or more vectors, and the searching the one or more data sources comprises finding similarity between the one or more vectors of the key phrases with vectors of the electronic documents stored in the one or more data sources. (Sahni e.g. A case assistant is provided to client support professionals, which utilizes robotic process automation (RPA) technologies to analyze large amounts of data related to historical client cases that are similar to current open cases, data related to skilled experts associated with similar client cases, and data related to business exceptions (Abstract). Several processes are utilized to provide this data to client support professionals, including a document similarity finder that utilizes a vector data collector, a tokenizer, a stop word remover, a relevance finder, and a similarity finder, several of which utilize a variety of machine learning technologies. Additional processes include a skilled experts finder and a business exceptions finder (Abstract).
In one embodiment, identifying historical closed cases that are similar to the current open case, and generating ranked similar case data further includes: aggregating, processing, and filtering the obtained current open case data to generate aggregated open case vector data; aggregating, processing, and filtering the obtained historical closed case data to generate aggregated closed case vector data;...providing one or more similarity processes with the weighted open case vector data and the weighted closed case vector data to generate a list of historical closed cases that are similar to the current open case; assigning each historical closed case in the list of similar historical closed cases a similarity ranking; and generating ranked similar case data [0021]. In one embodiment, vector data collector 218 is the core integration program that collects data from source systems for several defined vectors. In various embodiments, vector data collector 218 not only collects/aggregates data, but also filters and formats the collected data. In one embodiment, vector data collector 218 analyzes and processes the data to determine which data elements are needed by the various machine learning components utilized by document similarity finder 216 of case assistant system 102. In one embodiment, vector data collector 218 retrieves data from case intake and management database 242. In one embodiment, data retrieved by vector data collector 218 may include data such as, but not limited to, details of clients and client cases, as represented by current open case data 246, historical closed case data 248, and client data 244 [0103]. 
In one embodiment, all, or part of, the data stored in case intake and management database 242, including but not limited to, all, or part of, historical closed case data 248, current open case data 246, and client data 244 is provided to document similarity finder 216 for the purpose of identifying historical closed cases that are similar to current open cases being worked by client support professional 210 [0095].)
The Examiner submits that before the effective filing date, it would have been obvious to one of ordinary skill in the art to combine Tomasic in view of Crabtree’s automated classification and extraction system with Sahni’s Case Assistant system that converts input data into vectors, searches data sources, and finds similarity between vectors in order to effectively and efficiently utilize the businesses' resources and data assets to improve the quality and speed of case resolution (Sahni e.g. [0003]).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Ayanna Minor whose telephone number is (571) 272-3605. The examiner can normally be reached M-F, 9 am-5 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jerry O'Connor can be reached at 571-272-6787. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/A.M./Examiner, Art Unit 3624
/Jerry O'Connor/Supervisory Patent Examiner, Group Art Unit 3624