DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA and is in response to communications filed on 3/11/2026 in which claims 1-20 are presented for examination.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 3/11/2026 has been entered.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 6-12, 14-18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Janakira et al. US 20200372057 A1 (hereinafter referred to as “Janakira”) in view of Mielke et al. US 20230135179 A1 (hereinafter referred to as “Mielke”) and further in view of Lewis et al. US 20230350931 A1 (hereinafter referred to as “Lewis”).
As per claim 1, Janakira teaches:
A method comprising:
at a computing system, using network crawling to identify a plurality of example query expressions stored at various different locations on a network (Janakira, [0026] – The multi-order query result system generates domain-specific computer code segments by providing contextual data (e.g., domain-specific language) to a large language model. [0053] – The server(s) 104 comprise(s) a distributed server where the server(s) 104 include(s) a number of server devices distributed across the network 112 and located in different physical locations. The server(s) 104 can comprise one or more content servers, application servers, container orchestration servers, communication servers, web-hosting servers, machine learning server, and other types of servers. [0068]),
the example query expressions in the plurality of example query expressions having one or more specific data access functions, specific data analytics functions, or specific data enrichment functions, and having specific arguments (Janakira, [0005] – The disclosed systems can further execute the generated computer code for the first context-defining query subcomponent to access the indicated first contextual data source for generating a first result to the first context-defining query subcomponent. [0023]);
storing the plurality of example query expressions in storage at the computing system (Janakira, [0043] – The server(s) 104 may generate, track, store, process, receive, and transmit electronic data, such as multi-order text queries, results, actions, determinations, responses, query-component-specific computer code, interactions with interface elements, and/or interactions between user accounts or client devices);
transmitting, over a network connection, the example query expressions to a large language model system (Janakira, [0026] – Specifically, the multi-order query result system provides or transmits the multi-order text query to a large language model);
providing from the computing system, over the network connection, a centrally managed ontology to the large language model system (Janakira, [0023] – Result system can generate a response to this multi-order text query by accessing a first contextual data source corresponding to scheduled meetings for the user account and a second contextual data source corresponding to a user account ontology, and by further generating a result (e.g., a determination) that combines data from both contextual data sources to indicate a date and time);
providing, over the network connection, at least a portion of the plurality of the annotated skills in the ontologically typed graph to the large language model system (Janakira, [0005] – The disclosed systems provide or transmit the first context-defining query subcomponent to a large language model for domain-specific computer code pertaining to the first context-defining query subcomponent); and
Janakira doesn’t explicitly teach annotations, ontological types, or suggestions to the user for user confirmation; however, Mielke teaches:
receiving from the large language model system, a plurality of annotated skills (Mielke, [0261] – The assistant system 140 may use a large language model (e.g., like GPT-3) as a chat bot/user simulator to perform QA test on assistant updates. [0099] – The entity resolution module 212 may additionally extract features from contextual information, which is accessed from dialog history between a user and the assistant system 140. The entity resolution module 212 may further conduct global word embedding, domain-specific embedding, and/or dynamic embedding based on the contextual information. The processing result may be annotated with entities by an entity tagger. Based on the annotations, the entity resolution module 212 may generate dictionaries),
the annotated skills being genericized versions of the example query expressions (Mielke, [0113] – The NLG component 372 may use different language models and/or language templates to generate natural-language outputs. The generation of natural-language outputs may be application specific. The generation of natural-language outputs may be also personalized for each user, wherein the templates output for use by each individual user are interpreted as annotated, genericized versions of the query expressions),
normalized to the centrally managed ontology (Mielke, [0099] – The entity resolution module 212 may tokenize text by text normalization, extract syntax features from text, and extract semantic features from text based on NLP),
using the skill ontological types, storing an ontologically typed graph having skills in the plurality of skills coupled to each other through in-common, normalized, ontological types (Mielke, [0091] – Intents that are common to multiple domains may be processed by the meta-intent classifier);
receiving over the network, from the large language model system a message indicating an investigation skill to be invoked (Mielke, [0077] – These actions may include providing information or suggestions to the user. In particular embodiments, the actions may interact with agents 228a/b, users, and/or the assistant system 140 itself. These actions may comprise actions including one or more of a slot request, a confirmation, a disambiguation, or an agent execution. [0116] – Results may be forwarded to the arbitrator).
It would have been obvious for one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify Janakira’s invention in view of Mielke in order to include annotations, ontological types, and suggestions to the user for user confirmation. This is advantageous because annotations allow for classification of particular items within ontologies, allow the system to learn preferences from users' historical behavior and accurately suggest interactions that users may value, and generate highly predictive proactive suggestions based on micro-context understanding (Mielke, paragraph [0088]).
Janakira as modified doesn’t teach that specific instances of ontological types are replaced with genericized, non-specific ontological types; however, Lewis teaches:
wherein ontological entities comprising specific instances of ontological types having applied values in the example query expressions are replaced with corresponding genericized ontological types (Lewis, [0136] – These ML model(s) may be replaced by an ML model configured for generating a set of entities and relationships thereto based on the expanded search query and a corpus of text 40… The ML model may be configured for predicting and/or identifying from the corpus of text a set of entity pairs and relationships associated with a set of entities associated with the search query, each predicted/identified entity pair comprising an entity of a first type and an entity of a second type having an associated relationship [there]between identified from the corpus of text 402),
such that the annotated skills, comprising genericized skill ontological types, are reusable across a plurality of investigations by application of the genericized ontological types to investigation contexts different from those of the example query expressions (Lewis, [0085] – An ML model may be configured to expand the search query by genericising and/or specificising the entities, entity concepts, terms of the search query and using these for expanding the search query. For example, the ML model may be generated from an ML technique by specific training data instances or labelled training data items from a training dataset for, by way of example only but not limited to, biological entities and/or relationships thereto, wherein expanding the search is interpreted as reusing the types across investigations);
It would have been obvious for one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify Janakira’s invention as modified in view of Lewis in order to generalize types within an ontology. This is advantageous because the set of entity pairs and relationships may be used for, without limitation, updating and/or building knowledge graphs (Lewis, paragraph [0136]).
As per claim 6, Janakira as modified teaches:
The method of claim 1, wherein the plurality of example query expressions comprises functionality for generating a log (Mielke, [0099] – The user profile data may also include user interests and preferences on a plurality of topics, aggregated through conversations on news feed, search logs, messaging platforms, etc.).
As per claim 7, Janakira teaches:
A method comprising:
at a large language model system, consuming a plurality of example query expressions, the example query expressions in the plurality of example query expressions comprising one or more specific data access functions, specific data analytics functions, or specific data enrichment functions, and having specific arguments (Janakira, [0005] – The disclosed systems can further execute the generated computer code for the first context-defining query subcomponent to access the indicated first contextual data source for generating a first result to the first context-defining query subcomponent. [0023]);
at the large language model system receiving a centrally managed ontology (Janakira, [0023] – Result system can generate a response to this multi-order text query by accessing a first contextual data source corresponding to scheduled meetings for the user account and a second contextual data source corresponding to a user account ontology, and by further generating a result (e.g., a determination) that combines data from both contextual data sources to indicate a date and time);
Janakira doesn’t explicitly teach annotations, ontological types, or suggestions to the user for user confirmation; however, Mielke teaches:
the large language model system identifying skill ontological types from the example query expressions, the skill ontological types being related to at least one of input arguments or structured output, the skill ontological types being normalized to the centrally managed ontology, and genericized (Mielke, [0099] – The entity resolution module 212 may tokenize text by text normalization, extract syntax features from text, and extract semantic features from text based on NLP);
the large language model system providing the plurality of annotated skills, over a network connection, to an external computing system (Mielke, [0099] – The processing result may be annotated with entities by an entity tagger. [0110] – Third-party agents may comprise external agents that the assistant system 140 has no control over (e.g., third-party online music application agents, ticket sales agents));
the large language model system receiving context for an investigation (Janakira, [0072] – Specifically, the first context-defining query subcomponent 402 indicates a query type with context relating to the basic details of the user);
the large language model system identifying a context ontological type from the context, using the centrally managed ontology (Mielke, [0099] – The entity resolution module 212 may tokenize text by text normalization, extract syntax features from text, and extract semantic features from text based on NLP);
the large language model providing the context ontological type to the computing system, over the network connection (Mielke, [0049] – The social-networking system 160 may provide users with the ability to take actions on various types of items or objects, supported by the social-networking system);
the large language model system receiving received skills, from the plurality of annotated skills, over the network, from the computing system, based on correlation between a skill ontological type and the context ontological type (Mielke, [0095] – If the user sets a nickname for a contact on one device, all devices may synchronize and get that nickname based on the AUM 354. In particular embodiments, the AUM 354 may first prepare events, user state, reminder, and trigger state for storing in a data store. Memory node identifiers (ID) may be created to store entry objects in the AUM 354, where an entry may be some piece of information about the user (e.g., photo, reminder, etc.) As an example and not by way of limitation, the first few bits of the memory node ID may indicate that this is a memory node ID type, the next bits may be the user ID, and the next bits may be the time of creation. [0096] – For contextual entities, the entity resolution module 212 may perform coreference based on information from the context engine 220 to resolve the references to entities in the context, such as “him”, “her”, “the first one”, or “the last one”. [0253] – Once we have the action graph ontology consisting of entities with aliases and actions, we may easily synthesize utterances examples, wherein this is interpreted as correlating skill and context ontological types); and
as a result, the large language model system, using the trained model, producing and providing a message of a suggested skill for the investigation (Mielke, [0077] – These actions may include providing information or suggestions to the user. In particular embodiments, the actions may interact with agents 228a/b, users, and/or the assistant system 140 itself. These actions may comprise actions including one or more of a slot request, a confirmation, a disambiguation, or an agent execution. [0116] – Results may be forwarded to the arbitrator).
It would have been obvious for one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify Janakira’s invention in view of Mielke in order to include annotations, ontological types, and suggestions to the user for user confirmation. This is advantageous because annotations allow for classification of particular items within ontologies, allow the system to learn preferences from users' historical behavior and accurately suggest interactions that users may value, and generate highly predictive proactive suggestions based on micro-context understanding (Mielke, paragraph [0088]).
Janakira as modified doesn’t teach that specific instances of ontological types are replaced with genericized, non-specific ontological types; however, Lewis teaches:
the large language model system generating a plurality of annotated skills, the annotated skills in the plurality of annotated skills being genericized versions of the example query expressions, wherein ontological entities comprising specific instances of ontological types having applied values in the example query expressions are replaced with corresponding genericized ontological types normalized to the centrally managed ontology (Lewis, [0136] – These ML model(s) may be replaced by an ML model configured for generating a set of entities and relationships thereto based on the expanded search query and a corpus of text 40… The ML model may be configured for predicting and/or identifying from the corpus of text a set of entity pairs and relationships associated with a set of entities associated with the search query, each predicted/identified entity pair comprising an entity of a first type and an entity of a second type having an associated relationship between identified from the corpus of text 402),
such that the annotated skills are reusable across a plurality of investigations by application of the genericized ontological types to investigation contexts different from those of the example query expressions (Lewis, [0085] – An ML model may be configured to expand the search query by genericising and/or specificising the entities, entity concepts, terms of the search query and using these for expanding the search query. For example, the ML model may be generated from an ML technique by specific training data instances or labelled training data items from a training dataset for, by way of example only but not limited to, biological entities and/or relationships thereto, wherein expanding the search is interpreted as reusing the types across investigations);
It would have been obvious for one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify Janakira’s invention as modified in view of Lewis in order to generalize types within an ontology. This is advantageous because the set of entity pairs and relationships may be used for, without limitation, updating and/or building knowledge graphs (Lewis, paragraph [0136]).
As per claim 8, Janakira as modified teaches:
The method of claim 7, wherein receiving context for the investigation (Janakira, [0096] – Utilizing the internal context engine to perform an act 800 of content aggregation. For instance, the multi-order query result system 102 utilizes the internal context engine to aggregate content from multiple data sources);
identifying the context ontological type from the context (Janakira, [0100] – Utilizes the expert identification engine to aggregate content from multiple data sources. In addition to aggregating content, the multi-order query result system 102 also performs an act 902 of extraction. In particular, the act 902 includes an act 904 of the multi-order query result system 102 determining experts associated with certain topics. Moreover, the multi-order query result system 102 performs an act 906 of storing the extracted expert identification within data sources);
receiving received skills (Mielke, [0105] – A goal may be represented by an identifier (e.g., string) with one or more named arguments, which parameterize the goal); and
providing a message of a suggested skill for the investigation are performed recursively (Mielke, [0104] – A slot resolution component may then recursively resolve the slots in the update operators with resolution providers including the knowledge graph and domain agents. [0116] – Results may be forwarded to the arbitrator).
As per claim 9, Janakira as modified teaches:
The method of claim 7, further comprising receiving a schema, and wherein generating a plurality of annotated skills is performed using the schema (Mielke, [0124] – One category may be a basic task schema which comprises the basic identification information such as ID, name, and the schema of the input arguments).
As per claim 10, Janakira as modified teaches:
The method of claim 7, the large language model system receiving an investigation goal, and wherein providing the message of the suggested skill for the investigation is performed using the investigation goal (Janakira, [0005] – The first context-defining query subcomponent indicates a first contextual data source for responding to the first context-defining query subcomponent. Further, the disclosed systems provide or transmit the first context-defining query subcomponent to a large language model for domain-specific computer code pertaining to the first context-defining query subcomponent. The disclosed systems can further execute the generated computer code for the first context-defining query subcomponent to access the indicated first contextual data source for generating a first result to the first context-defining query subcomponent, wherein to access the contextual data source is interpreted as the investigation goal).
As per claim 11, Janakira as modified teaches:
The method of claim 7, wherein at least one of the plurality of example query expressions is configured to generate a log (Mielke, [0099] – The user profile data may also include user interests and preferences on a plurality of topics, aggregated through conversations on news feed, search logs, messaging platforms, etc.).
As per claim 12, Janakira as modified teaches:
The method of claim 7, wherein at least one of the plurality of example query expressions is configured to generate a table (Mielke, [0243] – Table 10 shows all question/answer pairs for which the calibrator believes the answers are more likely right than wrong).
As per claim 14, Janakira as modified teaches:
The method of claim 7, wherein at least one of the plurality of example query expressions is configured to invoke an API skill (Janakira, [0083] – Specifically, the API 502 includes a prompt API layer for real-time data extraction, pre-processing, model selection, and formatting of prompts for further providing to the intelligence layer).
As per claim 15, Janakira as modified teaches:
The method of claim 7, wherein providing a message of the suggested skill for the investigation comprises providing a message of a combined query comprising a plurality of skills invoked together (Janakira, [0027] – The multi-order query result system further generates an aggregated result to the multi-order text query by combining (e.g., compiling or summarizing) component-specific results generated via each computer code segment).
As per claim 16, Janakira as modified teaches:
The method of claim 7, wherein identifying the skill ontological types or the context ontological type comprises adapting native ontological types to normalize the skill ontological types to the centrally managed ontology (Mielke, [0098] – The entity resolution module 212 may first expand names associated with a user into their respective normalized text forms as phonetic consonant representations which may be phonetically transcribed using a double metaphone algorithm. [0099] – The entity resolution module 212 may tokenize text by text normalization, extract syntax features from text, and extract semantic features from text based on NLP. [0121] – The entity resolution module 212 may use different techniques to resolve the entities, including accessing user memory from the assistant user memory (AUM) 354. In particular embodiments, the AUM 354 may comprise user episodic memories helpful for resolving the entities by the entity resolution module 212. The AUM 354 may be the central place for storing, retrieving, indexing, and searching over user data).
As per claim 17, Janakira as modified teaches:
The method of claim 7, further comprising performing a shortest path analysis, and wherein providing a message of the suggested skill for the investigation comprises providing a message of a skill identified in the shortest path analysis (Mielke, [0326] – The degree of separation between two objects represented by two nodes, respectively, is a count of edges in a shortest path connecting the two nodes in the social graph. [0116] – Results may be forwarded to the arbitrator).
Claims 18 and 20 are directed to a system performing steps recited in claims 1 and 3 with substantially the same limitations. Therefore, the rejections made to claims 1 and 3 are applied to claims 18 and 20.
Claims 2-5, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Janakira in view of Mielke in view of Lewis and further in view of Tonkin et al. US 20200372057 A1 (hereinafter referred to as “Tonkin”).
As per claim 2, Janakira as modified teaches:
The method of claim 1, further comprising:
providing context to the large language model system (Janakira, [0022] – Utilizing a context orchestration engine integrated with a large language model);
receiving from the large language model system a context ontological type (Janakira, [0072] – Specifically, the first context-defining query subcomponent 402 indicates a query type with context relating to the basic details of the user),
the context ontological type being normalized to the centrally managed ontology (Mielke, [0099] – The entity resolution module 212 may tokenize text by text normalization, extract syntax features from text, and extract semantic features from text based on NLP);
Janakira as modified doesn’t explicitly teach pruning the ontologically typed graph; however, Tonkin teaches:
using the context ontological type (Tonkin, [0120] – An aligner module that determines alignment between ontology terms [of] different ontologies), pruning the ontologically typed graph to store a pruned graph (Tonkin, [0120] – A pruner module that determines a group of ontology terms within at least one ontology at least in part using relationships between the ontology terms and a semantic matcher module that identifies ontology term meanings. [0142] – The selected ontology terms are used by the pruner module to prune either the source and/or target ontology).
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to modify Janakira’s invention as modified in view of Tonkin in order to prune the ontology graph; this is advantageous because this allows the user to select only those parts of the ontology that are of interest, with the processing system then selecting additional ontology terms required to maintain relationships between the selected ontology (Tonkin, paragraph [0142]).
As per claim 3, Janakira as modified with Tonkin teaches:
The method of claim 2, wherein providing context to the large language model system comprises providing an initial context and investigation goal to the large language model system (Janakira, [0005] – The first context-defining query subcomponent indicates a first contextual data source for responding to the first context-defining query subcomponent. Further, the disclosed systems provide or transmit the first context-defining query subcomponent to a large language model for domain-specific computer code pertaining to the first context-defining query subcomponent. The disclosed systems can further execute the generated computer code for the first context-defining query subcomponent to access the indicated first contextual data source for generating a first result to the first context-defining query subcomponent, wherein to access the contextual data source is interpreted as the investigation goal).
As per claim 4, Janakira as modified with Tonkin teaches:
The method of claim 2, wherein providing at least a portion of the plurality of annotated skills in the ontologically typed graph is performed by providing the pruned graph (Tonkin, [0492] – It is initially loaded by parsing an ontology and obtaining the classes, their annotations, class structure and any ‘part-of’ Object properties), and
wherein the investigation skill to be invoked is from the pruned graph (Tonkin, [0222] – Pruning process can end with the selected and related ontology terms identified being used [to] define the pruned ontology at step 925, which can be stored as a pruned ontology or pruned index. [0260] – The Pruner module takes an ontology and allows a user to specify which classes, data properties, object properties and axioms they wish to retain. Using those retained the Pruner module checks to see that the relational and axiomatic integrity defined in the ontology is maintained).
As per claim 5, Janakira as modified with Tonkin teaches:
The method of claim 2, further comprising:
invoking a skill (Tonkin, [0324] – By selecting the ‘Search’ option on a Shares Class screen a query is issued to return all the data properties for that class but only those owned by John Doe. The filter has been transformed by the generated application 1503 into a SPARQL or functionally equivalent query which can be executed against the data stored in the database); and
wherein providing context to the large language model system comprises providing a context created as a result of invoking the skill (Tonkin, [0347] – The indexer module automatically creates a set of indexes of the terms used in a collection of one or more ontologies to assist a user to browse an ontology and to expedite the querying of data defined by an ontology. These indexes are used by the other modules to assist in the alignment, pruning and browsing of ontologies. [0539] – The database can be updated by the user at any time to add new contexts).
Claim 19 is directed to a method performing steps recited in claim 2 with substantially the same limitations. Therefore, the rejection made to claim 2 is applied to claim 19.
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Janakira in view of Mielke in view of Lewis and further in view of Portisch et al. US 20220237185 A1 (hereinafter referred to as “Portisch”).
As per claim 13, Janakira as modified doesn’t explicitly teach a database view; however, Portisch teaches:
The method of claim 7, wherein at least one of the plurality of example query expressions is configured to generate a database view (Portisch, [0022] – The source table 112 can represent a database view defined by a query of the database 110. For example, a database view can contain a subset of data retrieved from one database table using a query command, or combine data retrieved from two or more database tables (e.g., using joins)).
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to modify Janakira’s invention as modified in view of Portisch in order to generate a database view; this is advantageous because it allows the system to enrich data in a simplified format with table organization (Portisch, paragraph [0022]).
Response to Arguments
Applicant’s arguments with respect to the claims have been fully considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. The previously cited art of Foody has been replaced with Lewis et al.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Caufield et al. "Structured Prompt Interrogation and Recursive Extraction of Semantics (SPIRES): A Method for Populating Knowledge Bases Using Zero-Shot Learning", April 5, 2023, pgs. 1-13.
Panineerkandy et al. US 20210349925 A1 teaches retrieving results and responses with context based exclusion criteria using ontology graph pruning ([0043]).
Mittal et al. US 20180276273 A1 teaches generating one or more domain-driven interpretations of a natural language dialogue query provided by a user via utilization of a web ontology language; determining multiple structured base queries, from among a stored collection of structured queries, that correspond to the natural language dialogue query, in view of the one or more generated domain-driven interpretations (Abstract).
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Matthew Ellis whose telephone number is (571)270-3443. The examiner can normally be reached on Monday-Friday 8AM-5PM.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Neveen Abel-Jalil can be reached on (571)270-0474. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
March 23, 2026
/MATTHEW J ELLIS/Primary Examiner, Art Unit 2152