Prosecution Insights
Last updated: April 19, 2026
Application No. 18/759,767

TRAINED MULTI-DOMAIN LANGUAGE MODEL FOR CONTENT MODERATION OF A PRIMARY LANGUAGE MODEL

Status: Non-Final Office Action (§101, §103)
Filed: Jun 28, 2024
Examiner: MASTERS, KRISTEN MICHELLE
Art Unit: 2659
Tech Center: 2600 (Communications)
Assignee: Intuit Inc.
OA Round: 1 (Non-Final)

Predictions:
Grant Probability: 62% (Moderate)
Expected OA Rounds: 1-2
Expected Time to Grant: 3y 2m
Grant Probability With Interview: 87%

Examiner Intelligence

Career Allow Rate: 62% (25 granted / 40 resolved; +0.5% vs TC avg)
Interview Lift: +24.7% allowance rate for resolved cases with interview (strong)
Avg Prosecution: 3y 2m
Currently Pending: 36
Total Applications: 76 (across all art units)

Statute-Specific Performance

§101: 35.2% (-4.8% vs TC avg)
§103: 46.9% (+6.9% vs TC avg)
§102: 8.0% (-32.0% vs TC avg)
§112: 7.1% (-32.9% vs TC avg)

Tech Center averages are estimates. Based on career data from 40 resolved cases.
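The headline examiner metrics above can be reproduced from the raw counts with simple arithmetic. A minimal sketch; the 62.3% without-interview baseline is inferred from the stated +24.7% lift and 87% with-interview rate (the dashboard does not show it directly), so treat it as an assumption:

```python
# Reproducing the dashboard's examiner metrics from raw counts.
# The 0.623 without-interview baseline is an inferred assumption,
# backed out from the stated +24.7% lift and 87% with-interview rate.

granted, resolved = 25, 40

allow_rate = granted / resolved                   # career allow rate
with_interview = 0.87                             # allowance rate with an interview
without_interview = 0.623                         # inferred baseline (assumption)
interview_lift = with_interview - without_interview

print(f"Career allow rate: {allow_rate:.0%}")     # 62%
print(f"Interview lift:  +{interview_lift:.1%}")  # +24.7%
```

The +0.5%-vs-TC-average comparisons follow the same pattern: each dashboard delta is the examiner's rate minus the corresponding Tech Center estimate.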

Office Action

Rejections: §101, §103
Detailed Action

This communication is in response to the Application filed on 6/28/2024. Claims 1-20 are pending and have been examined. Claims 1-20 are rejected. Claims 1, 14, and 20 are independent and are method, system, and method claims, respectively. Apparent priority: 6/28/2024.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

The independent claims are directed to statutory categories: Claim 1 is a method claim and is directed to the process category of patentable subject matter. Claim 14 is a system claim and is directed to the machine category of patentable subject matter. Claim 20 is a method claim and is directed to the process category of patentable subject matter.

Regarding independent Claim 1, Claim 1 recites, "1. A method comprising: receiving a query for a primary language model; [This relates to a human receiving a query using auditory systems.] applying a server controller to the query to generate an inference prompt and to identify a query domain; [This relates to a human generating an inference prompt using pen and paper and identifying a query domain using natural language processing and logic in the human mind.]
applying a trained multi-domain language model to the inference prompt according to the inference prompt and the query domain to generate an output decision; and [This relates to a human generating an output decision using natural language processing and logic in the human mind.] routing the query to a routing process according to the output decision." [This relates to a human routing a query using logic and reasoning.]

Regarding independent Claim 14, Claim 14 is a system claim with limitations similar to that of claim 1 and is rejected under the same rationale.

Regarding independent Claim 20, Claim 20 recites, "20. A method comprising: receiving a query for a primary language model; [This relates to a human receiving a query using auditory systems.] applying a server controller to the query to generate an inference prompt and to identify a query domain; [This relates to a human generating an inference prompt using pen and paper and identifying a query domain using natural language processing and logic in the human mind.] selecting a selected set of domain adapter layers from among a set of domain general adapter layers and a plurality of sets of domain specific adapter layers of a trained multi-domain language model, [This relates to a human selecting layers using logic and reasoning.] wherein the trained multi-domain language model further comprises a set of base layers separate from the set of domain general adapter layers and the plurality of sets of domain specific adapter layers; [This relates to a sequence of data transformations and mathematical computations.] applying the trained multi-domain language model to the query according to the inference prompt and the query domain to generate an output decision, [This relates to a human generating a decision in the human mind.] wherein applying the trained multi-domain language model further comprises: applying the inference prompt to the set of base layers, the set of domain general adapter layers, and the plurality of sets of domain specific adapter layers, multiplying, by zero, outputs of the set of domain general adapter layers and the plurality of sets of domain specific adapter layers, other than the selected set of domain adapter layers, [This relates to a sequence of data transformations and mathematical computations applied to a prompt.] combining, into a combined output, a selected output of the selected set of domain adapter layers with a base output of the set of base layers, and [This relates to a human combining outputs of a selected output of the selected set of domain adapter layers with a base output of the set of base layers using pen and paper.] generating structured text, containing a content moderation prediction and the output decision, based on the combined output; [This relates to a human generating structured text using pen and paper.] and routing the query to a routing process according to the output decision, [This relates to a human routing a query using logic and reasoning.] wherein routing further comprises blocking or permitting the query from reaching the primary language model according to the structured text." [This relates to a human blocking or permitting a query using logic and reasoning according to the structured text.]

The dependent claims do not include additional limitations that could integrate the abstract idea into a practical application or cause the claims as a whole to amount to significantly more than the underlying abstract idea.

This judicial exception is not integrated into a practical application. In particular, claims 1, 14, and 20 recite the additional elements of a "processor," a "data repository," and a "server controller." For example, in [0008]: "One or more embodiments also provide for a system.
The system includes a processor and a data repository in communication with the processor. The data repository stores a query for a primary language model. The data repository also stores an inference prompt. The data repository also stores a query domain. The data repository also stores an output decision. The system also includes a server controller which, when executed by the processor receives the query and generates the inference prompt and identifies the query domain. The system also includes a trained multi-domain language model which, when executed by the processor, generates the output decision. The system also includes a routing process which, when executed by the processor, routes the query according to the output decision." And [0026]: "The system shown in FIG. 1 includes a data repository (100). The data repository (100) is a type of storage unit or device (e.g., a file system, database, data structure, or any other storage mechanism) for storing data. The data repository (100) may include multiple different, potentially heterogeneous, storage units and/or devices…" "A prompt may include a reference to the data to be acted upon (e.g., point to a data repository or non-transitory computer readable storage medium where data is stored)…."

Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea.

The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract idea into a practical application, the additional elements of a processor, a data repository, and a server controller are noted as generic computer components. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept.
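The multiply-by-zero adapter selection that claims 3 and 20 recite (apply the prompt to the base layers plus every adapter set, zero out the non-selected adapter outputs, and combine the surviving adapter output with the base output) can be sketched in a few lines of plain Python. The shapes, weight values, and helper names below are illustrative assumptions; the claims do not specify them:

```python
def matvec(w, x):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in w]

def adapter_gated_forward(prompt_vec, base_w, adapter_ws, selected):
    """Apply the base layers plus every adapter set, multiply all
    non-selected adapter outputs by zero, and combine the survivor
    with the base output (the mechanism recited in claims 3 and 20)."""
    combined = matvec(base_w, prompt_vec)          # base output
    for i, w in enumerate(adapter_ws):
        gate = 1.0 if i == selected else 0.0       # multiply-by-zero gating
        adapter_out = matvec(w, prompt_vec)        # every adapter set still runs
        combined = [c + gate * a for c, a in zip(combined, adapter_out)]
    return combined

# Tiny illustrative weights: 2-d model, one general + two domain adapter sets.
base_w = [[1.0, 0.0], [0.0, 1.0]]                  # identity base layers
adapter_ws = [
    [[0.1, 0.0], [0.0, 0.1]],                      # domain-general set
    [[2.0, 0.0], [0.0, 2.0]],                      # domain A (selected)
    [[5.0, 0.0], [0.0, 5.0]],                      # domain B
]
prompt_vec = [1.0, -1.0]

out = adapter_gated_forward(prompt_vec, base_w, adapter_ws, selected=1)
print(out)  # [3.0, -3.0]: base (identity) plus only the selected 2x adapter
```

The zero gate makes the arithmetic equivalent to running only the base layers and the selected adapter set, which is the structural point the claim language turns on.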
Further, the additional limitations in the claims noted above are directed to insignificant extra-solution activity. The claims are not patent eligible.

As to Claim 2, Claim 2 recites: 2. The method of claim 1, wherein the trained multi-domain language model comprises a set of base layers, a set of domain general adapter layers, and a plurality of sets of domain specific adapter layers, and wherein the method further comprises: selecting, prior to applying the trained multi-domain language model to the inference prompt, a selected set of domain adapter layers from among the set of domain general adapter layers and the plurality of sets of domain specific adapter layers. [This relates to a sequence of data transformations and mathematical computations.] No additional limitations present.

As to Claim 3, Claim 3 recites: 3. The method of claim 2, wherein applying the trained multi-domain language model further comprises: applying the inference prompt to the set of base layers, the set of domain general adapter layers, and the plurality of sets of domain specific adapter layers, multiplying, by zero, outputs of the set of domain general adapter layers and the plurality of sets of domain specific adapter layers, other than the selected set of domain adapter layers, combining, into a combined output, a selected output of the selected set of domain adapter layers with a base output of the set of base layers, wherein the combined output comprises generated text containing a content moderation prediction and the output decision, and decoding the combined output to generate decoded output, wherein routing comprises blocking or permitting the query according to the decoded output. [This relates to a sequence of data transformations and mathematical computations.] No additional limitations present.

As to Claim 4, Claim 4 recites: 4. The method of claim 1, wherein the routing process comprises: blocking, responsive to the output decision comprising a block decision, the query from the primary language model. [This relates to a human blocking a query using voice or pen and paper.] No additional limitations present.

As to Claim 5, Claim 5 recites: 5. The method of claim 1, wherein the routing process comprises: blocking, responsive to the output decision comprising a block decision, the query from the primary language model, and transmitting an error message to a user device from which the query was received. [This relates to a human blocking a query using voice or pen and paper.] A device is noted as an additional limitation.

As to Claim 6, Claim 6 recites: 6. The method of claim 1, wherein the routing process comprises: transmitting, responsive to the output decision comprising a pass decision, the query to the primary language model. [This relates to a human transmitting a query by speech or by pressing a button.] No additional limitations present.

As to Claim 7, Claim 7 recites: 7. The method of claim 1, wherein the routing process comprises: transmitting, responsive to the output decision comprising a pass decision, the query to the primary language model, [This relates to a human transmitting a query by speech or by pressing a button.] applying the primary language model to the query to generate a primary language model output, and transmitting the primary language model output to a user device. [This relates to applying a language model to a query.] A device is noted as an additional limitation.

As to Claim 8, Claim 8 recites: 8. The method of claim 1, wherein the routing process comprises: modifying the query to generate a modified query, and transmitting the modified query to the primary language model. [This relates to a human modifying a query using natural language understanding and pen and paper.] No additional limitations present.

As to Claim 9, Claim 9 recites: 9. The method of claim 1, wherein applying the server controller to the query to generate the inference prompt comprises: retrieving a general inference prompt, and using the general inference prompt as the query. [This relates to a human retrieving a general inference prompt, and using the general inference prompt as the query, using visual processing or pen and paper.] A server controller is noted as an additional limitation.

As to Claim 10, Claim 10 recites: 10. The method of claim 1, wherein applying the server controller to the query to generate the inference prompt comprises: retrieving a general inference prompt, [This relates to a human retrieving a prompt using auditory processing or pen and paper.] selecting a selected domain for the query, [This relates to a human selecting a domain using pen and paper.] retrieving a domain specific prompt according to the selected domain, [This relates to a human retrieving a domain prompt using vision or pen and paper.] combining the general inference prompt and the domain specific prompt into a combined prompt, and using the combined prompt as the inference prompt. [This relates to a human combining prompts using pen and paper.] A server controller is noted as an additional limitation.

As to Claim 11, Claim 11 recites: 11. The method of claim 1, wherein applying the server controller to the query to identify the query domain comprises: applying the query to the trained multi-domain language model, and [This relates to a human applying a query to a model in the human mind.] receiving, as an additional output of the trained multi-domain language model, the query domain. [This relates to a human receiving a query using vision or pen and paper.] A server controller is noted as an additional limitation.

As to Claim 12, Claim 12 recites: 12. The method of claim 1, wherein applying the server controller to the query to identify the query domain comprises: identifying an application identity associated with the query, and [This relates to a human identifying an application identity associated with the query using logic and natural language understanding in the human mind.] assigning the query domain according to the application identity. [This relates to a human assigning the query domain using pen and paper.] A server controller is noted as an additional limitation.

As to Claim 13, Claim 13 recites: 13. The method of claim 1, wherein the trained multi-domain language model comprises a set of base layers having a plurality of pretrained weights, and further comprises a plurality of sets of domain specific adapter layers including a selected set of domain adapter layers selected according to the query domain, and wherein the method further comprises: passing the query through the set of base layers to generate a base output, passing the query through the plurality of sets of domain specific adapter layers to generate a plurality of domain adapter layer outputs, discarding, other than a selected output of the selected set of domain adapter layers, each of the plurality of domain adapter layer outputs, wherein the selected output is retained, and combining the base output and the selected output to generate the output decision. [This relates to a sequence of data transformations and mathematical computations.] No additional limitations present.

As to Claim 15, Claim 15 recites: 15. The system of claim 14, further comprising: the primary language model. [This relates to a human augmenting the stack by including additional context data using pen and paper.] No additional limitations present.

As to Claim 16, Claim 16 is a system claim with limitations similar to that of claim 2 and is rejected under the same rationale.

As to Claim 17, Claim 17 recites: 17.
The system of claim 16, wherein the trained multi-domain language model further: applies the inference prompt to the set of base layers, the set of domain general adapter layers, and the plurality of sets of domain specific adapter layers, multiplies, by zero, outputs of the set of domain general adapter layers and the plurality of sets of domain specific adapter layers, other than the selected set of domain adapter layers, combines, into a combined output, a selected output of the selected set of domain adapter layers with a base output of the set of base layers, and [This relates to a sequence of data transformations and mathematical computations.] generates structured text, containing a content moderation prediction and the output decision, based on the combined output, wherein routing comprises blocking or permitting the query according to the structured text. [This relates to a human generating text using pen and paper.] No additional limitations present.

As to Claim 18, Claim 18 is a system claim with limitations similar to that of claim 4 and is rejected under the same rationale.

As to Claim 19, Claim 19 is a system claim with limitations similar to that of claim 7 and is rejected under the same rationale.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2, 4-12, 14-16, 18, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Baruch (U.S. Patent No. US 12572737 B2) in view of Biadsy (U.S. Patent Application Publication No. US 2018/0053502 A1).

Regarding Claim 1, Baruch teaches 1. A method comprising: receiving a query for a primary language model; (See Baruch (3:58-65): "(23) A generative language model is a particular type of generative model that generates new text in response to model input. The model input includes a task description, also referred to as a prompt. The task description can include instructions and/or examples of digital content. A task description can be in the form of natural language text, such as a question or a statement, and can include non-text forms of content, such as digital imagery and/or digital audio.")

applying a server controller to the query to generate an inference prompt (See Baruch (9:50-65): "(48) The AI subsystem 108 receives input signals 106 from potentially a variety of different data sources including user interfaces, databases and other types of data stores, including online, real-time, and/or offline data sources. In the example of FIG. 1, input signals 106 include user input signals 102, user profile signals 104, and graph-based signals 110, 112. In the illustrative example of FIG.
1, user input signals 102 are received via one or more user devices or systems, such as portable user devices like smartphones, wearable devices, tablet computers, or laptops; user profile signals 104 are received via one or more web servers; and graph-based signals 110, 112 are received via one or more database servers; however, any of the different types of input signals 106 can be received by thought starter generation system 100 via any type of electronic machine, device or system.")

and to identify a query domain; (See Baruch (10:30-45): "(51) Alternatively or in addition, input signals 106 include user profile data 104. Examples of user profile data 104 include user experience, interests, areas of expertise, educational history, job titles, skills, job history, etc. User profile data 104 can be obtained by the thought starter generation system 100 by, for example, querying one or more data stores (examiner interprets domains as 'profile data') that store user profile data for the application software system or user network 134. (52) Input signals 106 alternatively or additionally include data extracted from entity graph 110 and/or knowledge graph 112. The entity graph 110 includes entity data arranged according to a connection graph, e.g., a graph of connections and relationships between users of the user connection network and between users and other entities….")

routing the query to a routing process according to the output decision. (See Baruch (60:11-40): "(287) In some implementations, the generative model 1206 is pre-trained on a large corpus (e.g., millions of training examples) and can be re-trained or fine-tuned for particular applications or domains. Model trainer 1202 creates training data based on the prompt-feedback pairs 1212 and/or output-feedback pairs 1214 received from feedback processor 1210. The training data created by model trainer 1202, e.g., training prompt-output pairs 1204, is used to train or fine tune the generative model 1206 using, for example, supervised machine learning or semi-supervised machine learning. An instance of training data includes ground-truth data for a given prompt-output pair, where the ground-truth data includes, for example, a reward score, a classification, or a label generated by feedback processor 1210 in communication with one or more feedback subsystems such as pre-distribution feedback subsystem 1218 or post-distribution feedback subsystem 1228. In a training or fine tuning mode, the generative model 1206 is applied to the training prompt-output pairs 1204 and one or more model parameters of the generative model 1206 are updated based on the training or fine tuning. Alternatively or in addition, the architecture of the generative model 1206 can be re-engineered based on new instances of training data or based on a new application or domain. In an operational mode, the generative model 1206 generates output in response to prompts. The prompt-output pairs 1208 generated by the generative model 1206 are processed by feedback processor 1210 to create prompt-feedback pairs 1212 and/or output-feedback pairs 1214 when the feedback processor 1210 receives feedback related to the respective prompt-output pairs 1208.")

Baruch does not specifically teach applying a trained multi-domain language model to the inference prompt according to the inference prompt and the query domain to generate an output decision. However, Biadsy does teach this limitation (See Biadsy, [0005]: "A language model may include one or more domain-specific model components corresponding to different domains or types of non-linguistic context data. The language model can also include a baseline model component that can operate independent of non-linguistic context data.
The baseline model component and the one or more domain-specific model components can be used together to determine a score for a language sequence using both linguistic and non-linguistic context information.") (See Biadsy, [0007]: "Domains can represent various different aspects of non-linguistic context. For example, a domain may represent a location (e.g., being located in a particular country, a particular city, or other location), a user characteristic (e.g., that the user is male or female, the user speaks a particular dialect, etc.), an application running on a device (e.g., a maps application, an email application, etc.), a time (e.g., a particular day, a time of day, a weekend or weekday, etc.), a device status (e.g., in a vehicle, moving or not moving, etc.), or another aspect of non-linguistic context.")

Baruch and Biadsy are in the same field of endeavor of signal processing; therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Baruch to incorporate applying a trained multi-domain language model to the inference prompt according to the inference prompt and the query domain to generate an output decision, as taught by Biadsy. This allows domain-specific components to have a meaningful influence when a matching context is present, as recognized by Biadsy [0009].

Regarding independent Claim 14, Claim 14 is a system claim with limitations similar to that of claim 1 and is rejected under the same rationale.

As to Claim 2, Baruch in view of Biadsy teaches the method of claim 1. Furthermore, Baruch teaches wherein the trained multi-domain language model comprises a set of base layers, a set of domain general adapter layers, (see Baruch (3:58-4:5): "(23) A generative language model is a particular type of generative model that generates new text in response to model input. The model input includes a task description, also referred to as a prompt.
The task description can include instructions and/or examples of digital content. A task description can be in the form of natural language text, such as a question or a statement, and can include non-text forms of content, such as digital imagery and/or digital audio. In some implementations, an input layer of the generative language model converts the task description to an embedding or a set of embeddings. In other implementations, the embedding or embeddings are generated based on the task description by a pre-processor, and then the embeddings are input to the generative language model.")

and a plurality of sets of domain specific adapter layers, (see Baruch (5:62-6:4): "(32) Some embodiments configure large language generative AI models to machine-generate 'thought starters' based on a minimal amount of user input (e.g., a 'seed'). In some embodiments, the seed is not explicitly input by the user but rather derived by an intermediate layer of artificial intelligence (AI) models. For example, the intermediate layer of AI models generates a set of AI-derived signals based on a set of input signals, where the input signals represent the creating user's personal interests, style, and preferences.")

and wherein the method further comprises: selecting, prior to applying the trained multi-domain language model to the inference prompt, a selected set of domain adapter layers from among the set of domain general adapter layers and the plurality of sets of domain specific adapter layers. (see Baruch (7:65-8:28): "(40) These components of the disclosed thought starter generation system are configured in a way that makes personalized thought starter generation scalable. For example, previous attempts at generating thought starters have not been successful because they were not scalable due to the amount of human labor required to manually engineer the thought starter content. In contrast, the disclosed technologies include an arrangement of AI-based components that includes an intermediary AI layer that feeds output to a prompt generation layer, which supplies the personalized prompts used by the generative AI layer to generate the thought starters. The arrangement is scalable because, for example, the intermediary AI layer can interpret the raw input signals and filter out signals that are not likely to be useful for generating personalized prompts. Also, the prompt generation layer can generate prompts that instruct the generative AI layer to generate multiple different or alternative thought starters simultaneously. When multiple different thought starters are machine-generated simultaneously for each user, the number of available thought starters scales quickly. These thought starters can be stored in a thought starter library for future use, reuse, or modification and reuse. For example, when a group of thought starters is machine-generated for a particular user, the currently unused thought starters can be stored in a real-time data store or nearline data store, for example, so that they are readily available to be suggested in real time in response to a subsequent online user interaction.")

Regarding Claim 4, Baruch in view of Biadsy teaches the method of claim 1. Furthermore, Baruch teaches wherein the routing process comprises: blocking, responsive to the output decision comprising a block decision, the query from the primary language model. (see Baruch (16:13-54): "(72) While not specifically shown in FIG. 1, thought starters 126 that are not directly routed from the generative AI subsystem 124 to the content generation assistant 128 are sent to one or more review or filtering mechanisms, such as spam filters or content moderation systems. For instance, one or more filtering mechanisms can be implemented as a component of generative AI subsystem 124.
Examples of filters that can be applied to a thought starter 126 include discriminative machine learning models that have been trained to label content items based on a probabilistic or statistical likelihood of the content items containing particular types of content (e.g., spam filters, inappropriate content filters, etc.) and discriminative models that have been trained to score content items based on a mathematical similarity to one or more particular scoring criteria (e.g., relevance filters, ranking models, etc.). …Thus, a generative model can be used as an alternative to a discriminative model or in addition to a discriminative model, in some implementations. For example, by configuring a prompt with instructions to exclude certain words or phrases, a generative language model can be used to filter out, for instance, certain topics that are inappropriate or not")

Regarding Claim 5, Baruch in view of Biadsy teaches the method of claim 1. Furthermore, Baruch teaches wherein the routing process comprises: blocking, responsive to the output decision comprising a block decision, the query from the primary language model, (see Baruch (16:13-54), reproduced above for Claim 4)

and transmitting an error message to a user device from which the query was received. (See Baruch Figure 3J; see also Baruch (33:5-25): "(147) Selection of the auto magic enhance option 393 causes the thought starter generation system to formulate a new or revised prompt to apply one or more enhancements, such as reformatting, rewording, summarizing or expanding, to the draft post 390, to input the new or revised prompt to the GLM, and to receive output generated by the GLM in response to the new or revised prompt. In FIG. 3J, output of the GLM generated by the GLM in response to selection of the auto magic enhance option 393 is shown in user interface 396. The GLM-output auto-enhancements options produced by the GLM include a suggestion 3100 to make the subpart 398 of the draft post 390 more concise. The user interface 396 shows a revised version of the subpart 398 that includes the suggestion 3100 and shows the previous version of the subpart 398 at box 3102. In response to a user selection of the accept mechanism of user interface 396, the application software system transitions to user interface 3104 of FIG. 3K.")

Regarding Claim 6, Baruch in view of Biadsy teaches the method of claim 1. Furthermore, Baruch teaches wherein the routing process comprises: transmitting, responsive to the output decision comprising a pass decision, the query to the primary language model.
(See Baruch (17:53-67) “(77) If the user creates a new piece of content based on a thought starter 126, the user can cause the new thought starter-based piece of content, e.g., AI-assisted user-generated content 132, to be distributed to other users via the application software system or user network 134. In some implementations, the application software system or user network 134 uses a content distribution service, such as content distribution service 634, described herein with reference to FIG. 6, to determine how to route the user's newly created piece of content through the application software system or user network 134, e.g., to determine whether to place the user's newly created thought starter-based content, e.g., AI-assisted user-generated content 132, in a particular slot of a particular user's news feed or search result set during a particular login session.”)

Regarding Claim 7, Baruch in view of Biadsy teaches the method of claim 1. Furthermore, Baruch teaches wherein the routing process comprises: transmitting, responsive to the output decision comprising a pass decision, the query to the primary language model (See Baruch (17:53-67) “(77)”, quoted in full above regarding claim 6), applying the primary language model to the query to generate a primary language model output, (See Baruch (11:56-12:9) “(57) Alternatively or in addition, AI subsystem 108 generates one or more embeddings for a particular user based on input signals 106. Embedding as used herein may refer to or include a numerical representation of input signals 106, such as a vector or matrix, which is computed using, e.g., a mathematical function, algorithm, or machine learning-based model such as a neural network. For example, given a data set that includes a particular user's historical profile data and activity data, AI subsystem 108 can generate and output a user embedding that holistically represents the interests and/or experiences of that particular user contained in the data set. In some implementations, the numerical member embedding may not be directly added as input to the text prompt used to query the generative model, but can be used in a post-processing system, for example to select the best thought starters for a specific user, if thought starter embeddings are generated in the same embedding space as the user embeddings. Alternatively, or in addition, the member embeddings can be used in a pre-processing system, for example to select the best input signals to use in the prompt for a specific user.”) and transmitting the primary language model output to a user device.
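For orientation, the routing process recited in claims 6 through 8 (a moderation decision that gates whether, and in what form, a query reaches the primary language model) can be sketched in Python. This is a hypothetical illustration of the claimed steps only; every name in it (`route_query`, `Decision`, the message text) is invented and is not drawn from the application or the cited references.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Decision:
    """Hypothetical output decision from the moderation controller."""
    outcome: str  # "pass", "modify", or "fail"
    modified_query: Optional[str] = None

def route_query(query: str, decision: Decision,
                primary_model: Callable[[str], str],
                send_to_user: Callable[[str], None]) -> None:
    if decision.outcome == "pass":
        # Pass decision (claims 6-7): transmit the query to the primary
        # language model and transmit its output to the user device.
        send_to_user(primary_model(query))
    elif decision.outcome == "modify":
        # Modify path (claim 8): transmit a modified query instead.
        send_to_user(primary_model(decision.modified_query or query))
    else:
        # Fail decision: transmit an error message to the user device.
        send_to_user("This query cannot be processed.")
```

Under this reading, the independent claim's "routing process" is a three-way branch keyed on the moderation model's output decision.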
(see Baruch, (30:43-51) “(138) In response to user selection of magic post improve mechanism 356, the thought starter generation system communicates a new prompt or a revised version of the original prompt to the generative language model; e.g., a second prompt containing an instruction to the generative language model (GLM) to, e.g., “reformat the GLM's previous output to make the content easier to read.” The re-formatted output of the GLM in response to the second prompt is presented in the user interface 360 of FIG. 3F.”)

Regarding Claim 8, Baruch in view of Biadsy teaches the method of claim 1. Furthermore, Baruch teaches wherein the routing process comprises: modifying the query to generate a modified query, and transmitting the modified query to the primary language model. (see Baruch, (30:43-51) “(138) In response to user selection of magic post improve mechanism 356, the thought starter generation system communicates a new prompt or a revised version of the original prompt to the generative language model; e.g., a second prompt containing an instruction to the generative language model (GLM) to, e.g., “reformat the GLM's previous output to make the content easier to read.” The re-formatted output of the GLM in response to the second prompt is presented in the user interface 360 of FIG. 3F. User interface 360 also includes a post mechanism 366, similar to post mechanism 358, and magic post improve mechanism 364, similar to magic post improve mechanism 356. Thus, as shown by FIG. 3E and FIG. 3F, the generative language model can be invoked by the thought starter generation system multiple times, e.g., iteratively, in order to refine, reformat, expand, or otherwise modify the previous output of the GLM.”)

Regarding Claim 9, Baruch in view of Biadsy teaches the method of claim 1. Furthermore, Baruch teaches wherein applying the server controller to the query to generate the inference prompt comprises: retrieving a general inference prompt, and using the general inference prompt as the query. (see Baruch (6:15-35) “(34) Embodiments configure generative models to personalize the thought starters to each specific creator based on the holistic representation of the creator, based on raw input signals, including real-time input signals, the AI-derived signals, or a combination of raw input signals and AI-derived signals. For example, the AI-derived signals include derived information, such as scores, labels, and predictive data, which are computed by an AI subsystem based on collections of input signals that relate to the creator's experiences, interests, tone, previously-created content, and interaction history. The AI signals and/or input signals are used to formulate a creator-specific version of a prompt, which is input to the generative model. In response to the creator-specific version of the prompt, the generative model outputs a creator-specific thought starter. In some embodiments, the input signals and/or AI-derived signals include information about the creator's broader ecosystem and knowledge marketplace, such as information about the creator's first-degree connections, followers, subscribers, etc. and/or information about currently trending topics and content items.”)

Regarding Claim 10, Baruch in view of Biadsy teaches the method of claim 1. Furthermore, Baruch teaches wherein applying the server controller to the query to generate the inference prompt comprises: retrieving a general inference prompt, selecting a selected domain for the query, (See Baruch (10:30-45) “(51) Alternatively or in addition, input signals 106 include user profile data 104. Examples of user profile data 104 include user experience, interests, areas of expertise, educational history, job titles, skills, job history, etc.
User profile data 104 can be obtained by the thought starter generation system 100 by, for example, querying one or more data stores (examiner interprets domains as “profile data”) that store user profile data for the application software system or user network 134. (52) Input signals 106 alternatively or additionally include data extracted from entity graph 110 and/or knowledge graph 112. The entity graph 110 includes entity data arranged according to a connection graph, e.g., a graph of connections and relationships between users of the user connection network and between users and other entities….”) retrieving a domain specific prompt according to the selected domain, combining the general inference prompt and the domain specific prompt into a combined prompt, and using the combined prompt as the inference prompt. (see Baruch (6:15-35) “(34)”, quoted in full above regarding claim 9)

Regarding Claim 11, Baruch in view of Biadsy teaches the method of claim 1. Baruch does not explicitly teach wherein applying the server controller to the query to identify the query domain comprises: applying the query to the trained multi-domain language model, and receiving, as an additional output of the trained multi-domain language model, the query domain. However, Biadsy does teach this limitation. (See Biadsy, [0005] “A language model may include one or more domain-specific model components corresponding to different domains or types of non-linguistic context data. The language model can also include a baseline model component that can operate independent of non-linguistic context data. The baseline model component and the one or more domain-specific model components can be used together to determine a score for a language sequence using both linguistic and non-linguistic context information.”) (See Biadsy, [0007] “Domains can represent various different aspects of non-linguistic context.
For example, a domain may represent a location (e.g., being located in a particular country, a particular city, or other location), a user characteristic (e.g., that the user is male or female, the user speaks a particular dialect, etc.), an application running on a device (e.g., a maps application, an email application, etc.), a time (e.g., a particular day, a time of day, a weekend or weekday, etc.), a device status (e.g., in a vehicle, moving or not moving, etc.), or another aspect of non-linguistic context.”) Baruch and Biadsy are in the same field of endeavor of signal processing; therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Baruch to incorporate applying the server controller to the query to identify the query domain comprising: applying the query to the trained multi-domain language model, and receiving, as an additional output of the trained multi-domain language model, the query domain, as taught by Biadsy. This allows domain-specific components to have a meaningful influence when a matching context is present, as recognized by Biadsy [0009].

Regarding Claim 12, Baruch in view of Biadsy teaches the method of claim 1. Furthermore, Baruch teaches wherein applying the server controller to the query to identify the query domain comprises: identifying an application identity associated with the query, and assigning the query domain according to the application identity. (see Baruch (60:11-18) “(287) In some implementations, the generative model 1206 is pre-trained on a large corpus (e.g., millions of training examples) and can be re-trained or fine-tuned for particular applications or domains. Model trainer 1202 creates training data based on the prompt-feedback pairs 1212 and/or output-feedback pairs 1214 received from feedback processor 1210. The training data created by model trainer 1202, e.g., training prompt-output pairs 1204, is used to train or fine tune the generative model 1206 using, for example, supervised machine learning or semi-supervised machine learning. An instance of training data includes ground-truth data for a given prompt-output pair, where the ground-truth data includes, for example, a reward score, a classification, or a label generated by feedback processor 1210 in communication with one or more feedback subsystems such as pre-distribution feedback subsystem 1218 or post-distribution feedback subsystem 1228. In a training or fine tuning mode, the generative model 1206 is applied to the training prompt-output pairs 1204 and one or more model parameters of the generative model 1206 are updated based on the training or fine tuning. Alternatively or in addition, the architecture of the generative model 1206 can be re-engineered based on new instances of training data or based on a new application or domain. In an operational mode, the generative model 1206 generates output in response to prompts. The prompt-output pairs 1208 generated by the generative model 1206 are processed by feedback processor 1210 to create prompt-feedback pairs 1212 and/or output-feedback pairs 1214 when the feedback processor 1210 receives feedback related to the respective prompt-output pairs 1208.”)

Regarding Claim 15, Baruch in view of Biadsy teaches the system of claim 14. Furthermore, Baruch teaches this limitation. (see Baruch, (30:43-51) “(138)”, quoted in full above regarding claim 8)

As to Claim 16, Claim 16 is a system claim with limitations similar to those of claim 2 and is rejected under the same rationale. As to Claim 18, Claim 18 is a system claim with limitations similar to those of claim 4 and is rejected under the same rationale. As to Claim 19, Claim 19 is a system claim with limitations similar to those of claim 7 and is rejected under the same rationale.

Allowable Subject Matter

Independent claim 20 is rejected under 35 USC § 101, but would be allowable if rewritten to overcome the rejection under 35 USC § 101. Dependent claims 3, 13 and 17 are rejected under 35 USC § 101, and are further rejected as being dependent upon rejected base claims 1 and 14 under 35 USC § 103, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, and if rewritten to overcome the rejections under 35 USC § 101.
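The inference-prompt assembly recited in claims 9 and 10 (a general prompt used alone, or combined with a domain-specific prompt retrieved for the selected domain) amounts to simple prompt composition. The sketch below is illustrative only: the prompt texts, domain names, and function name are invented, not taken from the application.

```python
# Invented example prompts; the application's actual prompts are not public.
GENERAL_PROMPT = "Evaluate whether the following query is appropriate."
DOMAIN_PROMPTS = {
    "tax": "Apply tax-domain moderation criteria.",
    "payroll": "Apply payroll-domain moderation criteria.",
}

def build_inference_prompt(query: str, domain: str = "") -> str:
    # Claim 9: with no domain selected, the general prompt is used alone.
    parts = [GENERAL_PROMPT]
    # Claim 10: retrieve a domain-specific prompt according to the selected
    # domain and combine it with the general inference prompt.
    if domain in DOMAIN_PROMPTS:
        parts.append(DOMAIN_PROMPTS[domain])
    parts.append(query)
    return "\n".join(parts)
```

On this reading, the claim-10 limitation differs from claim 9 only in the additional retrieve-and-combine step, which is how the rejection maps both claims onto Baruch's prompt-formulation passages.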
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KRISTEN MICHELLE MASTERS, whose telephone number is (703) 756-1274. The examiner can normally be reached M-F, 8:30 AM - 5:00 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Pierre Louis Desir, can be reached at 571-272-7799. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KRISTEN MICHELLE MASTERS/
Examiner, Art Unit 2659

/PIERRE LOUIS DESIR/
Supervisory Patent Examiner, Art Unit 2659

Prosecution Timeline

Jun 28, 2024
Application Filed
Jul 18, 2024
Response after Non-Final Action
Mar 21, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592219
Hearing Device User Communicating With a Wireless Communication Device
2y 5m to grant Granted Mar 31, 2026
Patent 12548569
METHOD AND SYSTEM OF DETECTING AND IMPROVING REAL-TIME MISPRONUNCIATION OF WORDS
2y 5m to grant Granted Feb 10, 2026
Patent 12548564
SYSTEM AND METHOD FOR CONTROLLING A PLURALITY OF DEVICES
2y 5m to grant Granted Feb 10, 2026
Patent 12547894
ENTROPY-BASED ANTI-MODELING FOR MACHINE LEARNING APPLICATIONS
2y 5m to grant Granted Feb 10, 2026
Patent 12547840
MULTI-STAGE PROCESSING FOR LARGE LANGUAGE MODEL TO ANSWER MATH QUESTIONS MORE ACCURATELY
2y 5m to grant Granted Feb 10, 2026
Based on the 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
62%
Grant Probability
87%
With Interview (+24.7%)
3y 2m
Median Time to Grant
Low
PTA Risk
Based on 40 resolved cases by this examiner. Grant probability derived from career allow rate.
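The projection figures above follow directly from the examiner statistics if the interview lift is treated as additive in percentage points. That additivity is an assumption of this sketch, not a formula documented by the page:

```python
# Derivation of the displayed percentages from the examiner's career data.
granted, resolved = 25, 40        # 25 granted of 40 resolved cases
base_rate = granted / resolved    # 0.625, displayed as 62% grant probability
interview_lift = 0.247            # +24.7 percentage points with interview
with_interview = base_rate + interview_lift  # 0.872, displayed as 87%
```

The 62% and 87% figures on the page are consistent with this additive reading (62.5% truncated, 87.2% rounded).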
