Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of the Application
This non-final office action is in response to the communication filed on 4/8/2025. Claims 1-20 are currently pending and have been examined below.
Claim Objections
Claims 12-19 are objected to because of the following informalities: “one or more processors” should be replaced by “the one or more processors” in all instances, beginning at line 4 of claim 12 and continuing through claim 19. Appropriate correction is required.
Claim Rejections – 35 U.S.C. 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Per step 1 of the eligibility analysis set forth in MPEP § 2106, subsection III, the claims are directed to a process, machine, or manufacture.
Per step 2A, Prong One, claim 12 recites specific limitations which fall within at least one of the groupings of abstract ideas enumerated in MPEP 2106.04(a)(2), as follows:
identifying a request to execute an action associated with a first account identifier of a client;
selecting a prompt that corresponds to the action, the prompt structured as text including one or more fields, the prompt identifying a list of compatible actions corresponding to at least one of the client or the first account identifier;
embedding content of the first account identifier into one or more of the fields of the prompt, the content including at least a portion of the text or at least a portion of a metadata;
obtaining a response to the prompt that indicates a recommended action and a second account identifier associated with the recommended action;
validating that the recommended action corresponds to at least one of the compatible actions, and the second account identifier corresponds to the first account identifier; and
executing the recommended action for the first account identifier in response to the validation of the recommended action and the second account identifier.
As noted above, these limitations fall within at least one of the groupings of abstract ideas enumerated in MPEP 2106.04(a)(2). Specifically, these limitations fall within the grouping Certain Methods of Organizing Human Activity (i.e., fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations); and managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions)). That is, the limitations recite recommending actions compatible with user accounts based on user requests. For example, paragraph [0065] of Applicant’s specification recites a number of human-resources-related actions where “[t]he list of compatible actions can include promoting, terminating, hiring, searching, updating, approving payroll, adjusting salary, compensation, activity log, teams, delegated approval, career profile, additional information, documents, accommodations, issuing bonus, approving timesheets, request time off, schedule shifts, conduct performance review, assign training, set goals, enroll in benefits, update benefits, review benefits usage, viewing organization information or a combination thereof.” Recommending actions compatible with user accounts (e.g., recommending promoting an employee or scheduling employee shifts) qualifies as both business relations and managing personal behavior and therefore falls within the Certain Methods of Organizing Human Activity grouping of abstract ideas.

Additionally, the limitations also fall within the Mental Processes grouping of abstract ideas because they cover concepts that can be performed in the human mind, including observation, evaluation, judgment, and opinion. See MPEP 2106.04(a)(2), subsection III. Specifically, a human being can mentally (or with pen and paper) identify a request to execute an action associated with a first account identifier of a client; select a prompt that corresponds to the action; embed content of the first account identifier into the prompt; obtain a response to the prompt that indicates a recommended action and a second account identifier associated with the recommended action; validate that the recommended action corresponds to at least one of the compatible actions and that the second account identifier corresponds to the first account identifier; and execute the recommended action for the first account identifier in response to the validation of the recommended action and the second account identifier. Accordingly, claim 12 recites an abstract idea.
Per step 2A, Prong Two, the Examiner finds that the judicial exception is not integrated into a practical application. Claim 12 recites the additional limitations of:
one or more processors [to perform the steps of the method];
a client system;
provide, to a model trained with machine learning, the prompt embedded with the content;
[obtain] from the model [a response to the prompt].
The additional limitations, when viewed individually and as an ordered combination with the abstract limitations, and pursuant to the broadest reasonable interpretation, do not integrate the abstract idea into a practical application because each of the additional elements is recited at a high level of generality, merely implementing the abstract idea on a computer (i.e., “apply it”) or generally linking the use of the judicial exception to a particular technological environment. Specifically:
The one or more processors [to perform the steps of the method] are recited at a high level of generality and merely generally link the abstract idea to a particular technological environment or merely utilize a generic computer as a tool to perform the abstract idea. Further, the client system is recited at a high level of generality and is not positively recited. The broadest reasonable interpretation of the claim merely requires that the account identifier is of a client system. At most, the client system only generally links the abstract idea to a particular technological environment (i.e., a generic computer of the client).
With respect to the limitations “provide, to a model trained with machine learning, the prompt embedded with the content” and “[obtain] from the model [a response to the prompt],” Examiner notes that these limitations are recited at a high level of generality. Applicant’s specification paragraph [0005] recites “a model trained with machine learning (e.g., a large language model).” Further, paragraph [0054] recites “The model can include one or more of: neural networks, decision-making models, linear regression models, natural language models, random forests, classification models, generative artificial intelligence models, reinforcement learning models, clustering models, neighbor models, decision trees, probabilistic models, classifier models, any other type and form of models, or a combination thereof.” Claim 12 does not specify what type of model is used or how a particular model is trained beyond specifying at a high level that the model is “trained with machine learning.” The recitation of claim limitations that attempt to cover any solution to an identified problem, with no restriction on how the result is accomplished and no description of the mechanism for accomplishing the result, does not integrate a judicial exception into a practical application or provide significantly more because this type of recitation is equivalent to the words “apply it.” See Electric Power Group, LLC v. Alstom, S.A., 830 F.3d 1350, 1356, 119 USPQ2d 1739, 1743-44 (Fed. Cir. 2016); Intellectual Ventures I v. Symantec, 838 F.3d 1307, 1327, 120 USPQ2d 1353, 1366 (Fed. Cir. 2016); Internet Patents Corp. v. Active Network, Inc., 790 F.3d 1343, 1348, 115 USPQ2d 1414, 1417 (Fed. Cir. 2015). At this level of generality, the recitation of claim limitations that attempt to cover any solution to an identified problem (i.e., training a model to indicate a recommended action in response to a prompt) merely generally links the abstract idea to a technical field/environment, namely a generic computing environment applying generic machine learning.

Further, Examiner notes that Recentive Analytics, Inc. v. Fox Corp. et al., No. 2023-2437, slip op. at 18 (Fed. Cir. Apr. 18, 2025) recently held that claims “that do no more than claim the application of generic machine learning to new data environments, without disclosing improvements to the machine learning models to be applied, are patent ineligible under § 101.” Here, Examiner takes the position that utilizing generic machine learning to train a model to indicate a recommended action in response to a prompt is the mere application of generic machine learning to a new data environment. Because no improvement to the underlying machine learning models is disclosed, this limitation does not integrate the abstract idea into a practical application.
Under step 2B, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements, when considered both individually and as an ordered combination, do not amount to significantly more than the abstract idea. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements are recited at a high level of generality and only generally link the use of the judicial exception to a particular technological environment. Thus, the same analysis applies here under step 2B; i.e., mere instructions to apply an exception in a particular technological environment cannot provide an inventive concept.
Alice Corp. v. CLS Bank International, 573 U.S. 208 (2014), also establishes that the same analysis should be used for all categories of claims (e.g., product and process claims). Therefore, independent system claim 1 and independent non-transitory computer readable medium claim 20 are also rejected as ineligible subject matter under 35 U.S.C. 101 for substantially the same reasons as independent method claim 12. The additional limitations of claim 1 (i.e., one or more processors coupled with memory) and of claim 20 (i.e., a non-transitory computer readable medium and processors) add nothing of substance to the underlying abstract idea. These components merely provide a particular technological environment in which to implement the abstract idea.
Dependent claims 2-11 and 13-19 are rejected on a similar rationale to the claims upon which they depend. Specifically, each of the dependent claims merely further narrows the abstract idea or generally links the abstract idea to a particular technological environment.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 3-12, and 14-20 are rejected under 35 U.S.C. 103 as being unpatentable over US Patent Application Publication Number 20250005260 (“Mansour”) in view of US Patent Application Publication Number 20240013217 (“Schoen”).
Claims 1, 12, and 20
As per claims 1, 12, and 20, Mansour teaches one or more non-transitory computer readable media comprising instructions which, when executed by one or more hardware processors, cause performance of operations, a method, and a system comprising:
one or more processors, coupled with memory, to ([0177] “memory and a processor.”);
identify a request to execute an action associated with a first account identifier of a client system ([0100] “the user making the initial input and request.” And, [0105] “and append significant contextual information describing a user and/or users associated with the user submitting a particular request.” And, [0312] “information that may be used to generate the prompt include user accounts.” And, [0192] “one or more fields that can be populated with data or information before being provided as input to an LLM . . . the token/field/variable ${user} can be replaced with a user identifier corresponding to the user currently operating a client device.”);
select a prompt that corresponds to the action, the prompt structured as text including one or more fields, the prompt identifying a list of compatible actions corresponding to at least one of the client system or the first account identifier ([0183] “to supplement the user input, select a prompt from a database . . . based on the user input, insert the user input into a template prompt.” And, [0301] “creates a prompt including predefined query prompt text having an action-request instruction set.” And, [0065] “insert the user's raw input into a template prompt selected from a set of prompts.” And, [0325] “the predetermined prompt text includes a list of permitted commands.” And, [0255] “actions include . . . a summarize content action, find action item action . . . content length modification action . . . summarize topics, identify action items or tasks, identify decisions, or perform other actions.” And, [0192] “one or more fields that can be populated with data” including “a user identifier corresponding to the user currently operating a client device.”);
embed content of the first account identifier into one or more of the fields of the prompt, the content including at least a portion of the text or at least a portion of a metadata ([0224] “the request/prompt can be passed to a preconditioning and hydration service configured to populate request-contextualizing data (e.g., user ID.” And, [0373] “the prompt also includes context data including data corresponding to a role of a user, historical system usage, or data determined from a user profile.” [0228] “populate template fields, add context identifiers.” And, [0098] “the user input prompt may be a text string.”);
provide, to a model trained with machine learning, the prompt embedded with the content ([0192] “’templatized prompts’ that are engineered with one or more fields that can be populated with data or information before being provided as input to an LLM.” And, [0072] “an LLM trained by a suitable training dataset.” And, [0067] “the input prompt provided to the LLM.” And, [0373] “the prompt also includes context data including data corresponding to a role of a user, historical system usage, or data determined from a user profile.”);
obtain, from the model, a response to the prompt that indicates a recommended action and a second account identifier associated with the recommended action ([0067] “the input prompt provided to the LLM.” [0248] “a generative output engine in order to provide suggested content or modifications that can be implemented directly.” And, [0232] “a generative output may include suggestions to be shown to a user.” And [0121] “generate suggestions for cross-links to other documents or platforms; generate suggestions for adding detail or improving conciseness for particular document sections.” And, [0292] “predefined query prompt text may be formulated to cause the generative output engine to identify action items or task summaries in a portion of provided content.” And, [0192] “one or more fields that can be populated with data” including “a user identifier corresponding to the user currently operating a client device.” Examiner notes that Applicant’s specification paragraph [0077] states that “the second account identifier can be, correspond to, or include the first account identifier.” Therefore, Examiner interprets the account/user identifier associated with the recommended action as both the first and second account identifier.);
execute the recommended action for the first account identifier ([0226] “the output router may execute API requests generated by the generative output engine.” And, [0248] “output engine in order to provide suggested content or modifications that can be implemented.” And, [0320] “a generative output engine can also be used to perform tasks.” And, [0333] “generated using a generative output engine and used to execute searches in the issue tracking platform.”).
Mansour does not explicitly teach but Schoen teaches:
validate that the recommended action corresponds to at least one of the compatible actions, and the second account identifier corresponds to the first account identifier ([0052] “the verification service may generate a recommendation to approve the application and send the recommendation to the second user.” And, [0050] “determines a recommendation for the application based at least in part on the score generated by the scoring component . . . compare the score . . . to one or more thresholds . . . if the score is less than a first threshold but equal to or greater than a second threshold, the application decision component may generate a recommendation to approve the application.” And, [0092] “the verification service may determine one or more threshold scores by which to approve, deny, or provide a recommendation.” Examiner interprets determining that the score exceeds a threshold required for recommending an approval as validating that the recommended action corresponds to at least one of the compatible actions. And, [0019] “verification service may determine that a name of the user is consistent between the identification information received from the user and the financial information received.” And, claim 4 recites “verifying the identity of the second user includes determining that a first name associated with the identification information is a same name as a second name associated with the financial information.” Examiner notes that Applicant’s specification paragraph [0077] states that “the second account identifier can be, correspond to, or include the first account identifier” and paragraph [0084], which recites “the second account identifier can include a first name, a last name.” Therefore, Examiner interprets verifying that the first name is the same as the second name as validating that the second account identifier corresponds to the first account identifier).
Mansour discloses executing the recommended action for the first account identifier but does not explicitly teach doing so in response to the validation of the recommended action and the second account identifier as taught by Schoen ([0022] “if the score is equal to or greater than a first threshold, the verification service may automatically approve the user for an application.” And, [0092] “the verification service may determine one or more threshold scores by which to approve, deny, or provide a recommendation.” And, [0019] “verification service may determine that a name of the user is consistent between the identification information received from the user and the financial information received.” And, claim 4 recites “verifying the identity of the second user includes determining that a first name associated with the identification information is a same name as a second name associated with the financial information.”).
Therefore, it would have been obvious to modify Mansour to include validate that the recommended action corresponds to at least one of the compatible actions, and the second account identifier corresponds to the first account identifier and execute the recommended action in response to the validation of the recommended action and the second account identifier as taught by Schoen in order to ensure “an identity of the first user is accurate, up to date, or otherwise accurately identifies the first user” (Schoen [0087]).
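For illustration only, the claimed flow mapped above can be summarized by the following minimal Python sketch. All identifiers and values in the sketch (HR_PROMPT, StubModel, the canned model response, and the example compatible-action list) are hypothetical constructs of this illustration; they are not drawn from Mansour, Schoen, or Applicant's claims.

    # Hypothetical sketch of the claimed flow; no identifier below is taken
    # from Mansour, Schoen, or Applicant's specification.
    from string import Template

    # A templatized prompt with fields populated before being provided as
    # input to a model (cf. Mansour [0192], the "${user}" field).
    HR_PROMPT = Template(
        "User ${user} requests: ${request}\n"
        "Permitted actions: ${actions}\n"
        "Reply as '<action>|<account>'."
    )

    # Example list of compatible actions for a first account identifier.
    COMPATIBLE_ACTIONS = {"acct-123": ["approve_timesheet", "schedule_shift"]}

    class StubModel:
        """Stand-in for a model trained with machine learning."""
        def complete(self, prompt: str) -> str:
            return "approve_timesheet|acct-123"  # canned response for the demo

    def execute(action: str, account_id: str) -> str:
        return f"executed {action} for {account_id}"

    def handle_request(account_id: str, request_text: str, model: StubModel) -> str:
        # Select the prompt corresponding to the action and embed content of
        # the first account identifier into its fields.
        prompt = HR_PROMPT.substitute(
            user=account_id,
            request=request_text,
            actions=", ".join(COMPATIBLE_ACTIONS[account_id]),
        )
        # Provide the embedded prompt to the model and obtain a response that
        # indicates a recommended action and a second account identifier.
        action, second_account = model.complete(prompt).split("|")
        # Validate the recommendation against the compatible actions and the
        # second account identifier against the first before executing.
        if action not in COMPATIBLE_ACTIONS[account_id] or second_account != account_id:
            raise ValueError("recommended action failed validation")
        return execute(action, account_id)

    print(handle_request("acct-123", "approve my timesheet", StubModel()))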
Claims 3 and 14
As per claims 3 and 14, Mansour further teaches:
obtain, from the model, an intent of the prompt ([0359] “determine an intent metric for a given natural language user input.” And, [0100] “the user making the initial input and request.” And, [0312] “information that may be used to generate the prompt include user accounts.”);
select, based on the intent, an index from a plurality of indexes ([0359] “The intent metric may be determined using a semantic analysis of the user input and may indicate a conformity or a correlation of a natural language user input” and “classifying feature set or one or more exemplar requests to determine the intent metric or score.” And, [0360] “the intent metric or other content metric satisfying a first criteria.”);
provide the index to the model to cause the model to generate a response that includes the index instead of the intent ([0377] “then use the initial response or answer overview as an input to a second model or engine.” And, [0370] “The completeness score may be determined using a trained model, trained on previous user requests and resulting exchanges including the number of follow-up questions required, resolution of the issue, and/or feedback received from a previous user.”).
Claims 4 and 15
As per claims 4 and 15, Mansour further teaches:
obtain, from the model, a response to the prompt that indicates the recommended action, the second account identifier associated with the recommended action and the index ([0067] “the input prompt provided to the LLM.” [0248] “a generative output engine in order to provide suggested content or modifications that can be implemented directly.” And, [0232] “a generative output may include suggestions to be shown to a user.” And [0121] “generate suggestions for cross-links to other documents or platforms; generate suggestions for adding detail or improving conciseness for particular document sections.” And, [0292] “predefined query prompt text may be formulated to cause the generative output engine to identify action items or task summaries in a portion of provided content.” And, [0192] “one or more fields that can be populated with data” including “a user identifier corresponding to the user currently operating a client device.”).
Claims 5 and 16
As per claims 5 and 16, Mansour further teaches:
identify a network security parameter of the client system ([0223] “security gateway may be configured to determine whether the request itself conforms to one or more policies or rules (data and/or executable representations of which may be stored in a database).” And, [0228] “security gateway of the prompt management service that may be configured to determine whether the user input is authorized to execute and/or complies with organization-specific rules.”);
compare the network security parameter with the first account identifier and the action ([0223] “security gateway may be configured to determine whether the request itself conforms to one or more policies or rules (data and/or executable representations of which may be stored in a database).” And, [0228] “security gateway of the prompt management service that may be configured to determine whether the user input is authorized to execute and/or complies with organization-specific rules.”);
determine, based on the comparison, the action is authorized ([0223] “security gateway may be configured to determine whether the request itself conforms to one or more policies or rules (data and/or executable representations of which may be stored in a database).” And, [0228] “security gateway of the prompt management service that may be configured to determine whether the user input is authorized to execute and/or complies with organization-specific rules.”).
Claim 6
As per claim 6, Mansour further teaches:
train the model with data from the client system ([0072] “an LLM trained by a suitable training dataset.” And, [0189] “submitting that prompt as input to a trained large language model.”).
Claims 7 and 17
As per claims 7 and 17, Mansour further teaches:
identify, using the model, one or more data points associated with the action ([0067] “the input prompt provided to the LLM.” [0248] “a generative output engine in order to provide suggested content or modifications that can be implemented directly.” And, [0232] “a generative output may include suggestions to be shown to a user.” And [0121] “generate suggestions for cross-links to other documents or platforms; generate suggestions for adding detail or improving conciseness for particular document sections.” And, [0292] “predefined query prompt text may be formulated to cause the generative output engine to identify action items or task summaries in a portion of provided content.” And, [0192] “one or more fields that can be populated with data” including “a user identifier corresponding to the user currently operating a client device.”).
Mansour does not explicitly teach but Schoen teaches:
validate that the one or more data points corresponds to the action ([0052] “the verification service may generate a recommendation to approve the application and send the recommendation to the second user.” And, [0050] “determines a recommendation for the application based at least in part on the score generated by the scoring component . . . compare the score . . . to one or more thresholds . . . if the score is less than a first threshold but equal to or greater than a second threshold, the application decision component may generate a recommendation to approve the application.” And, [0092] “the verification service may determine one or more threshold scores by which to approve, deny, or provide a recommendation.”).
Mansour does not explicitly teach but Schoen teaches:
execute, using at least one of the one or more data points, the recommended action for the first account identifier in response to the validation of the recommended action and the second account identifier ([0022] “if the score is equal to or greater than a first threshold, the verification service may automatically approve the user for an application.” And, [0092] “the verification service may determine one or more threshold scores by which to approve, deny, or provide a recommendation.” And, [0019] “verification service may determine that a name of the user is consistent between the identification information received from the user and the financial information received.” And, claim 4 recites “verifying the identity of the second user includes determining that a first name associated with the identification information is a same name as a second name associated with the financial information.”).
Therefore, it would have been obvious to modify the combination of Mansour and Schoen to include validate that the one or more data points corresponds to the action; and execute, using at least one of the one or more data points, the recommended action for the first account identifier in response to the validation of the recommended action and the second account identifier as taught by Schoen in order to ensure “an identity of the first user is accurate, up to date, or otherwise accurately identifies the first user” (Schoen [0087]).
Claims 8 and 18
As per claims 8 and 18, Mansour further teaches:
provide, to the model trained with machine learning, the prompt including the content and one or more historical responses to prompts ([0072] “an LLM trained by a suitable training dataset.” And, [0189] “submitting that prompt as input to a trained large language model.” And, [0370] “analyze the natural language user input and determine a degree of correlation to an example query or historical set of inquires.” And, [0379] “transformer model that is trained using a set of historical question-answer pairs” and “[b]y training the transformer model using historical question-answer pairs developed by the same or a similar ITSM service, the selected content items are likely to have an improved relevance.”).
obtain, from the model, a response to the prompt that indicates a recommended action and second account identifier associated with the recommended action ([0067] “the input prompt provided to the LLM.” [0248] “a generative output engine in order to provide suggested content or modifications that can be implemented directly.” And, [0232] “a generative output may include suggestions to be shown to a user.” And [0121] “generate suggestions for cross-links to other documents or platforms; generate suggestions for adding detail or improving conciseness for particular document sections.” And, [0292] “predefined query prompt text may be formulated to cause the generative output engine to identify action items or task summaries in a portion of provided content.” And, [0192] “one or more fields that can be populated with data” including “a user identifier corresponding to the user currently operating a client device.”).
Claim 9
As per claim 9, Mansour further teaches:
format the response to the prompt based on the client system ([0059] “The string prompt (or “input prompt” or simply “prompt”) received as input by a generative output engine can be any suitably formatted string of characters.” And, [0094] “request prompt; generative output formatted as a string; and so on. For example, a simple response to the preceding request may be JSON formatted.” And, [0096] “prompt templates can include example input/output format cues or requests.” And, [0258] “the predefined query prompt text may include a request, example formatting or schema examples.”).
Claim 10
As per claim 10, Mansour further teaches:
wherein the first account identifier corresponds to a profile data structure of an individual of an organization associated with the client system ([0224] “the request/prompt can be passed to a preconditioning and hydration service configured to populate request-contextualizing data (e.g., user ID.” And, [0373] “the prompt also includes context data including data corresponding to a role of a user, historical system usage, or data determined from a user profile.” [0228] “populate template fields, add context identifiers.” And, [0098] “the user input prompt may be a text string.” And, [0291] “the predefined query prompt text may be based on a user role or other aspect of the user profile.” And, [0344] “gather context data extracted from a user profile of the requesting user including, user role, user permissions, application usage, and other user profile information.”).
Claim 11
As per claim 11, Mansour further teaches:
wherein the action corresponds to a human resources activity, and the compatible actions correspond to human resource activities supported by a service provider system ([0171] “a user having a role of “human resources professional” may be presented with prompts associated with manipulating or summarizing information presented in a directory system or a benefits system.”).
Claim 19
As per claim 19, Mansour further teaches:
providing, by one or more processors, to the model trained with machine learning, the prompt including the content and one or more historical responses to prompts ([0072] “an LLM trained by a suitable training dataset.” And, [0189] “submitting that prompt as input to a trained large language model.” And, [0370] “analyze the natural language user input and determine a degree of correlation to an example query or historical set of inquires.” And, [0379] “transformer model that is trained using a set of historical question-answer pairs” and “[b]y training the transformer model using historical question-answer pairs developed by the same or a similar ITSM service, the selected content items are likely to have an improved relevance.”).
obtaining, by one or more processors, from the model, a response to the prompt that indicates a recommended action and a second account identifier associated with the recommended action ([0067] “the input prompt provided to the LLM.” [0248] “a generative output engine in order to provide suggested content or modifications that can be implemented directly.” And, [0232] “a generative output may include suggestions to be shown to a user.” And [0121] “generate suggestions for cross-links to other documents or platforms; generate suggestions for adding detail or improving conciseness for particular document sections.” And, [0292] “predefined query prompt text may be formulated to cause the generative output engine to identify action items or task summaries in a portion of provided content.” And, [0192] “one or more fields that can be populated with data” including “a user identifier corresponding to the user currently operating a client device.”).
Claims 2 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over US Patent Application Publication Number 20250005260 (“Mansour”) in view of US Patent Application Publication Number 20240013217 (“Schoen”) as applied to claims 1 and 12, and further in view of US Patent Application Publication Number 20240168928 (“Pfante”).
Claims 2 and 13
As per claims 2 and 13, Mansour does not explicitly teach but Pfante teaches:
construct a vector using the action ([0007] “generate a vector for the first field.” And [0064] “generator may be configured to generate a vector corresponding to the given input string.”);
construct a plurality of vectors using the prompt ([0008] “generating a vector for the first field and one or more vectors for each of the one or more fields and comparing the vector for the first field with each of the one or more vectors.”);
compare the vector constructed using the action with the plurality of vectors constructed using the prompt ([0008] “generating a vector for the first field and one or more vectors for each of the one or more fields and comparing the vector for the first field with each of the one or more vectors.”);
determine, based on the comparison, the list of compatible actions corresponding to the action ([0008] “generating a vector for the first field and one or more vectors for each of the one or more fields and comparing the vector for the first field with each of the one or more vectors. The method may also include identifying a first plurality of clusters based on the first score of similarities.”).
Therefore, it would have been obvious to modify the combination of Mansour and Schoen to include construct a vector using the action; construct a plurality of vectors using the prompt; compare the vector constructed using the action with the plurality of vectors constructed using the prompt; and determine, based on the comparison, the list of compatible actions corresponding to the action, as taught by Pfante, in order to ensure that the compatible actions corresponding to the action are correctly identified.
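For illustration only, the vector construction and comparison for which Pfante is relied upon can be sketched as follows. The character-frequency embed() function and the 0.8 similarity threshold are hypothetical simplifications of this illustration; they are not Pfante's disclosed vector generator (cf. Pfante [0064]).

    # Hypothetical sketch of vector construction and comparison; the
    # embedding and threshold below are illustrative, not Pfante's method.
    import math

    def embed(text: str) -> list[float]:
        # Toy character-frequency vector; a real system would use a trained
        # vector generator (cf. Pfante [0064]).
        vec = [0.0] * 26
        for ch in text.lower():
            if ch.isalpha():
                vec[ord(ch) - ord("a")] += 1.0
        return vec

    def cosine(a: list[float], b: list[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm if norm else 0.0

    def compatible_actions(action: str, prompt_actions: list[str],
                           threshold: float = 0.8) -> list[str]:
        # Construct a vector for the action and one vector per candidate taken
        # from the prompt, keeping candidates whose similarity clears the threshold.
        action_vec = embed(action)
        return [c for c in prompt_actions if cosine(action_vec, embed(c)) >= threshold]

    print(compatible_actions("approve timesheet", ["approve timesheets", "schedule shifts"]))
    # -> ['approve timesheets']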
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US Patent Application Publication Number 20220229832 (“Li”) discloses receiving a user request from a user, identifying user query intent, and generating an action for the query.
US Patent Application Publication Number 20250217769 (“Kumar”) discloses that a particular engineered prompt template can be selected based on a desired task for which output of a generative output engine may be useful.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALLAN J WOODWORTH, II whose telephone number is (571)272-6904. The examiner can normally be reached Mon-Fri 9:00-5:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ilana Spar can be reached on (571) 270-7537. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ALLAN J WOODWORTH, II/Primary Examiner, Art Unit 3622