DETAILED ACTION
A. This action is in response to the following communications: Transmittal of New Application filed 03/13/2024.
B. Claims 1-20 remain pending.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
The claim recites "receiving an input during interaction with a virtual assistant…"; "determine a classification…"; "identify a prior user interaction with a virtual assistant…"; "determine a prior resolution based upon prior user interaction…" (which are mere data selections); and "determine based upon financial data…" (which is mere cross-referencing data gathering).
The limitations "determine, based upon current financial data… threshold…" and "generate and provide…" on a generic user interface executed by a generic computer, as drafted, recite a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. That is, other than reciting "by a processor" and "by a virtual assistant" (which is off-the-shelf generic software), nothing in the claim element precludes the step from practically being performed in the mind. For example, but for the "by a processor" language, "in response to receiving" in the context of this claim encompasses the user manually organizing data.
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
This judicial exception is not integrated into a practical application. In particular, the claim only recites one additional element – using a processor to perform receiving and viewing steps. The processor in both steps is recited at a high level of generality (i.e., as a generic processor performing a generic computer function of ranking information based on a determined amount of use) such that it amounts to no more than mere instructions to apply the exception using a generic computer component.
Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using a processor to perform both the receiving and viewing steps amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept.
The claim is not patent eligible.
Claims 2-9, 11-17, and 19-20 do not include elements that amount to significantly more than the abstract idea and are also rejected under the same rationale.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Mars, Jason et al. (US Pub. 2020/0151566 A1), herein referred to as "Mars".
As for claims 1, 10 and 18, Mars teaches a system, the corresponding method of claim 10, and the non-transitory computer-readable storage medium of claim 18 having instructions stored thereon that, when executed by at least one processing circuit, cause the at least one processing circuit to perform operations comprising, and a processing circuit comprising memory and one or more processors, the processing circuit configured to (par. 21 and 135 implementing an artificial intelligence (AI) virtual assistant platform within a hardware environment that consists of processors, software, and memories):
receive an input from a user device associated with a user during an interaction with a virtual assistant executed by the processing circuit (par. 22 the system utilizes a graphical user interface to receive user query and/or user command input in the form of text or speech);
determine a classification associated with the input (par. 22 the natural language processing (NLP) components of the system include a competency classification engine to identify competency classification labels for user input data and parse user input into data that is comprehensible which then is converted into program-comprehensible and useable features);
identify a prior user interaction with a virtual assistant associated with the classification (par. 22 using outputs of NLP to perform various operations that access one or more data sources relevant to the query/command while performing data filtering, data aggregation, and the like on the data accessed from one or more data sources; par. 24 data sources can be external; par. 26 classification engine 120 classifies user input data (query/command), wherein training input used in training the learning algorithm of the classification engine may include crowdsourced data obtained from one or more disparate user query and/or command data sources and/or platforms (e.g. messaging platform etc.); it is to be noted that the system can utilize training data from any suitable external data sources, wherein the learning algorithm may be continually trained using user queries and commands);
determine a prior resolution based on the prior user interaction, wherein the prior resolution comprises an action available to the user (par. 27 using data derived from processing prior query data, which consists of one or more prior queries per se (e.g. text data or the like), data derived based upon processing the prior query (according to method 200 or the like), query response data, metadata about the query, and the like; par. 91-92 storing past queries for identifying supplemental classification labels based upon user input data, and performing identification of a successive cognate user input to generate a response to the successive cognate user query; a successive, cognate user query is a query posed by the user that relates to the prior query, thereby identifying a successful relation between the prior query and a successive cognate query based upon classification; this can be refined or redefined based upon the prior query posed by the user and is an extension or continuation of the prior query, where the system can chain or link/associate together a prior query and the successive cognate query; thereby resolutions are used to form seamless and/or consistent responses that relate to queries);
determine, based on current financial data associated with the user and the prior resolution, if the action available to the user satisfies a threshold (par. 96 the trained data used to classify user input data according to a pool of competency classification labels as described in S220; par. 97 an arbitrary threshold can be met with the user-posed question "how about last year?", in which the system uses current financial data "income" to reply to the user in a "Successive, Cognate" or "Follow-on" query, since a structure of the text or language of the query suggests a positive likelihood or high probability that the query is related to and intended to refine a prior query of the user); and
generate and provide, based on the action available to the user satisfying the threshold, an output via the user device, the output corresponding to the prior resolution (par. 97 the competency classification label of the successive, cognate user query may function to define a universe of functions and operations applicable to the query when generating a response and the supplemental classification label may function to trigger an additional query handling process that involves identifying a prior, related query and updating the response to the prior, related query with slot values derived for the successive, cognate query; this is but one example of an output that can be displayed on the user interface the user is interacting with by inputting queries into the system).
As for claims 2 and 11, Mars teaches the system of claim 1 and the corresponding method of claim 10, wherein the output comprises an interface via a graphical user interface (GUI) of the user device, the interface comprising an actionable item associated with the prior resolution; and receiving, by the processing circuit, a selection of the actionable item from the user device; wherein the processing circuit is further configured to initiate the action available to the user based on a selection of the actionable item (par. 38 The user interface system 105 may include any type of device or combination of devices capable of receiving user input data and presenting a response to the user input data from the artificially intelligent virtual assistant.).
As for claims 3, 12 and 19, Mars teaches the system of claim 1, the corresponding method of claim 10, and the non-transitory computer-readable storage medium of claim 18, wherein the processing circuit is further configured to:
generate a recommended resolution based on the current financial data and the input, wherein the recommended resolution comprises a recommended action available to the user; and generate and provide a recommendation output via the user device, the recommendation output corresponding to the recommended resolution (par. 51 For example, if the artificially intelligent virtual assistant is configured by a system implementing method 200 to be competent in three areas of competency including, Income competency, Balance competency, and Spending competency in the context of a user's banking, then the deep classification machine learning algorithm may generate a classification label for each of Income, Balance, and Spending; par. 80 financial data can be used in the data fetching step to enable the system to respond to queries related to user finances (e.g. "income"); "Income competency-specific functions may include functions that enable fetching of financial data of the user and operations that enable summation or aggregation of portions of the financial data of the user."; also note par. 28 and 30).
As for claims 4, 13 and 20, Mars teaches the system of claim 3, the corresponding method of claim 12, and the non-transitory computer-readable storage medium of claim 19, wherein the processing circuit is further configured to: determine prior decision components used to generate the prior resolution; determine current decision components used to generate the recommended resolution; compare at least one of the prior decision components and at least one of the current decision components; and generate and provide a comparison output via the user device, the comparison output corresponding to the comparison of the at least one of the prior decision components and the at least one of the current decision components (par. 91-92 using prior query outcomes to create seamless and/or consistent responses to the related queries).
As for claims 5 and 14, Mars teaches the system of claim 3 and the corresponding method of claim 12, wherein the recommendation output comprises an interface via a graphical user interface (GUI) of the user device, the interface comprising an actionable item associated with the recommended resolution; receiving, by the processing circuit, a selection of the actionable item from the user device; and wherein the processing circuit is further configured to initiate the recommended action of the recommended resolution based on a selection of the actionable item (par. 38 The user interface system 105 may include any type of device or combination of devices capable of receiving user input data and presenting a response to the user input data from the artificially intelligent virtual assistant; par. 40 perform slot value identification of the user input data, which includes identifying details in the query or command that enable the system to service the query or command. In slot value identification, the system may function to segment or parse the query or command to identify operative terms that trigger one or more actions or operations by the system required for servicing the query or command. Accordingly, the method 200 may initially function to decompose a query or command into intelligent segments and convert each of those segments into machine-useable objects or operations. The method 200 may then function to use the slot value identifications and slot value extractions to generate one or more handlers (e.g., computer-executable tasks) for the user input data that indicate all the computer tasks that should be performed by the artificially intelligent virtual assistant to provide a response to the user query or user command.).
As for claims 6 and 15, Mars teaches the system of claim 1 and the corresponding method of claim 10, wherein the processing circuit is further configured to: determine an interaction frequency associated with interactions between the user and the virtual assistant executed by the processing circuit and being associated with the classification; determine, based on the input and the interaction frequency, an interaction period associated with a future predicted interaction between the user and the virtual assistant executed by the processing circuit associated with the classification; and generate and provide a notification output via the user device, the notification output corresponding to a notification associated with the future predicted interaction (par. 56 and 59 S220 may provide the user input data, either synchronously (i.e., in parallel) or asynchronously, to each of the specific-competency trained deep machine learning algorithms to generate a suggestion and/or prediction of a competency classification label and an associated probability of intent match, according to the training of the specific-competency algorithm that matches at least one of the multiple areas of competency for the user input data; a predetermined competency threshold may be based on a statistical analysis of historical user input data).
As for claim 7, Mars teaches the system of claim 6, wherein the notification output comprises an interface via a graphical user interface (GUI) of the user device, the interface comprising an actionable item associated with the prior resolution; and wherein the processing circuit is further configured to initiate the action corresponding to the prior resolution within the interaction period based on a selection of the actionable item (par. 59 and 105 pulling/retrieving historical queries and/or prior query data based upon user input; these prior data would include all resolutions/results of said queries).
As for claims 8 and 16, Mars teaches the system of claim 1 and the corresponding method of claim 10, wherein the processing circuit is further configured to: determine prior decision components used to generate the prior resolution; and generate and provide a logic output via the user device, the logic output corresponding to the prior resolution and at least one of the prior decision components (par. 105 using historical queries and prior query data of the user to create new outputs, as noted in par. 92 as well).
As for claims 9 and 17, Mars teaches the system of claim 8 and the corresponding method of claim 16, wherein the logic output comprises an actionable item associated with the at least one of the prior decision components; and wherein, in response to receiving an indication of an adjustment of the actionable item, the processing circuit is further configured to: determine a decision adjustment of the at least one of the prior decision components corresponding to the adjustment of the actionable item; generate, based on the decision adjustment, an updated resolution, wherein the updated resolution comprises an updated action available to the user; and generate and provide an updated resolution output via the user device, the updated resolution output corresponding to the updated action of the updated resolution (par. 92 and 97 The successive, cognate user query may typically function to refine or redefine a prior query posed by the user).
Note: It is noted that any citation to specific pages, columns, lines, or figures in the prior art references and any interpretation of the references should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. In re Heck, 699 F.2d 1331, 1332-33, 216 USPQ 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 USPQ 275, 277 (CCPA 1968)).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Inquiries
Any inquiry concerning this communication should be directed to NICHOLAS AUGUSTINE at telephone number (571)270-1056.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
/NICHOLAS AUGUSTINE/Primary Examiner, Art Unit 2178 November 12, 2025