DETAILED ACTION
This Office action is in reply to correspondence filed 5 March 2026 in regard to application no. 18/828,537. Claims 1-27 are pending and are considered below.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-27 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
The difficulty is in this phrase: determining [a] vector in multi-dimensional space… “by performing sentiment analysis and tone analysis on the transcribed text or text-based language input”. There are at least three ways to interpret the quoted language, depending on how one mentally punctuates the phrase, and they differ in scope. First, it could be read as “by performing sentiment analysis, and tone analysis on [either of] the transcribed text or text-based language input”. This requires performing sentiment analysis on any data at all, and tone analysis on either the transcribed text or the text-based input.
Second, it could be read as “by performing [both] sentiment analysis and tone analysis on [either of] the transcribed text or text-based language input”. This requires performing both sentiment analysis and tone analysis on either of the transcribed text or the text-based input.
Third, it could be read as “by performing [both] sentiment analysis and tone analysis on the transcribed text, or text-based language input”, which requires either performing both sentiment analysis and tone analysis on the transcribed text, or simply using the text-based input in some other way.
The Examiner suspects that the first of the three interpretations was intended and will examine the claims as such.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-27 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims lie within statutory categories of invention, as each is directed to a method (process), system (machine) or non-transitory computer readable medium (manufacture).
The claim(s) recite(s) receiving input from a user, interpreting it including converting speech to text, generating responses to the input based on parameters, looking through a list of available agents to identify those which offer a particular product or service, and transferring the user to such an agent. The interpretation includes determining a pair of numbers (which is all that "at least one vector in multi-dimensional space" requires) by determining, so far as the Examiner can tell (see above), something about the sentiment and tone of text; the generating of a response includes asking the user what she might be interested in, making a determination in response thereto, and adjusting parameters in response to the aforementioned pair of numbers; particulars such as the language output are nonfunctional printed matter with regard to the claimed substrate.
First, determining what products or services might be relevant to a customer and which agents can provide such products constitutes a commercial interaction, one of the "certain methods of organizing human activity" deemed abstract. Second, in the absence of computers, these are steps that can be performed in the human mind and with paper records.
A human insurance broker can ask a customer questions and can interpret the responses. Writing down what someone has said is a quite routine step in a commercial interaction. The pair of numbers could be responses to questions such as "on a scale of 1 to 5, how likely are you to buy X product or service" or "in how many months will you be purchasing X product or service", etc., or the broker could mentally determine the sentiment and tone of the customer. The list of available, relevant agents could be determined by memory or by consulting paper records, and any information could be conveyed to the customer, e.g., verbally, such as by providing a telephone number by which the customer could reach the agent, or by sending her down the hall into his office. None of this presents any practical difficulty, and none requires any technology beyond paper and pen.
This judicial exception is not integrated into a practical application because aside from the bare inclusion of a generic computer and nondescript use of AI, discussed below, nothing is done beyond what was set forth above, which does not go beyond generally linking the abstract idea to the technological environment of AI-enabled computers. See MPEP § 2106.05(h).
As the claims only manipulate data pertaining to desirability of products and services, availability of agents and the like, they do not improve the "functioning of a computer" or of "any other technology or technical field". See MPEP § 2106.05(a). They do not apply the abstract idea "with, or by use of a particular machine", MPEP § 2106.05(b), as the below-cited Guidance is clear that a generic computer is not the particular machine envisioned.
They do not effect a "transformation or reduction of a particular article to a different state or thing", MPEP § 2106.05(c). First, such data, being intangible, are not a particular article at all. Second, the claimed manipulation is neither transformative nor reductive; as the courts have pointed out, in the end, data are still data.
They do not apply the abstract idea "in some other meaningful way beyond generally linking [it] to a particular technological environment", MPEP § 2106.05(e), as the claim's lack of technical and algorithmic detail means it does not go beyond such a general linkage.
The claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional claim limitations, considered individually and as an ordered combination, are insufficient to elevate an otherwise-ineligible claim.
Taking claims 12 and 21 together, they include a processor, memory and a communication interface; instructions for the processor are stored on a computer-readable medium. These elements are recited at a high degree of generality and the specification is explicit, pg. 11, lines 27-28, that nothing more than "any general purpose" processor is required, such that a generic computer will suffice.
It only performs generic computer functions of nondescriptly manipulating data and sharing data with persons and/or other devices. Generic computers performing generic computer functions, without an inventive concept, do not amount to significantly more than the abstract idea. In light of Recentive, using known AI techniques in a new data environment is not, per se, sufficient to confer eligibility on an otherwise-ineligible claim.
The type of information being manipulated does not impose meaningful limitations or render the idea less abstract. The repeated use of "autonomously" and "automated" in the claims does not affect this analysis, because anything a computer does (aside from receiving input from a user) is to some extent autonomous. The claim elements, when considered as an ordered combination, that is, a generic computer performing a sequence of abstract steps while making nondescript use of previously-known AI techniques, do nothing more than when they are analyzed individually. The other independent claims are simply different embodiments but are likewise directed to a generic computer performing, essentially, the same process.
The dependent claims further do not amount to significantly more than the abstract idea: claim 2 simply labels a computer; claims 3, 4 and 16 purport to limit objects or processes outside the scope of the invention. Claims 5-7, 10, 11, 13-15, 18, 20 and 25 are simply further descriptive of the type of information being manipulated. Claims 8, 9, 19 and 23 are simply descriptive of output. Claims 17 and 27 simply require networking; claims 22, 24 and 26 simply recite further, abstract manipulation of data.
The claims are not patent eligible. For further guidance please see MPEP § 2106.03 - 2106.07(c) (formerly referred to as the "2019 Revised Patent Subject Matter Eligibility Guidance", 84 Fed. Reg. 50, 55 (7 January 2019)).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-8, 12, 16-23, 26 and 27 are rejected under 35 U.S.C. 103 as being unpatentable over Wu et al. (U.S. Publication No. 2018/0189857) in view of Jain (U.S. Publication No. 2011/0313834) further in view of Bostick et al. (U.S. Publication No. 2018/0286429).
In-line citations are to Wu.
With regard to Claim 1:
Wu teaches: An autonomous computer-implemented method [0054; any of various types of computer may be used] for connecting a user to a live agent, the method comprising:
autonomously receiving language input from the user during a communication session, the language input having been received via a user interface; [0016; that the system analyzes current user input reads on having received it; 0059; the input may be received e.g. via a microphone or, 0069, a camera]
autonomously interpreting the language input with artificial intelligence… [0021; machine learning may be used to analyze the input] applying natural language processing to the transcribed text or the text-based language input; [abstract; a “natural language user input” may be inspected and data extracted therefrom; 0033; the input may be in the form of voice or text]
autonomously generating language responses to be provided via the user interface during the communication session, the language responses being based on a set of user-specific parameters… [abstract; recommendations are made based on natural language input and a user profile]
wherein interpreting the language input includes determining at least one vector in multi-dimensional space representing sentiment and tone of the language input from the user interface by performing sentiment analysis and tone analysis on the transcribed text or text-based language input [0018; the user sentiment is identified; user preferences are used; 0027; indications related to “user emotion” are used, which reads on tone analysis; 0023; N-gram language modeling is used along with indicia of emotion]
wherein generating the language responses includes determining applicability of the one or more products or services to the user based on predefined criteria; [0025; a measure of relevance is determined based on the user input; 0036; the recommendation may be for a user to make a purchase] and
wherein generating the language responses further includes adjusting the user-specific parameters in response to the at least one vector in multi-dimensional space… [0025; user preferences are modified based on past inputs]
Wu does not explicitly teach autonomously scanning a set of currently available human agents to identify a first agent offering the relevant one or more products or services, autonomously initiating a transfer of the user to the first agent during the communication session, or providing user prompting, to be presented via the user interface, for information relating to one or more products or services which are of potential interest to the user for purchase, but it is known in the art. Jain teaches a method for eliciting responses from agents [title] in which a user is queried for information. [Sheet 3, Fig. 3] A set of "representatives of various vendors" is examined to determine a "subset of agents" who may "be capable of providing a response" relevant to a user inquiry. [0028] The selection may be related to a request for product information, [0001] and may culminate in a user purchase. [0004] The user may be connected to the agent and a commission may be provided if certain criteria are met. [0035] Jain and Wu are analogous art as each is directed to electronic means for determining relevance based on user inputs and providing output relevant to user purchasing decisions.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Jain with that of Wu in order to leverage the interests of people, as taught by Jain; [0006] further, it is simply a substitution of one known part for another with predictable results, simply providing information to a user in the manner of Jain rather than, or in addition to, that of Wu; the substitution produces no new and unexpected result.
Wu does not explicitly teach converting speech of the language input into transcribed text using automated speech recognition, but it is known in the art. Bostick teaches a way of determining truthfulness. [title] It may “convert spoken words of speech based messages to text” to facilitate “NLP” processing [0021] using “speech recognition”. [0001] It may then “adjust the sentiment” of terms. [0026] Bostick and Wu are analogous art as each is directed to electronic means for performing NLP.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Bostick with that of Wu in order to improve the accuracy of sentiment tagging of words, as taught by Bostick; [0064] further, it is simply a substitution of one known part for another with predictable results, simply obtaining text in the manner of Bostick rather than that of Wu; the substitution produces no new and unexpected result.
In this and the subsequent claims, various forms of the word "autonomous" are considered but given no patentable weight. In the context of computer applications, the ordinary meaning of "autonomous" is "without user input". Since the applicant uses "autonomously" in a step of explicitly receiving user input, it is unclear what it is supposed to mean. Rather than giving a rejection for indefiniteness, the Examiner instead declines to give the term any patentable weight. The content of information which is merely transmitted or displayed and then not further processed, such as “adjusting language and tone of the language responses to accommodate the user”, consists entirely of nonfunctional printed matter which bears no functional relation to the substrate and so is considered but given no patentable weight; all that is required is to change the content of the output.
With regard to Claim 2:
The method of claim 1, the method being performed at a server. [0034; a server is used]
This claim is not patentably distinct from claim 1. Referring to a computer as a server, without more, is considered mere labeling and given no patentable weight. The reference is provided for the purpose of compact prosecution.
With regard to Claim 3:
The method of claim 1, the method being performed by a device implementing a cognitive virtual assistant. [0032; personalized digital assistants provide recommendations]
This claim is not patentably distinct from claim 1. The structure of a device implementing a method has weight only in so far as the structure modifies the method. Here, no claim makes any use of a cognitive virtual assistant, so the fact that a device includes this imparts neither structure nor functionality to the claimed method and so is considered but given no patentable weight. The reference is provided for the purpose of compact prosecution.
With regard to Claim 4:
The method of claim 1, the method being performed by a plurality of computing devices in communication via a network and implementing a cognitive virtual assistant. [Sheet 1, Fig. 1; 0032 as cited above in regard to claim 3]
As with claim 3 above, that devices implement a cognitive virtual assistant imparts neither structure nor functionality to the claimed method and so is considered but given no patentable weight.
With regard to Claim 5:
The method of claim 1, wherein interpreting the language input includes converting speech of the language input into text, [Bostick, 0021 as cited above in regard to claim 1] and evaluating meaning and context of the text. [0027, 0032 as cited above in regard to claim 1]
With regard to Claim 6:
The method of claim 1, wherein the language input and the language responses each comprises captured audio. [0059; the information may be in audio form]
With regard to Claim 7:
The method of claim 1, wherein the language input and the language responses each comprises text-based communications. [0027; the information may be in text form]
With regard to Claim 8:
The method of claim 1, wherein adjusting the user-specific parameters includes adjusting the language and tone of the language responses to accommodate the user.
This claim is not patentably distinct from claim 1. First, claim 1, as presently amended, includes this limitation. Second, it consists entirely of nonfunctional printed matter which bears no functional relation to the substrate and so is considered but given no patentable weight.
With regard to Claim 12:
Wu teaches: A system for connecting a user to a live agent, the system comprising:
a computing system including a processor, memory, and a communication device; [0059; "processor"; "interface"; 0056; a "memory" stores "application programs" for the processor to execute]
the computing system operative to communicate with a user interface and one or more interface(s) of provider(s) of one or more products or services; [Sheet 1, Fig. 1]
the computing system comprising instructions that, when executed, cause the computing system to:
receive language input from the user during a communication session, the language input having been received via a user interface; [0016; that the system analyzes current user input reads on having received it; 0059; the input may be received e.g. via a microphone or, 0069, a camera]
interpret the language input with artificial intelligence… [0021; machine learning may be used to analyze the input] applying natural language processing to the transcribed text or the text-based language input; [abstract; a “natural language user input” may be inspected and data extracted therefrom; 0033; the input may be in the form of voice or text]
generate language responses to be provided via the user interface during the communication session, the language responses being based on a set of user-specific parameters… [abstract; recommendations are made based on natural language input and a user profile]
wherein interpreting the language input includes determining at least one vector in multi-dimensional space representing sentiment and tone of the language input from the user interface by performing sentiment analysis and tone analysis on the transcribed text or text-based language input [0018; the user sentiment is identified; user preferences are used; 0027; indications related to “user emotion” are used, which reads on tone analysis; 0023; N-gram language modeling is used along with indicia of emotion]
wherein generating the language responses includes determining applicability of the one or more products or services to the user based on predefined criteria; [0025; a measure of relevance is determined based on the user input; 0036; the recommendation may be for a user to make a purchase] and
wherein generating the language responses further includes adjusting the user-specific parameters in response to the at least one vector in multi-dimensional space… [0025; user preferences are modified based on past inputs]
Wu does not explicitly teach scan a set of currently available human agents to identify a first agent offering the relevant one or more products or services, initiate a transfer of the user to the first agent during the communication session, or provide user prompting, to be presented via the user interface, for information relating to one or more products or services which are of potential interest to the user for purchase, but it is known in the art. Jain teaches a method for eliciting responses from agents [title] in which a user is queried for information. [Sheet 3, Fig. 3] A set of "representatives of various vendors" is examined to determine a "subset of agents" who may "be capable of providing a response" relevant to a user inquiry. [0028] The selection may be related to a request for product information, [0001] and may culminate in a user purchase. [0004] The user may be connected to the agent and a commission may be provided if certain criteria are met. [0035] Jain and Wu are analogous art as each is directed to electronic means for determining relevance based on user inputs and providing output relevant to user purchasing decisions.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Jain with that of Wu in order to leverage the interests of people, as taught by Jain; [0006] further, it is simply a substitution of one known part for another with predictable results, simply providing information to a user in the manner of Jain rather than, or in addition to, that of Wu; the substitution produces no new and unexpected result.
Wu does not explicitly teach converting speech of the language input into transcribed text using automated speech recognition, but it is known in the art. Bostick teaches a way of determining truthfulness. [title] It may “convert spoken words of speech based messages to text” to facilitate “NLP” processing [0021] using “speech recognition”. [0001] It may then “adjust the sentiment” of terms. [0026] Bostick and Wu are analogous art as each is directed to electronic means for performing NLP.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Bostick with that of Wu in order to improve the accuracy of sentiment tagging of words, as taught by Bostick; [0064] further, it is simply a substitution of one known part for another with predictable results, simply obtaining text in the manner of Bostick rather than that of Wu; the substitution produces no new and unexpected result.
With regard to Claim 16:
The system of claim 12, wherein the computing system comprises a cognitive virtual assistant that includes a natural language parser. [0032; personalized digital assistants provide recommendations; abstract; extracting information from natural language user input reads on a natural language parser]
With regard to Claim 17:
The system of claim 12, wherein the computing system comprises a plurality of computing devices in communication via a network. [Sheet 1, Fig. 1]
With regard to Claim 18:
The system of claim 12, wherein the computing system is operative to interpret the language input including autonomously evaluating meaning and context of the text. [0027; parts of speech and meaning may be determined; 0032; context may be analyzed]
With regard to Claim 19:
The system of claim 12, wherein adjustment of the user-specific parameters includes adjustment of language and tone of the language responses to accommodate the user. [0018; parameters are adjusted]
This claim is not patentably distinct from claim 12 as it consists entirely of nonfunctional printed matter which bears no functional relation to the substrate and so is considered but given no patentable weight.
With regard to Claim 20:
The system of claim 12, the system further comprising non-transitory computer readable instructions that, when executed by the processor, control the system to adjust parameters of the cognitive virtual assistant based on sentiment analysis, tone analysis, and personality insights. [0018; 0023 as cited above in regard to claim 12; 0022; determining a user intention reads on a personality insight]
With regard to Claim 21:
Wu teaches: At least one non-transitory computer-readable storage medium comprising instructions, that when executed by a computer system, cause the computer system to carry out operations [0059; "processor"; "interface"; 0056; a "memory" stores "application programs" for the processor to execute] for connecting a user to a live agent, the operations comprising:
receiving language input from the user during a communication session, the language input having been received via a user interface; [0016; that the system analyzes current user input reads on having received it; 0059; the input may be received e.g. via a microphone or, 0069, a camera]
interpreting the language input with artificial intelligence… [0021; machine learning may be used to analyze the input] applying natural language processing to the transcribed text or the text-based language input; [abstract; a “natural language user input” may be inspected and data extracted therefrom; 0033; the input may be in the form of voice or text]
generating language responses to be provided via the user interface during the communication session, the language responses being based on a set of user-specific parameters… [abstract; recommendations are made based on natural language input and a user profile]
wherein interpreting the language input includes determining at least one vector in multi-dimensional space representing sentiment and tone of the language input from the user interface by performing sentiment analysis and tone analysis on the transcribed text or text-based language input [0018; the user sentiment is identified; user preferences are used; 0027; indications related to “user emotion” are used, which reads on tone analysis; 0023; N-gram language modeling is used along with indicia of emotion]
wherein generating the language responses includes determining applicability of the one or more products or services to the user based on predefined criteria; [0025; a measure of relevance is determined based on the user input; 0036; the recommendation may be for a user to make a purchase] and
wherein generating the language responses further includes adjusting the user-specific parameters in response to the at least one vector in multi-dimensional space… [0025; user preferences are modified based on past inputs]
Wu does not explicitly teach scanning a set of currently available human agents to identify a first agent offering the relevant one or more products or services, autonomously initiating a transfer of the user to the first agent during the communication session, or providing user prompting, to be presented via the user interface, for information relating to one or more products or services which are of potential interest to the user for purchase, but it is known in the art. Jain teaches a method for eliciting responses from agents [title] in which a user is queried for information. [Sheet 3, Fig. 3] A set of "representatives of various vendors" is examined to determine a "subset of agents" who may "be capable of providing a response" relevant to a user inquiry. [0028] The selection may be related to a request for product information, [0001] and may culminate in a user purchase. [0004] The user may be connected to the agent and a commission may be provided if certain criteria are met. [0035] Jain and Wu are analogous art as each is directed to electronic means for determining relevance based on user inputs and providing output relevant to user purchasing decisions.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Jain with that of Wu in order to leverage the interests of people, as taught by Jain; [0006] further, it is simply a substitution of one known part for another with predictable results, simply providing information to a user in the manner of Jain rather than, or in addition to, that of Wu; the substitution produces no new and unexpected result.
Wu does not explicitly teach converting speech of the language input into transcribed text using automated speech recognition, but it is known in the art. Bostick teaches a way of determining truthfulness. [title] It may “convert spoken words of speech based messages to text” to facilitate “NLP” processing [0021] using “speech recognition”. [0001] It may then “adjust the sentiment” of terms. [0026] Bostick and Wu are analogous art as each is directed to electronic means for performing NLP.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Bostick with that of Wu in order to improve the accuracy of sentiment tagging of words, as taught by Bostick; [0064] further, it is simply a substitution of one known part for another with predictable results, simply obtaining text in the manner of Bostick rather than that of Wu; the substitution produces no new and unexpected result.
With regard to Claim 22:
The at least one non-transitory computer-readable storage medium of claim 21, wherein the instructions, when executed, cause the computer system to interpret the language input by at least converting speech of the language input into text, [Bostick, 0021 as cited above in regard to claim 21] and evaluating meaning and context of the text. [0027, 0032 as cited above in regard to claim 21]
With regard to Claim 23:
The at least one non-transitory computer-readable storage medium of claim 21, wherein the instructions, when executed, cause the computer system to adjust the user-specific parameters to adjust the language and tone of the language responses to accommodate the user.
This claim is not patentably distinct from claim 21 as it consists entirely of nonfunctional printed matter which bears no functional relation to the substrate and so is considered but given no patentable weight.
With regard to Claim 26:
The at least one non-transitory computer-readable storage medium of claim 21, wherein the instructions, when executed, cause the computer system to evaluate a level of intent or urgency of the user to purchase the one or more products or services based on the at least one vector in multi-dimensional space representing sentiment and tone of the language input. [0031; user intent determines what "valuable product-related keywords" should be used]
With regard to Claim 27:
The at least one non-transitory computer-readable storage medium of claim 21, wherein the instructions, when executed, activate an application programming interface (API) of the computer system to establish a connection with an API of one or more providers of the products or services. [Jain, 0063; APIs are used]
The phrase "to establish a connection with an API of one or more providers of the products or services" consists entirely of intended-use language which is considered but given no patentable weight.
Claim(s) 9-11, 13-15, 24 and 25 are rejected under 35 U.S.C. 103 as being unpatentable over Wu et al. in view of Jain further in view of Bostick et al. further in view of Bass et al. (U.S. Publication No. 2007/0250769).
Claims 9, 13 and 24 are similar, so they are analyzed together.
With regard to Claim 9:
The method of claim 1, wherein providing the user prompting includes providing prompting to be presented via the user interface to the user to answer one or more pre-qualifying questions applicable to the products or services and wherein the determination of applicability of the one or more products or services is based on a determination of eligibility of the user for the one or more products or services based on answers to the one or more pre-qualifying questions.
With regard to Claim 13:
The system of claim 12, wherein the instructions, when executed, further cause the computing system to determine eligibility of the user for the one or more products or services based on predefined requirements having been received via the one or more provider interfaces, and wherein the predefined criteria for determination of applicability of the one or more products or services includes the eligibility.
With regard to Claim 24:
The at least one non-transitory computer-readable storage medium of claim 21, wherein the instructions, when executed, cause the computer system to determine eligibility of the user for the one or more products or services based on predefined requirements having been received via the one or more provider interfaces, and wherein the predefined criteria for determination of applicability of the one or more products or services includes the eligibility.
Wu, Jain and Bostick teach the method of claim 1, system of claim 12, and medium of claim 21, but do not explicitly teach this step of determining eligibility; it is, however, known in the art. Bass teaches an on-line application system [title] that can provide "pre-qualification questions" [0080] which can be used to "determine the eligibility" of a potential applicant. [0081] The applicant may be attempting to qualify to purchase "a particular health insurance product". [0044] Bass and Wu are analogous art as each is directed to electronic means for managing purchase decisions.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Bass with that of Wu, Jain and Bostick in order to ensure regulatory compliance, as taught by Bass. [0002] Further, it is simply a substitution of one known part for another with predictable results, simply capturing the information of Bass in place of, or in addition to, that of Wu; the substitution produces no new and unexpected result.
With regard to Claim 10:
The method of claim 1, wherein the one or more products or services include insurance, or benefits products and services. [Bass, as cited above in regard to claim 9]
This claim is not patentably distinct from claim 1 as it consists entirely of nonfunctional, descriptive language, disclosing at most human interpretation of data but which imparts neither structure nor functionality to the claimed method. The reference is provided for the purpose of compact prosecution.
With regard to Claim 11:
The method of claim 1, wherein determining at least one vector in multi-dimensional space representing sentiment and tone of the language input includes determining a level of intent or urgency of the user to purchase the one or more insurance, or benefits products and services. [0031; user intent determines what "valuable product-related keywords" should be used; Bass, as cited above in regard to claim 9; an insurance product may be the desired purchase]
With regard to Claim 14:
The system of claim 12, wherein the one or more products or services include insurance, or benefits products and services. [Bass, as cited above in regard to claim 13]
This claim is not patentably distinct from claim 12 as it consists entirely of nonfunctional, descriptive language, disclosing at most human interpretation of data but which imparts neither structure nor functionality to the claimed system. The reference is provided for the purpose of compact prosecution.
With regard to Claim 15:
The system of claim 12, wherein the determination of the at least one vector in multi-dimensional space representing sentiment and tone of the language input includes determination of a level of intent or urgency of the user to purchase the one or more insurance, or benefits products and services. [0031; user intent determines what "valuable product-related keywords" should be used; Bass, as cited above in regard to claim 13; an insurance product may be the desired purchase]
With regard to Claim 25:
The at least one non-transitory computer-readable storage medium of claim 21, wherein the one or more products or services include insurance, or benefits products and services. [Bass, as cited above in regard to claim 24]
This claim is not patentably distinct from claim 21 as it consists entirely of nonfunctional, descriptive language, disclosing at most human interpretation of data but which imparts neither structure nor functionality to the claimed medium or any computer executing the instructions stored thereupon. The reference is provided for the purpose of compact prosecution.
Response to Arguments
Applicant's arguments filed 5 March 2026 in regard to rejections made under 35 U.S.C. § 101 have been fully considered but they are not persuasive. In regard to prong one, the applicant simply states in conclusory fashion that the Examiner’s statement as to the abstraction recited in the claims “is no longer accurate in view of the amendments” and then states that the “core of the amended claims is a specific multi-stage AI pipeline, not a mental process”.
First, the Examiner has never made any statement as to what the “core” of the invention is, nor is he required to. A claim “recites” what it “sets forth or describes”, and the claims set forth or describe the abstract idea as explained previously and above.
Speech recognition is a ubiquitous human mental faculty, and mere automation of a mental process is not sufficient to render a claim patent eligible. See MPEP § 2106.04(a)(2)(III), explaining that courts do not “distinguish between claims that recite mental processes performed by humans and claims that recite mental processes performed on a computer”.
The applicant states that under MPEP § 2106.04(a), claims do not recite a mental process “if they include steps that cannot practically be performed in the human mind”, but first, the Examiner cannot locate any such language within that section of the MPEP, and second, the Examiner fails to see how the claimed steps would present any difficulty in performing them mentally.
In regard to prong two, the Examiner does not see how anything in the claims improves a "conversational AI system". AI is used only to perform NLP, a technology that does not require AI at all, and nothing in the claims even arguably improves AI in any way whatsoever; simply using a known technology for one of its intended purposes is not an improvement to the technology.
In regard to step 2B, the only non-abstract element in any claim is a generic computer making nondescript use of AI. The claims are not patent eligible and the rejection is maintained.
Applicant’s arguments with respect to claim(s) 1-27 in regard to rejections made under 35 U.S.C. § 103 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. To the extent the arguments focus on language added by amendment, the teaching of Bostick and further citations to and explanations of the prior art previously made of record have been incorporated herein.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SCOTT C ANDERSON whose telephone number is (571)270-7442. The examiner can normally be reached M-F 9:00 to 5:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Bennett Sigmond, can be reached at (303) 297-4411. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SCOTT C ANDERSON/Primary Examiner, Art Unit 3694
1 Recentive Analytics, Inc. v. Fox Corp. et al., 2025 U.S.P.Q.2d 628, 692 F.Supp.3d 438 (Fed. Cir. 2025)