Prosecution Insights
Last updated: April 19, 2026
Application No. 18/652,797

Virtual Agent

Non-Final OA — §101, §102, §112
Filed
May 01, 2024
Examiner
HE, JIALONG
Art Unit
2659
Tech Center
2600 — Communications
Assignee
Healthcare Interactive Inc.
OA Round
1 (Non-Final)
81%
Grant Probability
Favorable
1-2
OA Rounds
3y 1m
To Grant
99%
With Interview

Examiner Intelligence

Grants 81% — above average
81%
Career Allow Rate
742 granted / 911 resolved
+19.4% vs TC avg
Strong +33% interview lift
+33.1%
Interview Lift
resolved cases with vs. without interview
Typical timeline
3y 1m
Avg Prosecution
23 currently pending
Career history
934
Total Applications
across all art units

Statute-Specific Performance

§101
13.7%
-26.3% vs TC avg
§103
39.7%
-0.3% vs TC avg
§102
15.6%
-24.4% vs TC avg
§112
19.6%
-20.4% vs TC avg
Black line = Tech Center average estimate • Based on career data from 911 resolved cases

Office Action

§101 §102 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Election/Restrictions

Applicant's election with traverse of Group I (Claims 1-6) in a reply filed on 01/15/2026 is acknowledged. Applicant also added new claims 21-26 and cancelled non-elected claims 7-20. Currently, claims 1-6 and 21-26 are pending. In the Remarks filed on 01/15/2026, applicant generally alleged that the cancelled claims 7-20 would not present a serious burden for examination (Remarks, page 5). Since applicant has cancelled the non-elected claims 7-20, the "no serious burden" argument regarding the cancelled claims is moot.

Priority

Applicant's claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, or 365(c) is acknowledged.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 01/16/2026 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Specification

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed. Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

The Manual of Patent Examining Procedure (MPEP) provides detailed rules for determining subject matter eligibility for claims in §2106. Those rules provide the basis for the analysis and finding of ineligibility that follows.
MPEP §2106(III) states that examiners should determine whether a claim satisfies the criteria for subject matter eligibility by evaluating the claim in accordance with the flowchart in that section.

Claims 1-5 and 21-25 are rejected under 35 U.S.C. 101. The claimed invention is directed to unpatentable subject matter because it recites a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. The examiner analyzes the instant claims according to the flowchart of the subject matter eligibility test for products and processes (MPEP 2106).

Eligibility Step 1 (MPEP 2106.03, Statutory category): Claims 1-5 are directed to a system and claims 21-25 are directed to a system. Claims 1-5 and 21-25 therefore fall into one of the four statutory categories of invention (YES branch of Step 1).

Eligibility Step 2A, Prong One (does a claim recite a judicial exception?) (MPEP 2106.04(a)-(c)): Step 2A is a two-prong inquiry, in which examiners determine in Prong One whether a claim recites a judicial exception, and if so, determine in Prong Two whether the recited judicial exception is integrated into a practical application of that exception. Together, these prongs represent the first part of the Alice/Mayo test, which determines whether a claim is directed to a judicial exception (see the flowchart in MPEP 2106.04(II)(A)). Under Prong One, the limitations recited in the claims are directed to at least one of the groupings of abstract ideas (MPEP 2106.04(a): "Mathematical concepts," "Certain methods of organizing human activity," "Mental processes"). It should be noted that these groupings are not mutually exclusive, i.e., some claims recite limitations that fall within more than one grouping or sub-grouping (MPEP 2106.04(a)(2)).
Although claims 1-5 and 21-25 fall into one of the four statutory categories of patent-eligible subject matter, they recite a number of steps ("receiving …", "directing …", "receiving …", "delivering …"). These limitations fall into a judicial exception (MPEP 2106.04(II): "laws of nature," "natural phenomena," and "abstract ideas"). The Supreme Court has explained that the judicial exceptions reflect the Court's view that abstract ideas, laws of nature, and natural phenomena are "the basic tools of scientific and technological work," and are thus excluded from patentability because "monopolization of those tools through the grant of a patent might tend to impede innovation more than it would tend to promote it." Alice Corp., 573 U.S. at 216, 110 USPQ2d at 1980. It should be noted that there are no bright lines between the types of exceptions, and that many of the concepts identified by the courts as exceptions can fall under several exceptions (MPEP 2106.04(I) and (II)).

In light of the disclosure (Spec. [0040], [0044], Fig. 3 and Fig. 7), the disclosed invention is related to a chatbot for generating a personalized response to a user's question. Even though the disclosure describes a chatbot generating a personalized response, the claimed invention can be regarded as one person (e.g., a mom) providing a personalized response to another person (e.g., her daughter).
For example, independent claims 1 and 21 can be regarded as: receiving user input including a first request for information (a mom receives a question from her daughter); directing a query to at least one knowledge expert resource based on the first request for information (the mom thinks about how to answer her daughter's question); receiving a set of data from the at least one knowledge expert resource (the mom recalls facts from her memory to answer the question); and delivering, to the user, a response to the first request for information based on the set of data (the mom answers the question by telling her daughter); wherein delivering the response to the user is performed with a customizable persona (the mom explains the answer patiently in a very soft tone). If the instant independent claims 1 or 21 were patented, a mom would infringe the patent if she answered her daughter's questions nicely in a soft voice tone (the claimed "customizable persona"). Although claims 1 and 21 generally link a judicial exception to a particular technological environment (e.g., a processor and a computer readable medium), these limitations directed to generic computer components are a drafting effort to monopolize the judicial exception. The courts consider a mental process (thinking) that "can be performed in the human mind, or by a human using a pen and paper" to be an abstract idea. CyberSource Corp. v. Retail Decisions, Inc., 654 F.3d 1366, 1372, 99 USPQ2d 1690, 1695 (Fed. Cir. 2011). If a claim recites a limitation that can practically be performed in the human mind, with or without the use of a physical aid such as pen and paper, the limitation falls within the mental processes grouping, and the claim recites an abstract idea. See, e.g., Benson, 409 U.S. at 67, 65, 175 USPQ at 674-75, 674.
If the claimed invention is described as a concept that is performed in the human mind, and applicant is merely claiming that concept performed 1) on a generic computer, 2) in a computer environment, or 3) using a computer as a tool to perform the concept, the claim is considered to recite a mental process. As explained above, the limitations recited in the independent claims could be performed in the human mind or with pen and paper. The dependent claims 2-5 and 22-25 further recite steps related to providing personalized responses. These claim elements, when considered alone and in combination, are considered to be abstract ideas because they are directed to a mental process. The Court concluded that the algorithm could be performed purely mentally even though the claimed procedures "can be carried out in existing computers long in use, no new machinery being necessary." The claims therefore recited an abstract idea, despite the fact that the claimed steps were performed on a computer. 887 F.3d at 1385, 126 USPQ2d at 1504.

Eligibility Step 2A, Prong Two (integrated into a practical application? MPEP 2106.04(d)): Since the claimed invention falls into a judicial exception according to the above analysis (YES branch of Prong One in Step 2A), a claim that is directed to a judicial exception must be evaluated to determine whether it recites additional elements that integrate the judicial exception into a practical application (MPEP 2106.04(II)(A)(2)). Prong Two asks whether the claim recites additional elements that integrate the judicial exception into a practical application. In Prong Two, examiners evaluate whether the claim as a whole integrates the exception into a practical application of that exception. The Court in Gottschalk v. Benson held that simply implementing a mathematical principle on a physical machine, namely a computer, was not a patentable application of that principle.
Accordingly, after determining that a claim recites a judicial exception in Step 2A Prong One, examiners should evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception in Step 2A Prong Two. For a claim reciting a judicial exception to be eligible, the additional elements (if any) in the claim must "transform the nature of the claim" into a patent-eligible application of the judicial exception, Alice Corp., 573 U.S. at 217, 110 USPQ2d at 1981, either at Prong Two or in Step 2B. If there are no additional elements in the claim, then it cannot be eligible.

Eligibility Step 2B (Inventive concept / significantly more consideration; MPEP 2106.05): MPEP §2106.05 describes the Step 2B test to determine whether a claim amounts to significantly more. The second part of the Alice/Mayo test is often referred to as a search for an inventive concept. Alice Corp. Pty. Ltd. v. CLS Bank Int'l, 573 U.S. 208, 217, 110 USPQ2d 1976, 1981 (2014). The Supreme Court has identified a number of considerations as relevant to the evaluation of whether the claimed additional elements amount to an inventive concept (see MPEP §2106.05(I)(A)). It is notable that mere physicality or tangibility of an additional element or elements is not a relevant consideration in Step 2B. As the Supreme Court explained in Alice Corp., mere physical or tangible implementation of an exception is not in itself an inventive concept and does not guarantee eligibility. Considering the limitations recited in the instant claims, the claims do not improve the functioning of a computer, or any other technology or technical field. The claims also do not apply the judicial exception with, or by use of, a particular machine.
The claims also do not effect a transformation or reduction of a particular article to a different state or thing. The claims fail to include a specific limitation other than what is well-understood, routine, conventional activity in the field, or to add unconventional steps that confine the claim to a particular useful application. The recited "processor" / "memory" are well-understood, routine and conventional in the field. Therefore, those recited elements do not amount to significantly more than an abstract idea. Note that simply appending well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception is not enough, e.g., a claim to an abstract idea requiring no more than a generic computer to perform generic computer functions that are well-understood, routine and conventional activities previously known to the industry, as discussed in Alice Corp., 573 U.S. at 225, 110 USPQ2d at 1984. The courts have also found that "adding insignificant extra-solution activity to the judicial exception" or "generally linking the use of the judicial exception to a particular technological environment or field of use" is not enough to qualify as "significantly more." Reviewing the limitations recited in the claims, none of them meet the significantly-more considerations. Therefore, the claims are directed to unpatentable subject matter and are rejected under 35 U.S.C. 101 (MPEP §2106, flowchart, Step 2B, NO branch).

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention. The following is a quotation of 35 U.S.C.
112 (pre-AIA), second paragraph: The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-6 and 21-26 are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention. Independent claims 1 and 21 recite "the user," which has insufficient antecedent basis. Although an antecedent limitation recites "receiving user input," that limitation refers to an input, not a user. Similarly, dependent claims 4-5 and 24-25 recite "the type of persona," which has insufficient antecedent basis; the antecedent limitations never mention any type of persona. Dependent claims 2-6 and 22-26 are rejected because these dependent claims include the limitations of their corresponding independent claim 1 or 21, respectively.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-6 and 21-26 are rejected under 35 U.S.C. §102(a)(1) as being anticipated by Chandrasekaran et al. (US PG Pub. 2019/0266999, referred to as Chandrasekaran). Chandrasekaran discloses a personal virtual assistant (PVA) that generates personalized responses based on the user's detected mood or emotions (Chandrasekaran, [0059], [0063], Fig. 1). For example, the PVA can provide good news when a user is down and suppress bad news when the time is not right (Chandrasekaran, [0005]). The PVA also uses language, volume or tone to complement the user's moods (Chandrasekaran, [0005-0008]).

Regarding claims 1 and 21, Chandrasekaran discloses a virtual agent system and a method of customizing a persona for a virtual agent system (Chandrasekaran, [0063], [0072-0073], Fig. 5, a computer implemented personal virtual assistant system that generates personalized responses based on the user's mood / emotions), comprising: a device processor (Chandrasekaran, [0085], Fig. 7); and a non-transitory computer readable medium having stored thereon instructions, executable by the processor (Chandrasekaran, [0085], Fig. 7), for performing the following steps: receiving user input including a first request for information (Chandrasekaran, [0031-0032], [0079], a virtual personal assistant (VPA) answers the user's various questions); directing a query to at least one knowledge expert resource based on the first request for information (Chandrasekaran, [0012], [0052-0053], Fig.
5, #513, #543); receiving a set of data from the at least one knowledge expert resource (Chandrasekaran, [0079], [0088], the VPA answers the user's questions by searching the web or retrieving information from a data source); and delivering, to the user, a response to the first request for information based on the set of data (Chandrasekaran, [0030], [0045], [0062], providing personalized responses to the user's questions); wherein delivering the response to the user is performed with a customizable persona (Chandrasekaran, [0008], [0029], Fig. 1, providing personalized responses in voice tone / speed / volume according to the user's detected mood or emotion states).

Regarding claims 2 and 22, Chandrasekaran further discloses: the persona may be customized to be one or more of the following: analytical; soft touch ([0029], [0039-0040], relaxed or happy tone); brief; and detailed ([0005], [0029], presenting augmented responses with visual color and animation).

Regarding claims 3 and 23, Chandrasekaran further discloses: the type of persona may be selectable by a user (Chandrasekaran, [0006], user selects language tone or volume).

Regarding claims 4 and 24, Chandrasekaran further discloses: the type of persona is auto-selected based on the user input (Chandrasekaran, [0047-0048], [0066], generating personalized responses based on the user's mood / emotion detected from user inputs) OR user information including at least one of user age, user gender, and user occupation (Chandrasekaran, [0007], based on the user's location or time).

Regarding claims 5 and 25, Chandrasekaran further discloses: the persona is autogenerated based on the user input (Chandrasekaran, [0047-0048], [0066], generating personalized responses based on the user's mood / emotion detected from user inputs).

Regarding claims 6 and 26, Chandrasekaran further discloses: the system utilizes two artificial intelligence (AI) bots running simultaneously (Chandrasekaran, [0072-0078], Fig. 5 and Fig.
6, client-side AI models and server-side AI models); wherein one of the two AI bots generates the persona and the other of the two AI bots executes the persona (Chandrasekaran, [0072-0078], Fig. 5 and Fig. 6).

Claims 1 and 21 are rejected under 35 U.S.C. §102(a)(1) as being anticipated by Nager et al. (US PG Pub. 2023/0376328, applicant-submitted IDS, cited in a PCT search report as an "X" category reference, referred to as Nager). After performing extensive searches, the examiner discovered many prior art references that meet the broadly recited limitations. In addition, applicant submitted a reference cited in a PCT search report for a corresponding PCT application (PCT/US2025/026797). As indicated in the submitted PCT search report, all claims were indicated as "No Novelty" over the Nager reference. The examiner presents another anticipation rejection using the Nager reference to show the breadth of the independent claims.

Nager discloses a personalized virtual agent that generates personalized responses by using a persona based on the user's state inferred from facial expressions, body language and voice tone (Nager, Abstract, [0018], [0077]).

In regard to claims 1 and 21, Nager teaches a virtual agent system, comprising: a device processor (para [0023], "The present invention may be a system, a method, and/or a computer program product at any possible technical detail level of integration. The computer program product may include a computer-readable storage medium (or media) having the computer-readable program instructions thereon for causing a processor to carry out aspects of the present invention."); and a non-transitory computer readable medium having stored thereon instructions (para [0023]), executable by the processor, for performing the following steps: receiving user input including a first request for information (para [0068], "Referring to the drawings, FIGS.
5-6 depict approaches to personalizing user interactions with virtual agents 603 of a user interface 525, that can be executed using one or more computer systems 100 operating within a computing environment 500, 600 and variations thereof. The approaches described herein implement systems, methods and computer program products to personalize user interface 525 interactions between a user 601 operating a user device 305 and a virtual agent 603 using artificial intelligence and/or machine learning to communicate with the user 601 via the user interface 525. Embodiments of computing environments 500, 600 may include one or more computer systems 100 interconnected via a computer network 250. The computer systems 100 connected to the computer network 250 may be specialized systems or devices that may include, but are not limited to, the interconnection of personalized interaction system 501 as shown in FIG. 5-6 (and variations thereof), IoT device(s) 529, user device(s) 305 and/or one or more data sources, including but not limited to, internal data 519, historical data 521, real-time data 523, IoT data 531 and external data source(s) 527."); directing a query to at least one knowledge expert resource based on the first request for information (para [0100], "Referring back to step 719, if upon analyzing the selected behavior or action to determine whether or not the repeat behavior is above a threshold level of a positive reaction based on previous interactions between the user 601 and the virtual agent 603 and/or a threshold period of time has passes since the repeat behavior has passed since the virtual agent has presented the repeat behavior or action to the user, then the algorithm 700 may proceed from step 719 to step 723. During step 723, the AI/machine learning engine 509 may further consider whether to integrate into the selected behavior or action a real-time or near real time event into a conversational workflow.
Embodiments of the AI/machine learning engine 509 can query real-time data 523 and/or external data source(s) 527 for ongoing real-time events and/or real-time events that may have recently concluded. The AI/machine learning engine 509 can rank the events returned by the query based on the correlation of event features to the preferences and known insights about the user 601 . . . ."); receiving a set of data from the at least one knowledge expert resource (para [0101], "In step 731, the user interface 525 displays the generated response from either step 727 (with reference to real-time event(s)) or step 729 (which lacks a reference to a real-time event) in response to the user interaction. The interaction provided by the virtual agent 603 using the selected persona to present the selected behavior or action and may include text, video, audio, images, and/or other UI elements and combinations thereof to interact with the user. In step 733, as the user 601 interacts with the persona of the virtual agent 603 using the selected behaviors and actions presented by the virtual agent 603, the user feedback module 517 can collect feedback about the user 601 in response to the virtual agent 603 . . . ."); and delivering, to the user, a response to the first request for information based on the set of data (paras [0100], [0101]); wherein delivering the response to the user is performed with a customizable persona (para [0070], "As shown in the exemplary embodiment of computing environment 500, the computing environment 500 may include one or more systems, components, and devices connected to the network 250, including one or more user devices 305, IoT devices 529, data sources 527 and a personalized interaction system 501 for customizing virtual agents operating as part of an application or services accessible to the user device 305 through a user interface 525.
As part of the computing environments 500, 600, personalized interaction system 501 may perform functions, processes and tasked associated with selecting and applying one or more personas to virtual agents 603 of the user interface 525, predictively modify behaviors and/or actions of the virtual agents 603 during interactions with the user, integrate real-time events into conversational workflows and provide content such as audio, video, text and/or images into the conversational workflows, improving engagement with users and prevent premature termination of a virtual agent session by the user 601.").

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. The examiner discovered several relevant prior art references that are related to one or more concepts disclosed by the instant application. These references are included in the attached PTO-892 form for completeness of the record.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jialong He, whose telephone number is (571) 270-5359. The examiner can normally be reached Monday – Friday, 8:00AM – 4:30PM, EST. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Pierre Desir, can be reached at (571) 272-7799. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JIALONG HE/
Primary Examiner, Art Unit 2659

Prosecution Timeline

May 01, 2024
Application Filed
Jan 26, 2026
Non-Final Rejection — §101, §102, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597426
METHOD AND SYSTEM FOR GENERATING SYMPATHETIC BACK-CHANNEL SIGNAL
2y 5m to grant Granted Apr 07, 2026
Patent 12579721
Generating video content from user input data
2y 5m to grant Granted Mar 17, 2026
Patent 12581165
SYSTEM AND METHOD FOR AUDIO VISUAL CONTENT CREATION AND PUBLISHING WITHIN A CONTROLLED ENVIRONMENT
2y 5m to grant Granted Mar 17, 2026
Patent 12573360
AUDIOVISUAL CONTENT RENDERING WITH DISPLAY ANIMATION SUGGESTIVE OF GEOLOCATION AT WHICH CONTENT WAS PREVIOUSLY RENDERED
2y 5m to grant Granted Mar 10, 2026
Patent 12561535
ELECTRONIC APPARATUS FOR REAL-TIME CONVERSATION INTERPRETATION AND METHOD FOR CONTROLLING THE SAME
2y 5m to grant Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
81%
Grant Probability
99%
With Interview (+33.1%)
3y 1m
Median Time to Grant
Low
PTA Risk
Based on 911 resolved cases by this examiner. Grant probability derived from career allow rate.
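The headline figures above appear to follow directly from the examiner's career counts. A minimal arithmetic sketch (note: reading "+19.4% vs TC avg" as a percentage-point difference is an assumption; the page does not define the comparison):

```python
# Career allow rate as shown on the dashboard: granted / resolved cases.
granted, resolved = 742, 911
allow_rate = granted / resolved            # ~0.8146
print(f"{allow_rate:.0%}")                 # prints "81%"

# Implied Tech Center average, assuming "+19.4% vs TC avg" means
# percentage points above the TC-wide allow rate.
tc_avg_estimate = allow_rate - 0.194       # ~0.62
print(f"{tc_avg_estimate:.0%}")
```

This reproduces the 81% grant probability shown in the header and suggests a Tech Center average allow rate of roughly 62%, consistent with the "+19.4% vs TC avg" label.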
