DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
This final Office action is in response to the arguments/amendments filed 10/23/2025. Claims 1, 2, 7, and 8 have been amended. Claims 3-6 and 9-12 were previously cancelled. Claims 1, 2, 7, and 8 are currently pending and have been examined below.
Claim Rejections – 35 U.S.C. 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 2, 7, and 8 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Per step 1 of the eligibility analysis set forth in MPEP § 2106, subsection III, the claims are directed towards a process, machine, or manufacture.
Per step 2A Prong One, independent claim 1 recites specific limitations which fall within at least one of the groupings of abstract ideas enumerated in MPEP 2106.04(a)(2) as follows:
performing an emotion analysis of a textual answer in response to the subjective question to compute a valence value indicating a polarity of the textual answer as positive or negative and an arousal value indicating an intensity of the textual answer as high or low;
selecting a graphicon corresponding to the valence value and the arousal value;
wherein when the valence value is positive and the arousal value is high, the graphicon corresponding to a preset high positive is provided, when the valence value is positive and the arousal value is low, the graphicon corresponding to a preset low positive is provided, when the valence value is negative and the arousal value is high, the graphicon corresponding to a preset high negative is provided, or when the valence value is negative and the arousal value is low, the graphicon corresponding to a preset low negative is provided;
displaying the graphicon to the user;
collects user information including a user's residential area, age and gender;
wherein standards for survey results are set through a collection of the user information.
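Purely for illustration of the quadrant mapping recited in the limitations above (this sketch is not part of the record, and all names in it are hypothetical), the recited correspondence between valence/arousal values and preset graphicons amounts to a two-key lookup:

```python
# Illustrative sketch of the claimed valence/arousal quadrant mapping.
# All identifiers are hypothetical; the application does not disclose code.
GRAPHICON_TABLE = {
    ("positive", "high"): "preset_high_positive",  # e.g., an excited emoji
    ("positive", "low"):  "preset_low_positive",   # e.g., a content emoji
    ("negative", "high"): "preset_high_negative",  # e.g., an angry emoji
    ("negative", "low"):  "preset_low_negative",   # e.g., a sad emoji
}

def select_graphicon(valence: str, arousal: str) -> str:
    """Select the preset graphicon for a valence/arousal quadrant."""
    return GRAPHICON_TABLE[(valence, arousal)]
```

As the sketch suggests, the selection step reduces to reading one of four preset entries from a table.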
As noted above, these limitations fall within at least one of the groupings of abstract ideas enumerated in MPEP 2106.04(a)(2). Specifically, these limitations fall within the grouping Certain Methods of Organizing Human Activity (i.e., advertising, marketing or sales activities or behaviors; business relations; managing personal behavior or relationships or interactions between people, including social activities, teaching, and following rules or instructions). That is, the limitations recited above describe managing personal behavior or relationships or interactions between people (i.e., providing a subjective question to a user, receiving an answer in response to the question, and analyzing the answer to provide a graphicon). Examiner notes that, per MPEP 2106.04(a)(2)(II), activity between a person and a computer may fall within the certain methods of organizing human activity grouping. Moreover, the recited limitations, including selecting a graphicon (e.g., selecting an emoji such as a smiley face based on an emotion analysis), are steps that can be performed mentally or with pen and paper. For example, a human being can mentally select (or draw on paper) a happy face in response to an answer to a subjective survey question having a positive polarity and high intensity. Therefore, the claims also recite a mental process.
Per step 2A Prong Two, the Examiner finds that the judicial exception is not integrated into a practical application. Claim 1 recites the additional limitations of:
displaying a subjective question in a chatbot dialogue interface of the user terminal;
[a textual answer] inputted to the user terminal;
[performing an emotion analysis] using natural language processing [to compute a valence value . . .];
[selecting a graphicon] from a database table;
a database table to store graphicons and wherein the graphicon is one of a plurality of graphicons comprising emoticons, emojis, stickers and photos mapped to preset human paralinguistic and nonverbal states, wherein the plurality of graphicons is stored in the database table, and wherein the plurality of graphicons are classified by visual, auditory and tactile expression methods based on respective valence values and arousal values;
[displaying the graphicon] to the user terminal for display in the chatbot dialogue interface to enhance the emotional expressiveness of the chatbot; and
the chatbot server [collects user information].
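For illustration only (the sketch below is not from the application; column names and values are hypothetical), the recited "database table" storing graphicons classified by expression method and valence/arousal values can be modeled as a generic lookup table using Python's standard sqlite3 module:

```python
import sqlite3

# Hypothetical sketch of the recited graphicon database table.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE graphicons (
        id INTEGER PRIMARY KEY,
        kind TEXT,               -- emoticon, emoji, sticker, or photo
        expression_method TEXT,  -- visual, auditory, or tactile
        valence TEXT,            -- positive or negative
        arousal TEXT,            -- high or low
        asset TEXT               -- e.g., an emoji name or file path
    )
""")
conn.execute(
    "INSERT INTO graphicons (kind, expression_method, valence, arousal, asset) "
    "VALUES ('emoji', 'visual', 'positive', 'high', 'grinning-face')"
)
# Selecting a graphicon is a generic parameterized lookup by quadrant.
row = conn.execute(
    "SELECT asset FROM graphicons WHERE valence = ? AND arousal = ?",
    ("positive", "high"),
).fetchone()
```

The sketch illustrates why the rejection characterizes these limitations as generic data storage and retrieval: any conventional table keyed on the two values suffices.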
The additional limitations, when viewed individually and when viewed as an ordered combination, and pursuant to the broadest reasonable interpretation, do not integrate the abstract idea into a practical application because each of the additional elements is recited at a high level of generality, implementing the abstract idea on a computer (i.e., "apply it") or generally linking the use of the judicial exception to a particular technological environment. Specifically:
With respect to [selecting a graphicon] from a database table; a database table to store graphicons; and wherein the graphicon is one of a plurality of graphicons comprising emoticons, emojis, stickers and photos mapped to preset human paralinguistic and nonverbal states, wherein the plurality of graphicons is stored in the database table, and wherein the plurality of graphicons are classified by visual, auditory and tactile expression methods based on respective valence values and arousal values, Examiner notes that these limitations are recited at a high level of generality and merely recite that various types of graphicons are stored in a table according to various categories (i.e., visual, auditory, tactile) and can be selected from the table. Simply storing this data in a generic database for selection merely generally links the abstract idea to a particular technological environment or utilizes the computer as a tool to perform the abstract idea.
With respect to [performing an emotion analysis] using natural language processing [to compute a valence value . . .], Examiner respectfully notes that the claimed use of natural language processing is at a high level of generality. Specifically, only paragraphs [0066] and [0086] discuss natural language processing, where the paragraphs recite "natural language processing-based emotion analysis may be performed." At this level of generality, the use of natural language processing is merely recited at the "apply it" level and does not provide a practical application. See Recentive Analytics, Inc. v. Fox Corp. et al., No. 2023-2437, slip op. at 18 (Fed. Cir. Apr. 18, 2025), holding that claims "that do no more than claim the application of generic machine learning to new data environments, without disclosing improvements to the machine learning models to be applied, are patent ineligible under § 101." Here, Examiner takes the position that applying generic "natural language processing" is the mere application of generic machine learning to a new data environment. Because no improvement to the underlying machine learning models is disclosed, this limitation does not integrate the abstract idea into a practical application.
With respect to displaying a subjective question in a chatbot dialogue interface of the user terminal; [a textual answer] inputted to the user terminal; [displaying the graphicon] to the user terminal for display in the chatbot dialogue interface to enhance the emotional expressiveness of the chatbot; and the chatbot server [collects user information], Examiner notes that the chatbot server and user terminal are recited at a high level of generality and merely provide a technological environment to implement the abstract idea. Additionally, paragraph [0030] of the published specification states that "[t]he 'user terminal' mentioned below may be implemented as a computer or portable terminal that may connect to a server or other terminal through a network." Displaying information in a generic chatbot dialogue interface (e.g., a text interface); receiving input to the user terminal (e.g., receiving text from the user); and displaying a selected graphicon in the interface (e.g., a smile emoji) merely generally links the abstract idea to a particular technological environment or merely utilizes a computer as a tool to perform the abstract idea.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements, when considered both individually and as an ordered combination, do not amount to significantly more than the abstract idea. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements are recited at a high level of generality, implementing the abstract idea on a computer (i.e., "apply it"); generally linking the use of the judicial exception to a particular technological environment; or constituting insignificant extra-solution activity. The same analysis applies here in Step 2B; i.e., mere instructions to apply an exception in a particular technological environment cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept in Step 2B.
Alice Corp. also establishes that the same analysis should be used for all categories of claims (e.g., product and process claims). Therefore, independent system claim 7 is also ineligible subject matter under 35 U.S.C. 101 for substantially the same reasons as independent method claim 1. The system recited in claim 7 adds nothing of substance to the underlying abstract idea. At best, the components in independent claim 7 (i.e., a user terminal and a generic chatbot server communicatively coupled to the user terminal, including a communication module, processor and memory) merely provide an environment to implement the abstract idea.
Dependent claims 2 and 8 are rejected on a similar rationale to the claims upon which they depend. Specifically, each of the dependent claims merely further narrows the abstract idea.
Response to Arguments
35 U.S.C. 101
Applicant's arguments, see pages 5-11, filed 3/17/2025 with respect to the rejection(s) of claims 1, 2, 7, and 8 under 35 U.S.C. 101 have been fully considered but are not persuasive.
First, Applicant argues that:
Applicant respectfully submits that currently amended claim 1 is analogous to the claims deemed patent eligible under 35 USC § 101 in Example 23 of the Subject Matter Eligibility Guidance, Examples, and Training provided by the Office. In Example 23, the invention relates to a graphical user interface (GUI). The inventor improves upon conventional GUIs on a user terminal by dynamically relocating obscured textual information of an underlying window to become automatically viewable to the user. In a similar manner, the claimed invention relates to chatbot exchanges realized via the GUI of the user terminal. The claimed invention improves upon conventional chatbot exchanges, which mainly rely on textual means, through employing graphicons, which are graphical means of communication such as emoticons, emoji, stickers, GIFs, images, videos, etc. (remarks page 7).
Examiner respectfully disagrees that the present claims are analogous to Example 23. As noted by Applicant, in Example 23, the claimed method related to addressing a problem with overlapping windows within a graphical user interface. Specifically, the claim recited dynamically relocating textual information within a window displayed in a graphical user interface based upon a detected overlap condition, which is a technical improvement to the user interface. In contrast, the present claim recites a generic "chatbot dialogue interface" (e.g., a text-based interface to receive text input by the user and display text and graphicons such as emojis). The claim recites selecting a graphicon to display in the interface based on an emotion analysis of text inputted by the user. Examiner takes the position that displaying text and a selected emoji in an interface in response to user input merely describes a generic user interface displaying text and emojis rather than an improvement to a user interface analogous to the improvement in Example 23.
Second, Applicant argues that the currently recited features cannot reasonably be interpreted to be a mental process and/or a method of organizing human activity under Step 2A, Prong One (remarks page 9).
Examiner respectfully disagrees and replies that the limitations recited above describe managing personal behavior or relationships or interactions between people (i.e., providing a subjective question to a user, receiving an answer in response to the question, and analyzing the answer to provide a graphicon). Examiner notes that, per MPEP 2106.04(a)(2)(II), activity between a person and a computer may fall within the certain methods of organizing human activity grouping. Moreover, the recited limitations, including selecting a graphicon (e.g., selecting an emoji such as a smiley face based on an emotion analysis), are steps that can be performed mentally or with pen and paper. For example, a human being can mentally select (or draw on paper) a happy face in response to an answer to a subjective survey question having a positive polarity and high intensity. Therefore, the claims also recite a mental process.
Third, Applicant argues that:
Currently amended claim 1 is directed to technical field of graphical user interface, and more specifically chatbot display technology for effective user interface. For similar reasons that claim 1 of Example 37 was found eligible, currently amended claim 1 provides significantly more than the abstract idea of generating graphicons through (1) performing an emotion analysis of a textual answer inputted to the user terminal in response to the subjective question using a natural language processing to compute a valence value indicating a polarity of the textual answer as positive or negative and an arousal value indicating an intensity of the textual answer as high or low, (2) selecting a graphicon corresponding to the valence value and the arousal value from a database table and (3) displaying the graphicon to the user terminal for display in the chatbot dialogue interface to enhance the emotional expressiveness of the chatbot, in order to solve a problem faced by the conventional chatbot technology (remarks page 10).
The limitations in (1), (2) and (3) above in combination recite the generation and provision of graphicons to the user terminal when user responses are solicited via the chatbot of the user terminal. As explained in the specification, namely in paragraphs 5, 6 and 7, at the time of this invention, the provision of graphicons to the user terminal in order to facilitate more responses via the user terminal was not well understood, routine, conventional activity to those in the GUI field (remarks page 11).
Examiner respectfully disagrees and replies that the broadest reasonable interpretation includes selecting a generic graphicon (e.g., an emoji such as a smiley face to display) based on the emotion analysis of the text inputted by the user. The emotion analysis is recited at a high level of generality as being performed "using a natural language processing." At this level of generality, simply analyzing user text to select an emoji to display in response describes a task that could be performed mentally by a human. The use of generic natural language processing to perform this task, without additional technical detail, does not solve a technical problem faced by conventional chatbot technology. Only paragraphs [0066] and [0086] of Applicant's published specification discuss natural language processing, where the paragraphs recite "natural language processing-based emotion analysis may be performed." At this level of generality, the use of natural language processing is merely recited at the "apply it" level and does not provide a practical application. See Recentive Analytics, Inc. v. Fox Corp. et al., No. 2023-2437, slip op. at 18 (Fed. Cir. Apr. 18, 2025), holding that claims "that do no more than claim the application of generic machine learning to new data environments, without disclosing improvements to the machine learning models to be applied, are patent ineligible under § 101." Here, Examiner takes the position that applying generic "natural language processing" is the mere application of generic machine learning to a new data environment. Because no improvement to the underlying machine learning models is disclosed, this limitation does not integrate the abstract idea into a practical application. Moreover, there is no technical improvement to the graphical interface itself because the interface is merely displaying text and selected graphicons.
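To illustrate the level of generality at issue (a hypothetical sketch, not Applicant's disclosed method; the word lists and logic are invented for illustration), a valence/arousal determination of the kind recited can be approximated with a simple word-list heuristic:

```python
# Toy lexicon-based "emotion analysis" (hypothetical; illustrates only
# the generic level at which the claim recites the computation).
POSITIVE = {"great", "love", "happy", "good"}
NEGATIVE = {"bad", "hate", "awful", "angry"}
INTENSIFIERS = {"very", "extremely", "really", "so"}

def analyze(text: str) -> tuple[str, str]:
    """Return (valence, arousal) for a textual answer."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    valence = "positive" if score >= 0 else "negative"
    arousal = "high" if any(w in INTENSIFIERS for w in words) else "low"
    return valence, arousal
```

The sketch is the sort of generic polarity/intensity classification a person could perform mentally; nothing in it reflects an improvement to an underlying model.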
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US Patent Application Publication Number 20170060354 ("Luo") teaches associating emoticons with values representing intensity of feelings.
US Patent Application Publication Number 20220398381 ("Sandridge") teaches using sentiment analysis to determine negative or positive sentiment polarity.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALLAN J WOODWORTH, II whose telephone number is (571)272-6904. The examiner can normally be reached Mon-Fri 9:00-5:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ilana Spar can be reached on (571) 270-7537. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ALLAN J WOODWORTH, II/Primary Examiner, Art Unit 3622