DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/17/25 has been entered.
Status of Claims
In view of the communications filed 12/17/25, the following is a non-final Office action. Claims 1-20 are pending in this application and are rejected as follows. The previous rejection has been modified to reflect the claim amendments.
Claim Rejections - 35 USC §101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
With regard to present claims 1-20, each claim recites a series of steps and, therefore, is directed to a process, which is a statutory category.
In addition, the claims recite a judicial exception. The claims as a whole recite a method of organizing human activity. The claimed invention is a method that allows for monitoring participant behavior, evaluating engagement and emotional effectiveness, prompting interaction when engagement is low, using AI to make a decision, and displaying an automated response through a dummy avatar. These are methods of managing interactions between people.
The mere nominal recitation of a generic computer and computer network does not take the claims out of the methods of organizing human activity grouping. Thus, the claims recite an abstract idea.
Furthermore, the judicial exception is not integrated into a practical application. The claims as a whole merely describe how to generally "apply" the concept of monitoring participant behavior, evaluating engagement and emotional effectiveness, prompting interaction when engagement is low, using AI to make a decision, and displaying an automated response through a dummy avatar in a computer environment. The claimed computer components are recited at a high level of generality and are merely invoked as tools to perform an existing process. Simply implementing the abstract idea on a generic computer is not a practical application of the abstract idea.
Finally, the claims do not recite an inventive concept. As noted previously, the claims as a whole merely describe how to generally "apply" the concept of monitoring participant behavior, evaluating engagement and emotional effectiveness, prompting interaction when engagement is low, using AI to make a decision, and displaying an automated response through a dummy avatar in a computer environment. Thus, even when viewed as a whole, nothing in the claims adds significantly more (i.e., an inventive concept) to the abstract idea. The claims are ineligible.
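To illustrate the level of generality at issue, the claimed steps map onto an ordinary monitor-and-respond loop that could run on any generic computer. The following is an illustrative sketch only; every name in it (the threshold, score_engagement, choose_question, the avatar object) is hypothetical and appears in neither the claims nor the record:

```python
# Illustrative sketch only: the claimed steps as a generic
# monitor/evaluate/decide/respond loop. All names are hypothetical.

ENGAGEMENT_THRESHOLD = 0.5  # stand-in for a "predefined criteria"

def score_engagement(behavior_events):
    """Evaluate engagement from observed participant behavior."""
    if not behavior_events:
        return 0.0
    return sum(e.get("attention", 0.0) for e in behavior_events) / len(behavior_events)

def run_session_step(behavior_events, ai_model, dummy_avatar):
    engagement = score_engagement(behavior_events)            # monitor / evaluate
    if engagement < ENGAGEMENT_THRESHOLD:                     # prompt when engagement is low
        question = ai_model.choose_question(behavior_events)  # AI makes the decision
        dummy_avatar.ask(question)                            # automated response via dummy avatar
```

Any general-purpose processor can perform such a loop; nothing in it improves the functioning of a computer or any other technology.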
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102
and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory
basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of
rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same
under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections
set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is
not identically disclosed as set forth in section 102, if the differences between the claimed invention
and the prior art are such that the claimed invention as a whole would have been obvious before the
effective filing date of the claimed invention to a person having ordinary skill in the art to which the
claimed invention pertains. Patentability shall not be negated by the manner in which the invention
was made.
Claims 1-2, 4-10, 12-16, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over TAYA (JP 2024067914 A) in view of LE (US 20240281052 A1), and further in view of GIARD (WO 2013156828 A1).
As per claim 1, TAYA discloses:
wherein the metaverse collaborative event takes place in a three-dimensional space in a virtual world
and the participants and presenter are visible to each other in the three-dimensional space,
(TAYA (JP 2024067914 A): The virtual space here is a two-dimensional or three-dimensional virtual space configured on a network (on a computer) such as the metaverse. Online conferences include conferences, lectures, classes, etc. that are conducted online using images, such as video conferences and online classes. In this embodiment, virtual spaces and online conferences are collectively referred to as "virtual spaces, etc." Also, virtual places and events set as virtual spaces, etc., and virtual rooms and meeting places for online conferences, etc. are referred to as "virtual events." Also, images for displaying virtual events on the screen D of the terminal device 20 are referred to as "event images");
and wherein a sufficiency of questions is based on a predefined criteria, (TAYA (JP 2024067914 A): In the information processing system 1 according to the present embodiment, in a space (such as a virtual space) where people gather online, including a metaverse, a video conference system, a web conference system, etc., each person is represented in the same space, on the same screen. Each person is represented by a real video, a photograph, or an image G(1), G(2), ..., G(n-1), G(n) such as an avatar, icon, or character, superimposed on the event image V. There may be a person who manages the event or answers questions, such as a community manager, facilitator, tutor, teacher, or parent; The information processing system 1 according to this embodiment has a function of asking questions about things that the user does not understand using photos, videos, audio, etc. in the virtual space, selecting those questions, and teaching them. For example, a user who wants to ask a question uses the camera of the terminal device 20 to take a photo or video of a part of a paper document or teaching material about which the user wants to ask a question, and sends it to the server device 10 via the network N. The questions are organized and listed in the server device 10, and can be viewed on each or some of the terminal devices);
determining, by the processor set, a plurality of factors associated with emotional effectiveness by utilizing artificial intelligence (AI) with a model to identify patterns associated with the emotional effectiveness, (TAYA (JP 2024067914 A): when a specific movement or other behavior is performed, or when information such as concentration or emotions is obtained through behavior analysis. This electronic information is accumulated in the memory unit 12 of the server device 10 via the network N. This accumulated information may be data after data analysis. These analyses may use AI such as machine learning and deep learning);
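For orientation, the kind of high-level behavior analysis TAYA describes (deriving concentration or emotion information using machine learning) could be sketched as follows. This is a sketch under stated assumptions, not TAYA's implementation; the features, labels, and classifier choice are all hypothetical:

```python
# Minimal sketch of behavior analysis producing an emotion/concentration
# factor via machine learning, of the kind TAYA describes at a high level.
# The feature set, labels, and model are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per participant: [gaze_on_screen, head_movement, speech_rate]
X_train = np.array([[0.9, 0.1, 0.4], [0.2, 0.7, 0.1], [0.8, 0.2, 0.5]])
y_train = np.array([1, 0, 1])  # 1 = engaged/positive state (labels hypothetical)

clf = LogisticRegression().fit(X_train, y_train)

def emotional_effectiveness_factor(features):
    """Return a probability used as one factor of 'emotional effectiveness'."""
    return float(clf.predict_proba(np.asarray(features).reshape(1, -1))[0, 1])
```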
TAYA does not disclose the following limitations; however, Le discloses:
the AI with the model is trained on historic metaverse collaborative events, (Le [0121] The cloned avatar can then operate and be controlled in the metaverse by the cloned avatar management system based on the user's preferences (e.g., based on characteristics and/or rules selected by the user and/or determined by an AI navigation and control systems module as described with respect to other embodiments described herein). For example, the cloned avatar can operate in the metaverse according to a set of pre-defined rules, goals, and/or activities authorized by the user of the cloned avatar. In some implementations, the set of pre-defined activities can specify how the cloned avatar will represent the interest(s) of the user/owner in a “passive” mode and/or an “interactive” mode, as described in more detail with respect to other embodiments herein);
determining, by a processor set, that participants in a metaverse
collaborative event are not asking sufficient questions to a presenter of the metaverse collaborative
event; in response to the determining, causing, by the processor set, a dummy avatar in the metaverse
collaborative event to ask a question to the presenter during the metaverse collaborative event, the
instructions further allow the processor to connect an avatar to each persona, make each avatar
accessible for dialog with metaverse users in the Metaverse, receive persona interaction data based on
an interacting persona that interacts with a metaverse user, (LE (US 20240281052 A1) discloses in [0030]: In some embodiments, the user device 102 can be configured with a user interface, e.g., a graphical user interface (e.g., displaying a user interface associated with the cloned avatar management application 123); and in [0121]: In some implementations of any of the systems and/or methods described herein, a user may have one cloned avatar present and/or active in the metaverse while the user is inactive relative to the metaverse (e.g., not actively controlling an avatar in the metaverse in real-time... The user can enable a cloned avatar to be present and/or active in the metaverse by activating a cloned avatar management system, such as any of the cloned avatar management systems described herein, and/or by changing a status or other characteristics of the cloned avatar (e.g., via interacting with a dashboard and/or a cloned avatar management application 123) of the cloned avatar management system. The cloned avatar can then operate and be controlled in the metaverse by the cloned avatar management system based on the user's preferences (e.g., based on characteristics and/or rules selected by the user and/or determined by an AI navigation and control systems module as described with respect to other embodiments described herein)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the
above limitations as taught by Le in the systems of TAYA, since the claimed invention is merely a
combination of old elements, and in the combination each element merely would have performed the
same function as it did separately, and one of ordinary skill in the art would have recognized that the
results of the combination were predictable.
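As a non-limiting sketch of the mapped teaching, an avatar operating under pre-defined rules while no human controls it (as Le describes for cloned avatars) might look like the following; the class, rule names, and rendering call are hypothetical:

```python
# Sketch of a "dummy avatar" acting under pre-defined rules, paralleling
# Le's cloned avatar operating per rules authorized by a user. All names
# are hypothetical.

class DummyAvatar:
    def __init__(self, rules):
        self.rules = rules  # pre-defined rules/goals authorized in advance

    def ask(self, question_text):
        if self.rules.get("may_ask_questions", False):
            self.render_speech(question_text)

    def render_speech(self, text):
        print(f"[avatar] {text}")  # stand-in for speaking inside the event

avatar = DummyAvatar(rules={"may_ask_questions": True})
avatar.ask("Could you expand on the last point for the group?")
```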
TAYA does not disclose the following limitations; however, GIARD discloses:
patterns associated with the emotional effectiveness, (GIARD (WO 2013156828 A1) discloses in [0065]: In one embodiment, the subject video is streamed in substantially real-time to the server while being recorded, and automatically analyzed to determine which idle video is to be presented to the subject. For example, audio and video analysis may be applied to the subject video to determine the emotional state of the subject while answering the interview question. For example, if smiles or laughs are detected in the subject video, then an idle video in which the actor/avatar smiles may be presented to the subject. In another example, if it is determined that the subject is sad while answering the interview question, then an idle video in which the actor/avatar appears to be compassionately listening is presented. In a further example, if hesitations, surprises, grimaces, etc. are detected, then an idle video in which the actor/avatar has a neutral attitude may be presented to the subject);
and display an emotion response corresponding to the plurality of factors, (GIARD (WO 2013156828 A1) discloses in [0079]: In one embodiment, the emotional state of the subject is also determined for each subject video.
An expression recognition analysis is performed on the video and/or audio tracks of the subject video in
order to determine the emotional state of the subject while recording the subject video. For example,
laughs may be identified within the audio tracks, smiles may be detected from the video tracks in order
to determine whether the subject is happy or sad. Basic statistics such as agitation, head position,
mouth and eye status, etc. can also be retrieved from the video analysis. The determined emotional
state may then be tagged to the corresponding subject video. In the same or another embodiment, the
emotional state may correspond to an index keyword assigned to the subject video. In a health care
embodiment, the system can be used to detect patterns in emotional state).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the
above limitations as taught by GIARD in the systems of TAYA, since the claimed invention is
merely a combination of old elements, and in the combination each element merely would have
performed the same function as it did separately, and one of ordinary skill in the art would have
recognized that the results of the combination were predictable.
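For illustration, GIARD's selection of a displayed response from a detected emotional state reduces to a simple lookup from emotion label to response clip. The labels, clip names, and player interface below are hypothetical placeholders, not GIARD's implementation:

```python
# Sketch of mapping a detected emotional state to a displayed avatar
# response, paralleling GIARD's idle-video selection. All names hypothetical.

RESPONSE_FOR_EMOTION = {
    "happy": "smiling_idle",
    "sad": "compassionate_listening_idle",
    "hesitant": "neutral_idle",
}

def display_emotion_response(detected_emotion, player):
    """Play the response clip corresponding to the detected emotion."""
    clip = RESPONSE_FOR_EMOTION.get(detected_emotion, "neutral_idle")
    player.play(clip)  # player is a hypothetical avatar/video renderer
```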
As per claim 2, TAYA discloses:
further comprising causing the dummy avatar to display one or more positive influencing factors to the
participants and the presenter, (TAYA: Electronic information is exchanged between the terminal device 20 and the server device 10 through this external device 30. In addition, by analyzing the electronic information, it is possible to obtain information such as the level of concentration, tension, whether the person is positive (independence), and study time).
TAYA does not disclose: identifying types of body language which are linked to the emotional
effectiveness.
However, Le discloses in: [0016] In some embodiments, avatars can represent movement of a user
through simple facial movements (e.g., nods). In some embodiments, avatars can replicate an entire
user and/or their body movements to create a vivid feeling of the user being physically present in the
metaverse.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the
above limitations as taught by Le in the systems of TAYA, since the claimed invention is merely a
combination of old elements, and in the combination each element merely would have performed the
same function as it did separately, and one of ordinary skill in the art would have recognized that the
results of the combination were predictable.
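A minimal sketch of linking body-language types to the emotional effectiveness, as the claim 2 limitation recites, follows; the pose labels and weights are hypothetical and not taken from Le:

```python
# Hypothetical weights linking observed body-language types to a single
# signed emotional-effectiveness factor. Labels and values are illustrative.
BODY_LANGUAGE_WEIGHTS = {"nod": 0.3, "lean_forward": 0.2, "look_away": -0.4}

def body_language_factor(observed_poses):
    """Aggregate observed pose labels into one signed factor."""
    return sum(BODY_LANGUAGE_WEIGHTS.get(pose, 0.0) for pose in observed_poses)

# Example: two nods and one look-away yield a mildly positive factor (~0.2).
print(body_language_factor(["nod", "nod", "look_away"]))
```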
As per claim 4, TAYA does not disclose: further comprising inviting one or more of the
participants to the metaverse collaborative event based on determining that the one or more of the
participants are interested in the metaverse collaborative event,
However, Le discloses: (Le: In some implementations, a cloned avatar can advertise a particular profile for jobs or services, which can be selected by another interested avatar. For example, advertised content (e.g., presented via speech or text) can include "here are my skills, do you have work for me" or "I am a subject matter expert in mathematics for beginners, would you be interested in my services?").
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the
above limitations as taught by Le in the systems of TAYA, since the claimed invention
is merely a combination of old elements, and in the combination each element merely would have
performed the same function as it did separately, and one of ordinary skill in the art would have
recognized that the results of the combination were predictable.
As per claim 5, TAYA does not disclose: further comprising dynamically rearranging the participants to different locations in the three-dimensional space. However, Le discloses: (LE [0126]: In
some implementations, a cloned avatar (e.g., a cloned avatar which was previously the primary avatar of
the user) can be directed and/or controlled by the cloned avatar management system based on the
location and/or activity of the primary avatar. For example, a cloned avatar may be instructed or
controlled to follow a primary avatar as the primary avatar moves around the metaverse (e.g., at a
particular distance, directly behind, and/or directly to the side of the primary avatar).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the
above limitations as taught by Le in the systems of TAYA, since the claimed invention is merely a
combination of old elements, and in the combination each element merely would have performed the
same function as it did separately, and one of ordinary skill in the art would have recognized that the
results of the combination were predictable.
As per claim 6, TAYA discloses:
wherein the determining that the participants in the metaverse collaborative event are not asking
sufficient questions comprises comparing questions asked during the metaverse collaborative event to
questions determined from a knowledge corpus. (The information processing system 1 according to this embodiment has a function of asking questions about things that the user does not understand using photos, videos, audio, etc. in the virtual space, selecting those questions, and teaching them. For example, a user who wants to ask a question uses the camera of the terminal device 20 to take a photo or video of a part of a paper document or teaching material about which the user wants to ask a question, and sends it to the server device 10 via the network N. The questions are organized and listed in the server device 10, and can be viewed on each or some of the terminal devices).
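As an illustration of this limitation, the comparison can be sketched as checking each corpus-derived question for coverage by the questions actually asked. The token-overlap test below is a hypothetical placeholder; a real system would use more robust matching:

```python
# Sketch of comparing questions asked during the event against questions
# determined from a knowledge corpus, per claim 6. The overlap test is
# naive and hypothetical.

def covered(expected_question, asked_questions, min_overlap=0.6):
    """True if any asked question shares enough words with the expected one."""
    expected = set(expected_question.lower().split())
    if not expected:
        return True
    return any(
        len(expected & set(asked.lower().split())) / len(expected) >= min_overlap
        for asked in asked_questions
    )

def uncovered_questions(corpus_questions, asked_questions):
    """Corpus questions that no participant has asked yet."""
    return [q for q in corpus_questions if not covered(q, asked_questions)]
```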
As per claim 7, TAYA discloses:
further comprising creating the knowledge corpus based on analyzing plural different historic metaverse
collaborative events, (The information processing system 1 according to this embodiment has a function of asking questions about things that the user does not understand using photos, videos, audio, etc. in the virtual space, selecting those questions, and teaching them. For example, a user who wants to ask a question uses the camera of the terminal device 20 to take a photo or video of a part of a paper document or teaching material about which the user wants to ask a question, and sends it to the server device 10 via the network N. The questions are organized and listed in the server device 10, and can be viewed on each or some of the terminal devices);
As per claim 8, TAYA discloses:
wherein the determining that the participants in the metaverse collaborative event are not asking
sufficient questions comprises determining that one of the participants does not understand a topic
presented by the presenter during the metaverse collaborative event, (The information processing system 1 according to this embodiment has a function of asking questions about things that the user does not understand using photos, videos, audio, etc. in the virtual space, selecting those questions, and teaching them);
As per claim 9, this claim recites limitations similar to those recited in independent claim 1 and is
therefore rejected for similar reasons.
As per claim 10, TAYA discloses: wherein program instructions are executable to cause the
dummy avatar to display one or more positive influencing factors to the participants and the presenter,
(TAYA: Electronic information is exchanged between the terminal device 20 and the server device 10 through this external device 30. In addition, by analyzing the electronic information, it is possible to obtain information such as the level of concentration, tension, whether the person is positive (independence), and study time);
TAYA does not disclose: identifying types of body language which are linked to the emotional
effectiveness.
However, Le discloses in: [0016] In some embodiments, avatars can represent movement of a user
through simple facial movements (e.g., nods). In some embodiments, avatars can replicate an entire
user and/or their body movements to create a vivid feeling of the user being physically present in the
metaverse.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the
above limitations as taught by Le in the systems of TAYA, since the claimed invention is merely a
combination of old elements, and in the combination each element merely would have performed the
same function as it did separately, and one of ordinary skill in the art would have recognized that the
results of the combination were predictable.
As per claim 12, TAYA does not disclose: wherein program instructions are executable to invite
one or more of the participants to the metaverse collaborative event based on determining that the one
or more of the participants are interested in the metaverse collaborative event.
However, Le discloses: (Le: In some implementations, a cloned avatar can advertise a particular profile for jobs or services, which can be selected by another interested avatar. For example, advertised content (e.g., presented via speech or text) can include "here are my skills, do you have work for me" or "I am a subject matter expert in mathematics for beginners, would you be interested in my services?").
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the above limitations as taught by Le in the systems of TAYA, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
As per claim 13, TAYA does not disclose:
wherein program instructions are executable to dynamically rearrange the participants to different
locations in the three-dimensional space, (LE [0126]: In some implementations, a cloned avatar (e.g., a
cloned avatar which was previously the primary avatar of the user) can be directed and/or controlled by
the cloned avatar management system based on the location and/or activity of the primary avatar. For
example, a cloned avatar may be instructed or controlled to follow a primary avatar as the primary
avatar moves around the metaverse (e.g., at a particular distance, directly behind, and/or directly to the
side of the primary avatar).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the
above limitations as taught by Le in the systems of TAYA, since the claimed invention is merely a
combination of old elements, and in the combination each element merely would have performed the
same function as it did separately, and one of ordinary skill in the art would have recognized that the
results of the combination were predictable.
As per claim 14, TAYA discloses:
wherein the determining that the participants in the metaverse collaborative event are not asking
sufficient questions comprises one of: comparing questions asked during the metaverse collaborative
event to questions determined from a knowledge corpus; and determining that one of the participants
does not understand a topic presented by the presenter during the metaverse collaborative event,
(The information processing system 1 according to this embodiment has a function of asking questions about things that the user does not understand using photos, videos, audio, etc. in the virtual space, selecting those questions, and teaching them);
As per claim 15, this claim recites limitations similar to those disclosed in independent claim 1 and is
therefore rejected for similar reasons.
As per claim 16, TAYA discloses: wherein program instructions are executable to cause the
dummy avatar to display one or more positive influencing factors to the participants and the presenter,
(TAYA: Electronic information is exchanged between the terminal device 20 and the server device 10 through this external device 30. In addition, by analyzing the electronic information, it is possible to obtain information such as the level of concentration, tension, whether the person is positive (independence), and study time).
TAYA does not disclose: identifying types of body language which are linked to the emotional
effectiveness.
However, Le discloses in: [0016] In some embodiments, avatars can represent movement of a user
through simple facial movements (e.g., nods). In some embodiments, avatars can replicate an entire
user and/or their body movements to create a vivid feeling of the user being physically present in the
metaverse.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the
above limitations as taught by Le in the systems of TAYA, since the claimed invention is merely a
combination of old elements, and in the combination each element merely would have performed the
same function as it did separately, and one of ordinary skill in the art would have recognized that the
results of the combination were predictable.
As per claim 18, TAYA does not disclose the following; however, Le discloses:
wherein program instructions are executable to invite one or more of the participants to the metaverse
collaborative event based on determining that the one or more of the participants are interested in the
metaverse collaborative event, (Le: In some implementations, a cloned avatar can advertise a particular
profile for jobs or services, which can be selected by another interested avatar. For example, advertised
content (e.g., presented via speech or text) can include "here are my skills, do you have work for me" or
"lama subject matter expert in mathematics for beginners, would you be interested in my services?").
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the
above limitations as taught by Le in the systems of TAYA, since the claimed invention is merely a
combination of old elements, and in the combination each element merely would have performed the
same function as it did separately, and one of ordinary skill in the art would have recognized that the
results of the combination were predictable.
As per claim 19, TAYA does not disclose the following; however, Le discloses:
wherein program instructions are executable to dynamically rearrange the participants to different
locations in the three-dimensional space, (LE [0126]: In some implementations, a cloned avatar (e.g., a
cloned avatar which was previously the primary avatar of the user) can be directed and/or controlled by
the cloned avatar management system based on the location and/or activity of the primary avatar. For
example, a cloned avatar may be instructed or controlled to follow a primary avatar as the primary
avatar moves around the metaverse (e.g., at a particular distance, directly behind, and/or directly to the
side of the primary avatar).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the
above limitations as taught by Le in the systems of TAYA, since the claimed invention is merely a
combination of old elements, and in the combination each element merely would have performed the
same function as it did separately, and one of ordinary skill in the art would have recognized that the
results of the combination were predictable.
As per claim 20, TAYA discloses:
wherein the determining that the participants in the metaverse collaborative event are not asking
sufficient questions comprises one of: comparing questions asked during the metaverse collaborative
event to questions determined from a knowledge corpus; and determining that one of the participants
does not understand a topic presented by the presenter during the metaverse collaborative event,
(The information processing system 1 according to this embodiment has a function of asking questions about things that the user does not understand using photos, videos, audio, etc. in the virtual space, selecting those questions, and teaching them).
Claims 3, 11, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over TAYA (JP 2024067914 A) in view of LE (US 20240281052 A1), further in view of GIARD (WO 2013156828 A1), and further in view of Zavesky et al. (US 20230410159 A1).
As per claim 3, TAYA does not disclose the following limitations; however, Zavesky discloses:
further comprising masking negative influencing factors of one or more of the participants, (Zavesky (US
20230410159 A1): [0198] In one or more embodiments, features that are positively recited can also be
negatively recited and excluded from the embodiment with or without replacement by another
structural and/or functional feature);
identifying positive influencing factors that cause the metaverse collaborative event to improve the
emotional effectiveness, (Zavesky (US 20230410159 A1): [0087] As yet another example, the immersion
evaluation platform 202 may identify repeated visits to the metaverse object as an indication of positive
exposure to the metaverse object, and factor this finding in its generation of the personalized
recommendation or review of the metaverse object for the user).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the
above limitations as taught by Zavesky in the systems of TAYA, since the claimed invention is
merely a combination of old elements, and in the combination each element merely would have
performed the same function as it did separately, and one of ordinary skill in the art would have
recognized that the results of the combination were predictable.
As per claim 11, TAYA does not disclose the following limitations; however, Zavesky discloses:
further comprising masking negative influencing factors of one or more of the participants, (Zavesky (US
20230410159 A1): [0198] In one or more embodiments, features that are positively recited can also be
negatively recited and excluded from the embodiment with or without replacement by another
structural and/or functional feature);
identifying positive influencing factors that cause the metaverse collaborative event to improve the
emotional effectiveness, (Zavesky (US 20230410159 A1): [0087] As yet another example, the immersion
evaluation platform 202 may identify repeated visits to the metaverse object as an indication of positive
exposure to the metaverse object, and factor this finding in its generation of the personalized
recommendation or review of the metaverse object for the user).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the
above limitations as taught by Zavesky in the systems of TAYA, since the claimed invention is
merely a combination of old elements, and in the combination each element merely would have
performed the same function as it did separately, and one of ordinary skill in the art would have
recognized that the results of the combination were predictable.
As per claim 17, TAYA does not disclose the following limitations; however, Zavesky discloses:
further comprising masking negative influencing factors of one or more of the participants, (Zavesky (US
20230410159 A1): [0198] In one or more embodiments, features that are positively recited can also be
negatively recited and excluded from the embodiment with or without replacement by another
structural and/or functional feature);
identifying positive influencing factors that cause the metaverse collaborative event to improve the
emotional effectiveness, (Zavesky (US 20230410159 A1): [0087] As yet another example, the immersion
evaluation platform 202 may identify repeated visits to the metaverse object as an indication of positive
exposure to the metaverse object, and factor this finding in its generation of the personalized
recommendation or review of the metaverse object for the user).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the
above limitations as taught by Zavesky in the systems of TAYA, since the claimed invention is
merely a combination of old elements, and in the combination each element merely would have
performed the same function as it did separately, and one of ordinary skill in the art would have
recognized that the results of the combination were predictable.
Prior Art Considered
The following prior art reference was considered by the Examiner but has not been relied upon in the present rejection:
Mandel et al. (US 20150046375 A1)
Response to Arguments
Applicant's arguments filed 12/17/25 have been fully considered but they are not persuasive.
With regard to the 101 rejection, Applicant disagrees with the characterization of the claimed invention as directed to accessing, analyzing, updating, and communicating electronic shipping records. However, the word "shipping" was inadvertently included in that statement. As now shown above in the present Office action, the claims are directed to monitoring participant behavior, evaluating engagement and emotional effectiveness, prompting interaction when engagement is low, using AI to make a decision, and displaying an automated response through a dummy avatar in a computer environment, and thus fall within the methods of organizing human activity grouping.
Applicant further argues that the features of claim 1 are not directed to or related to the alleged judicial exception of steps "that allow for access, analysis, update, and communication of electronic shipping records," and that the Examiner has therefore not made a prima facie case of patent ineligibility under the substantive law. However, as described in the preceding paragraph, the claims are directed to monitoring participant behavior, evaluating engagement and emotional effectiveness, prompting interaction when engagement is low, using AI to make a decision, and displaying an automated response through a dummy avatar. These steps are methods of organizing human activity. In addition, the claims use generic processors, AI models, and avatars to automate a known social interaction, namely asking questions to increase engagement. Using AI to perform an abstract idea faster or automatically is not an inventive concept.
Applicant's arguments, see arguments/remarks filed 12/17/25, with respect to the prior art rejection(s) of the claims have been fully considered and are persuasive. Therefore, that rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of TAYA (JP 2024067914 A).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Akiba Robinson whose telephone number is 571-272-6734 and email is Akiba.Robinsonboyce@USPTO.gov. The examiner can normally be reached on Monday-Thursday 6:30am-4:30pm.
If attempts to reach the Examiner by telephone are unsuccessful, the Examiner's supervisor, Resha Desai can be reached on 571-270-7792. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Any inquiry of a general nature or relating to the status of this application or proceeding should be directed to the receptionist whose telephone number is (703) 305-3900.
January 22, 2026
/AKIBA K ROBINSON/Primary Examiner, Art Unit 3628