DETAILED ACTION
Applicant’s Amendment filed on January 26, 2026 has been reviewed.
Claims 3-4 and 13-14 were cancelled in the previous amendment.
Claims 8 and 18 are cancelled in the amendment.
Claims 1, 11 and 20 are amended in the amendment.
Claims 1-2, 5-7, 9-12, 15-17 and 19-20 have been examined.
Continued Examination under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on January 26, 2026 has been entered.
Claim Objections
Claim 1 is objected to because of the following informalities:
In claim 1, at lines 30-38, it is unclear whether the limitations “wherein after the displaying a target emotion mark, the method further comprises: receiving…; and in response to the first input, displaying…; wherein an emotion type…” are further steps of “displaying a target emotion mark…,” of “displaying the target emotion mark in a first region…,” or of “displaying the target emotion mark on a current interface…” For the purpose of examination, these limitations are interpreted as further steps of “displaying a target emotion mark…”
Appropriate correction is required.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-2, 5-7, 9-12, 15-17 and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Chembula (US 2015/0332088 A1) in view of Miao (CN 109656440 A), hereinafter referred to as Miao, and further in view of An et al. (WO 2016197455 A1), hereinafter referred to as An.
With respect to claim 1, Chembula teaches An unread message prompt method, wherein the method comprises:
receiving a first message (User device 230 generate the message based on the user inputting text or images, into user device 230, to be included in the message, para. 0062);
displaying a target emotion mark, wherein the target emotion mark is used to indicate an emotion of a first contact towards the first message, and the first contact is a contact who sends the first message (process 600 include receiving an input from a user and/or an image of the user used for adding an emoticon to the message and/or for setting a display option of the message, para. 0063); wherein
the target emotion mark is determined based on target information (identifying an emotion based on the facial expression, and determine the emoticon to add to the message based on the emotion; user device 230 detect a facial expression to automatically determine an emoticon to add to a message when a message is being generated such as automatically when a user is inputting text for a message, para. 0068); and the target information comprises at least one of following: an image of the first contact captured by an electronic device of the first contact (when a message is being generated (e.g., automatically when a user is inputting text for a message); the camera generate the image and user device 230 obtain the image from the camera, para. 0065), or an unread duration of the first message, the unread duration being a duration between a time at which the first message is received and a current time; and
wherein after the displaying a target emotion mark, the method further comprises:
receiving a first input for the target emotion mark (receiving an input from a user and/or an image of the user used for adding an emoticon to the message and/or for setting a display option of the message, para. 0063); and
in response to the first input, displaying the first message and displaying N sets of second messages in one-to-one correspondence with N second contacts, N being a positive integer (identifying an emotion based on the facial expression, and determine the emoticon to add to the message based on the emotion; user device 230 detect a facial expression to automatically determine an emoticon to add to a message when a message is being generated such as automatically when a user is inputting text for a message, para. 0068);
Chembula does not explicitly teach
the displaying a target emotion mark comprises:
displaying the target emotion mark in a first region, the first region being at least one of following: at least a partial region displaying a target icon on a desktop, or a region that is on the desktop and is adjacent to the region at which the target icon is displayed, wherein the target icon is an icon of a target application, and the first message is received via the target application;
or
displaying the target emotion mark on a current interface in a form of a pop-up window; wherein
after the displaying the target emotion mark on a current interface in a form of a pop-up window, the method further comprises:
displaying a target emotion mark for indicating a changed emotion when the emotion of the first contact towards the first message changes after the display of the target emotion mark has been canceled;
However, Miao teaches
the displaying a target emotion mark comprises:
displaying the target emotion mark in a first region (a corner mark is displayed on top of the currently displayed icon of the application; the corner mark is used to display an expression image, such as a smiling face or a crying face, instead of a number; the expression image attracts the user to open the application via the corner mark, increasing the frequency of use and user stickiness of the application, page 33: para. 7), the first region being at least one of following: at least a partial region displaying a target icon on a desktop (a corner mark is displayed on top of the currently displayed icon of the application; the corner mark is used to display an expression image, such as a smiling face or a crying face, instead of a number; the expression image attracts the user to open the application via the corner mark, increasing the frequency of use and user stickiness of the application, page 33: para. 7), or a region that is on the desktop and is adjacent to the region at which the target icon is displayed, wherein the target icon is an icon of a target application, and the first message is received via the target application (a corner mark is displayed on top of the currently displayed icon of the application; the corner mark is used to display an expression image, such as a smiling face or a crying face, instead of a number; the expression image attracts the user to open the application via the corner mark, increasing the frequency of use and user stickiness of the application, page 33: para. 7) in order to attract the user to open the application as taught by Miao (page 33: para. 7);
Therefore, based on Chembula in view of Miao, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of Miao to the method of Chembula in order to attract the user to open the application as taught by Miao (page 33: para. 7).
or
displaying the target emotion mark on a current interface in a form of a pop-up window; wherein
after the displaying the target emotion mark on a current interface in a form of a pop-up window, the method further comprises:
displaying a target emotion mark for indicating a changed emotion when the emotion of the first contact towards the first message changes after the display of the target emotion mark has been canceled;
Chembula in view of Miao does not explicitly teach
wherein
an emotion type of the emotion of the first contact towards the first message is a first type, N second types and the first type are a same emotion type, and each second type is an emotion type of an emotion of one second contact towards a set of second messages corresponding to the one second contact.
However, An teaches
wherein
an emotion type of the emotion of the first contact towards the first message is a first type (user device 230 automatically create an emoticon in real time, without user input, based on the image data received while the message is being generated; user device 230 perform process 400 while text for messages is being input by a user, para. 0076), N second types and the first type are a same emotion type, and each second type is an emotion type of an emotion of one second contact towards a set of second messages corresponding to the one second contact (starting from one emoticon, the emoticons located after it are sequentially displayed at a predetermined time interval, page 3, lines 9-12; determining a current waiting duration, comparing the current waiting duration with a plurality of threshold time periods recorded in a pre-stored database, and determining the threshold time period corresponding to the current waiting duration, wherein an emoticon picture corresponding to each threshold time period is also recorded in the pre-stored database; the emoticon picture corresponding to that threshold time period is determined as the emoticon image to be displayed, page 1, lines 47-53; also see page 4, lines 42-49) in order to facilitate the user’s perception of changes in the other party’s emotions and to avoid unpleasant communication between the two parties, as taught by An (page 3, lines 44-50).
Therefore, based on Chembula in view of Miao, and further in view of An, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of An to the method of Chembula in view of Miao in order to facilitate the user’s perception of changes in the other party’s emotions and to avoid unpleasant communication between the two parties, as taught by An (page 3, lines 44-50).
With respect to claim 2, Chembula in view of Miao teaches The method according to claim 1 as described above,
Chembula in view of Miao does not explicitly teach wherein the target information comprises the unread duration of the first message; wherein
different unread durations correspond to different emotions of the first contact, and the different emotions are indicated by different target emotion marks.
However, An teaches wherein the target information comprises the unread duration of the first message (determining a current waiting duration, comparing the current waiting duration with a plurality of threshold time periods recorded in a pre-stored database, and determining the threshold time period corresponding to the current waiting duration, wherein an emoticon picture corresponding to each threshold time period is also recorded in the pre-stored database; the emoticon picture corresponding to that threshold time period is determined as the emoticon image to be displayed, page 1, lines 47-53; also see page 4, lines 42-49); wherein
different unread durations correspond to different emotions of the first contact, and the different emotions are indicated by different target emotion marks (determining a current waiting duration, comparing the current waiting duration with a plurality of threshold time periods recorded in a pre-stored database, and determining the threshold time period corresponding to the current waiting duration, wherein an emoticon picture corresponding to each threshold time period is also recorded in the pre-stored database; the emoticon picture corresponding to that threshold time period is determined as the emoticon image to be displayed, page 1, lines 47-53; also see page 4, lines 42-49) in order to facilitate the user’s perception of changes in the other party’s emotions and to avoid unpleasant communication between the two parties, as taught by An (page 3, lines 44-50).
Therefore, based on Chembula in view of Miao, and further in view of An, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of An to the method of Chembula in view of Miao in order to facilitate the user’s perception of changes in the other party’s emotions and to avoid unpleasant communication between the two parties, as taught by An (page 3, lines 44-50).
With respect to claim 5, Chembula teaches The method according to claim 1, wherein the target emotion mark comprises at least one of following:
an image corresponding to the first contact (when a message is being generated (e.g., automatically when a user is inputting text for a message); the camera generate the image and user device 230 obtain the image from the camera, para. 0065), an image obtained after expression changes in the image corresponding to the first contact, or a first mark, wherein the first mark is used to indicate an emotion (detecting a facial expression of the user while the user is inputting the message; the user device determine an emoticon, created based on an image of the user's face, that corresponds to the detected facial expression and add the emoticon to the message, para. 0013; user device 230 apply the display option to the message by setting a color, a font, a size, an emphasis, etc. of the text in the message and/or setting a color and/or pattern of a background of the message based on the display option determined, para. 0081).
With respect to claim 6, Chembula teaches The method according to claim 1, wherein the first contact corresponds to K unread messages (identifying an emotion based on the facial expression, and determine the emoticon to add to the message based on the emotion; user device 230 detect a facial expression to automatically determine an emoticon to add to a message when a message is being generated such as automatically when a user is inputting text for a message, para. 0068), wherein the first message is comprised in the K unread messages (user device 230 automatically create an emoticon in real time, without user input, based on the image data received while the message is being generated, user device 230 perform process 400 while text for messages is being input by a user, para. 0076), an emotion of the first contact towards each unread message is indicated through an emotion mark, respectively, and K is a positive integer (user device 230 perform process 400 while text for messages is being input by a user, an emoticon that accurately reflects the user's facial expression at the time the message is generated may be added to the message, para. 0076); and
Chembula in view of Miao does not explicitly teach the displaying a target emotion mark comprises:
displaying K emotion marks in sequence, wherein the K emotion marks are in one-to-one correspondence with the K unread messages, and the K emotion marks comprise the target emotion mark.
However, An teaches the displaying a target emotion mark comprises:
displaying K emotion marks in sequence, wherein the K emotion marks are in one-to-one correspondence with the K unread messages, and the K emotion marks comprise the target emotion mark (starting from one emoticon, the emoticons located after it are sequentially displayed at a predetermined time interval, page 3, lines 9-12) in order to facilitate the user’s perception of changes in the other party’s emotions and to avoid unpleasant communication between the two parties, as taught by An (page 3, lines 44-50).
Therefore, based on Chembula in view of Miao, and further in view of An, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of An to the method of Chembula in view of Miao in order to facilitate the user’s perception of changes in the other party’s emotions and to avoid unpleasant communication between the two parties, as taught by An (page 3, lines 44-50).
With respect to claim 7, Chembula teaches The method according to claim 1, wherein the first contact corresponds to K unread messages (identifying an emotion based on the facial expression, and determine the emoticon to add to the message based on the emotion; user device 230 detect a facial expression to automatically determine an emoticon to add to a message when a message is being generated such as automatically when a user is inputting text for a message, para. 0068), the first message is a message with an earliest reception time in the K unread messages, and K is a positive integer (user device 230 automatically create an emoticon in real time, without user input, based on the image data received while the message is being generated, user device 230 perform process 400 while text for messages is being input by a user, para. 0076); wherein
emotions of the first contact towards the K unread messages are all indicated through a target emotion mark (user device 230 perform process 400 while text for messages is being input by a user, an emoticon that accurately reflects the user's facial expression at the time the message is generated may be added to the message, para. 0076).
With respect to claim 9, Chembula teaches The method according to claim 1, wherein the target emotion mark is an emotion mark in Q emotion marks (identifying an emotion based on the facial expression, and determine the emoticon to add to the message based on the emotion; user device 230 detect a facial expression to automatically determine an emoticon to add to a message when a message is being generated such as automatically when a user is inputting text for a message, para. 0068), and each emotion mark is used to indicate a first emotion of a contact towards at least one unread message corresponding to the contact (user device 230 automatically create an emoticon in real time, without user input, based on the image data received while the message is being generated, user device 230 perform process 400 while text for messages is being input by a user, para. 0076), Q being an integer greater than one (user device 230 perform process 400 while text for messages is being input by a user, an emoticon that accurately reflects the user's facial expression at the time the message is generated may be added to the message, para. 0076); and
Chembula in view of Miao does not explicitly teach
the displaying a target emotion mark comprises:
cyclically displaying the Q emotion marks based on display durations corresponding to the Q emotion marks, display duration corresponding to each emotion mark being determined based on any one of following: a duration associated with the contact and a duration associated with an emotion type of the first emotion.
However, An teaches
the displaying a target emotion mark comprises:
cyclically displaying the Q emotion marks based on display durations corresponding to the Q emotion marks (starting from one emoticon, the emoticons located after it are sequentially displayed at a predetermined time interval, page 3, lines 9-12; determining a current waiting duration, comparing the current waiting duration with a plurality of threshold time periods recorded in a pre-stored database, and determining the threshold time period corresponding to the current waiting duration, wherein an emoticon picture corresponding to each threshold time period is also recorded in the pre-stored database; the emoticon picture corresponding to that threshold time period is determined as the emoticon image to be displayed, page 1, lines 47-53; also see page 4, lines 42-49), display duration corresponding to each emotion mark being determined based on any one of following: a duration associated with the contact and a duration associated with an emotion type of the first emotion (starting from one emoticon, the emoticons located after it are sequentially displayed at a predetermined time interval, page 3, lines 9-12; determining a current waiting duration, comparing the current waiting duration with a plurality of threshold time periods recorded in a pre-stored database, and determining the threshold time period corresponding to the current waiting duration, wherein an emoticon picture corresponding to each threshold time period is also recorded in the pre-stored database; the emoticon picture corresponding to that threshold time period is determined as the emoticon image to be displayed, page 1, lines 47-53; also see page 4, lines 42-49) in order to facilitate the user’s perception of changes in the other party’s emotions and to avoid unpleasant communication between the two parties, as taught by An (page 3, lines 44-50).
Therefore, based on Chembula in view of Miao, and further in view of An, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of An to the method of Chembula in view of Miao in order to facilitate the user’s perception of changes in the other party’s emotions and to avoid unpleasant communication between the two parties, as taught by An (page 3, lines 44-50).
With respect to claim 10, Chembula teaches The method according to claim 1, wherein
before the displaying a target emotion mark, the method further comprises:
receiving a second input by a user (receiving an input from a user and/or an image of the user used for adding an emoticon to the message and/or for setting a display option of the message, para. 0063);
setting, in response to the second input, at least one first correspondence, wherein each first correspondence corresponds to a contact (receiving an input from a user and/or an image of the user used for adding an emoticon to the message and/or for setting a display option of the message, para. 0063; user device 230 apply the display option to the message by setting a color, a font, a size, an emphasis, etc. of the text in the message and/or setting a color and/or pattern of a background of the message based on the display option determined, para. 0081), and
the displaying a target emotion mark comprises:
obtaining, from the at least one first correspondence, a target correspondence corresponding to the first contact (identifying an emotion based on the facial expression, and determine the emoticon to add to the message based on the emotion; user device 230 detect a facial expression to automatically determine an emoticon to add to a message when a message is being generated such as automatically when a user is inputting text for a message, para. 0068); and
displaying the target emotion mark based on the target correspondence (a user may set a permission or preference allowing display options to be automatically applied to messages based on detected facial expressions, para. 0082).
Chembula in view of Miao does not explicitly teach wherein the target information comprises the unread duration of the first message; wherein
the each first correspondence is a correspondence between an emotion of the contact and a target unread duration, and the target unread duration is an unread duration of an unread message corresponding to the contact; and
displaying the target emotion mark based on the unread duration of the first message and the target correspondence.
However, An teaches wherein the target information comprises the unread duration of the first message (determining a current waiting duration, comparing the current waiting duration with a plurality of threshold time periods recorded in a pre-stored database, and determining the threshold time period corresponding to the current waiting duration, wherein an emoticon picture corresponding to each threshold time period is also recorded in the pre-stored database; the emoticon picture corresponding to that threshold time period is determined as the emoticon image to be displayed, page 1, lines 47-53; also see page 4, lines 42-49); wherein
the each first correspondence is a correspondence between an emotion of the contact and a target unread duration, and the target unread duration is an unread duration of an unread message corresponding to the contact (determining a current waiting duration, comparing the current waiting duration with a plurality of threshold time periods recorded in a pre-stored database, and determining the threshold time period corresponding to the current waiting duration, wherein an emoticon picture corresponding to each threshold time period is also recorded in the pre-stored database; the emoticon picture corresponding to that threshold time period is determined as the emoticon image to be displayed, page 1, lines 47-53; also see page 4, lines 42-49); and
displaying the target emotion mark based on the unread duration of the first message and the target correspondence (starting from one emoticon, the emoticons located after it are sequentially displayed at a predetermined time interval, page 3, lines 9-12; determining a current waiting duration, comparing the current waiting duration with a plurality of threshold time periods recorded in a pre-stored database, and determining the threshold time period corresponding to the current waiting duration, wherein an emoticon picture corresponding to each threshold time period is also recorded in the pre-stored database; the emoticon picture corresponding to that threshold time period is determined as the emoticon image to be displayed, page 1, lines 47-53; also see page 4, lines 42-49) in order to facilitate the user’s perception of changes in the other party’s emotions and to avoid unpleasant communication between the two parties, as taught by An (page 3, lines 44-50).
Therefore, based on Chembula in view of Miao, and further in view of An, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of An to the method of Chembula in view of Miao in order to facilitate the user’s perception of changes in the other party’s emotions and to avoid unpleasant communication between the two parties, as taught by An (page 3, lines 44-50).
With respect to claim 11, Chembula teaches An electronic device (device, para. 0026), comprising a processor (device perform these processes in response to processor 320 executing software instructions stored by a computer-readable medium, such as memory 330 and/or storage component 340, para. 0026), a memory (a computer-readable medium, such as memory 330 and/or storage component 340, para. 0026), and a program or instructions stored in the memory and executable on the processor, wherein the program or instructions, when executed by the processor, cause the electronic device to perform (device perform these processes in response to processor 320 executing software instructions stored by a computer-readable medium, such as memory 330 and/or storage component 340, para. 0026):
receiving a first message (User device 230 generate the message based on the user inputting text or images, into user device 230, to be included in the message, para. 0062);
displaying a target emotion mark, wherein the target emotion mark is used to indicate an emotion of a first contact towards the first message, and the first contact is a contact who sends the first message (process 600 include receiving an input from a user and/or an image of the user used for adding an emoticon to the message and/or for setting a display option of the message, para. 0063); wherein
the target emotion mark is determined based on target information (identifying an emotion based on the facial expression, and determine the emoticon to add to the message based on the emotion; user device 230 detect a facial expression to automatically determine an emoticon to add to a message when a message is being generated such as automatically when a user is inputting text for a message, para. 0068); and the target information comprises at least one of following: an image of the first contact captured by an electronic device of the first contact (when a message is being generated (e.g., automatically when a user is inputting text for a message); the camera generate the image and user device 230 obtain the image from the camera, para. 0065), or an unread duration of the first message, the unread duration being a duration between a time at which the first message is received and a current time;
the program or instructions, when executed by the processor, cause the electronic device to further perform:
receiving a first input for the target emotion mark (receiving an input from a user and/or an image of the user used for adding an emoticon to the message and/or for setting a display option of the message, para. 0063); and
in response to the first input, displaying the first message and displaying N sets of second messages in one-to-one correspondence with N second contacts, N being a positive integer (identifying an emotion based on the facial expression, and determine the emoticon to add to the message based on the emotion; user device 230 detect a facial expression to automatically determine an emoticon to add to a message when a message is being generated such as automatically when a user is inputting text for a message, para. 0068);
Chembula does not explicitly teach
the program or instructions, when executed by the processor, cause the electronic device to perform:
displaying the target emotion mark in a first region, the first region being at least one of following: at least a partial region displaying a target icon on a desktop, or a region that is on the desktop and is adjacent to the region at which the target icon is displayed, wherein the target icon is an icon of a target application, and the first message is received via the target application;
or
displaying the target emotion mark on a current interface in a form of a pop-up window; wherein
after the displaying the target emotion mark on a current interface in a form of a pop-up window, the method further comprises:
displaying a target emotion mark for indicating a changed emotion when the emotion of the first contact towards the first message changes after the display of the target emotion mark has been canceled; and
However, Miao teaches
the program or instructions, when executed by the processor, cause the electronic device to perform:
displaying the target emotion mark in a first region (the current display icon in the application; the corner is displayed on the top; the corner mark used to display an expression image such as a smile face or a cry face without displaying the number, using a smile face, a crying face, etc; the expression image attracts the user to open the application, the user can be opened by using the corner mark to increase the frequency of use and user stickiness of the application, page 33: para. 7), the first region being at least one of following: at least a partial region displaying a target icon on a desktop (the current display icon in the application; the corner is displayed on the top; the corner mark used to display an expression image such as a smile face or a cry face without displaying the number, using a smile face, a crying face, etc; the expression image attracts the user to open the application, the user can be opened by using the corner mark to increase the frequency of use and user stickiness of the application, page 33: para. 7), or a region that is on the desktop and is adjacent to the region at which the target icon is displayed, wherein the target icon is an icon of a target application, and the first message is received via the target application (the current display icon in the application; the corner is displayed on the top; the corner mark used to display an expression image such as a smile face or a cry face without displaying the number, using a smile face, a crying face, etc; the expression image attracts the user to open the application, the user can be opened by using the corner mark to increase the frequency of use and user stickiness of the application, page 33: para. 7) in order to attract the user to open the application as taught by Miao (page 33: para. 7);
Therefore, based on Chembula in view of Miao, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to utilize the teaching of Miao to the device of Chembula in order to attract the user to open the application as taught by Miao (page 33: para. 7).
or
displaying the target emotion mark on a current interface in a form of a pop-up window; wherein
after the displaying the target emotion mark on a current interface in a form of a pop-up window, the method further comprises:
displaying a target emotion mark for indicating a changed emotion when the emotion of the first contact towards the first message changes after the display of the target emotion mark has been canceled; and
Chembula in view of Miao does not explicitly teach
wherein
an emotion type of the emotion of the first contact towards the first message is a first type, N second types and the first type are a same emotion type, and each second type is an emotion type of an emotion of one second contact towards a set of second messages corresponding to the one second contact.
However, An teaches
wherein
an emotion type of the emotion of the first contact towards the first message is a first type (user device 230 automatically create an emoticon in real time, without user input, based on the image data received while the message is being generated, user device 230 perform process 400 while text for messages is being input by a user, para. 0076), N second types and the first type are a same emotion type, and each second type is an emotion type of an emotion of one second contact towards a set of second messages corresponding to the one second contact (the predetermined time interval by the interval one emoticon start sequentially display emoticons located in one of the emoticon after, page 3, lines 9-12; determining a current waiting duration, comparing the current waiting duration with a plurality of threshold time periods recorded in the pre-stored database, and determining a corresponding threshold time period corresponding to the current listening waiting duration, wherein the pre-stored database; an emoticon picture corresponding to each threshold time period is also recorded; the emoticon picture corresponding to the corresponding threshold time period is determined as an emoticon image to be displayed, page 1, lines 47-53; also see page 4, lines 42-49) in order to facilitate the user’s perception between the two parties side by side for user generated emotions changes and avoid unpleasant communication between the two parties as taught by An (page 3, lines 44-50).
Therefore, based on Chembula in view of Miao, and further in view of An, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to utilize the teaching of An to the device of Chembula in view of Miao in order to facilitate the user’s perception between the two parties side by side for user generated emotions changes and avoid unpleasant communication between the two parties as taught by An (page 3, lines 44-50).
With respect to claim 12, Chembula in view of Miao teaches The electronic device according to claim 11 as described above,
Chembula in view of Miao does not explicitly teach wherein the target information comprises the unread duration of the first message; wherein
different unread durations correspond to different emotions of the first contact, and the different emotions are indicated by different target emotion marks.
However, An teaches wherein the target information comprises the unread duration of the first message (determining a current waiting duration, comparing the current waiting duration with a plurality of threshold time periods recorded in the pre-stored database, and determining a corresponding threshold time period corresponding to the current listening waiting duration, wherein the pre-stored database; an emoticon picture corresponding to each threshold time period is also recorded; the emoticon picture corresponding to the corresponding threshold time period is determined as an emoticon image to be displayed, page 1, lines 47-53; also see page 4, lines 42-49); wherein
different unread durations correspond to different emotions of the first contact, and the different emotions are indicated by different target emotion marks (determining a current waiting duration, comparing the current waiting duration with a plurality of threshold time periods recorded in the pre-stored database, and determining a corresponding threshold time period corresponding to the current listening waiting duration, wherein the pre-stored database; an emoticon picture corresponding to each threshold time period is also recorded; the emoticon picture corresponding to the corresponding threshold time period is determined as an emoticon image to be displayed, page 1, lines 47-53; also see page 4, lines 42-49) in order to facilitate the user’s perception between the two parties side by side for user generated emotions changes and avoid unpleasant communication between the two parties as taught by An (page 3, lines 44-50).
Therefore, based on Chembula in view of Miao, and further in view of An, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to utilize the teaching of An to the device of Chembula in view of Miao in order to facilitate the user’s perception between the two parties side by side for user generated emotions changes and avoid unpleasant communication between the two parties as taught by An (page 3, lines 44-50).
With respect to claim 15, Chembula teaches The electronic device according to claim 11, wherein the target emotion mark comprises at least one of following:
an image corresponding to the first contact (when a message is being generated (e.g., automatically when a user is inputting text for a message); the camera generate the image and user device 230 obtain the image from the camera, para. 0065), an image obtained after expression changes in the image corresponding to the first contact, or a first mark, wherein the first mark is used to indicate an emotion (detecting a facial expression of the user while the user is inputting the message; the user device determine an emoticon, created based on an image of the user's face, that corresponds to the detected facial expression and add the emoticon to the message, para. 0013; user device 230 apply the display option to the message by setting a color, a font, a size, an emphasis, etc. of the text in the message and/or setting a color and/or pattern of a background of the message based on the display option determined, para. 0081).
With respect to claim 16, Chembula teaches The electronic device according to claim 11, wherein the first contact corresponds to K unread messages (identifying an emotion based on the facial expression, and determine the emoticon to add to the message based on the emotion; user device 230 detect a facial expression to automatically determine an emoticon to add to a message when a message is being generated such as automatically when a user is inputting text for a message, para. 0068), wherein the first message is comprised in the K unread messages (user device 230 automatically create an emoticon in real time, without user input, based on the image data received while the message is being generated, user device 230 perform process 400 while text for messages is being input by a user, para. 0076), an emotion of the first contact towards each unread message is indicated through an emotion mark, respectively, and K is a positive integer (user device 230 perform process 400 while text for messages is being input by a user, an emoticon that accurately reflects the user's facial expression at the time the message is generated may be added to the message, para. 0076); and
Chembula in view of Miao does not explicitly teach the displaying a target emotion mark comprises:
displaying K emotion marks in sequence, wherein the K emotion marks are in one-to-one correspondence with the K unread messages, and the K emotion marks comprise the target emotion mark.
However, An teaches the displaying a target emotion mark comprises:
displaying K emotion marks in sequence, wherein the K emotion marks are in one-to-one correspondence with the K unread messages, and the K emotion marks comprise the target emotion mark (the predetermined time interval by the interval one emoticon start sequentially display emoticons located in one of the emoticon after, page 3, lines 9-12) in order to facilitate the user’s perception between the two parties side by side for user generated emotions changes and avoid unpleasant communication between the two parties as taught by An (page 3, lines 44-50).
Therefore, based on Chembula in view of Miao, and further in view of An, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to utilize the teaching of An to the device of Chembula in view of Miao in order to facilitate the user’s perception between the two parties side by side for user generated emotions changes and avoid unpleasant communication between the two parties as taught by An (page 3, lines 44-50).
With respect to claim 17, Chembula teaches The electronic device according to claim 11, wherein the first contact corresponds to K unread messages (identifying an emotion based on the facial expression, and determine the emoticon to add to the message based on the emotion; user device 230 detect a facial expression to automatically determine an emoticon to add to a message when a message is being generated such as automatically when a user is inputting text for a message, para. 0068), the first message is a message with an earliest reception time in the K unread messages, and K is a positive integer (user device 230 automatically create an emoticon in real time, without user input, based on the image data received while the message is being generated, user device 230 perform process 400 while text for messages is being input by a user, para. 0076); wherein
emotions of the first contact towards the K unread messages are all indicated through a target emotion mark (user device 230 perform process 400 while text for messages is being input by a user, an emoticon that accurately reflects the user's facial expression at the time the message is generated may be added to the message, para. 0076).
With respect to claim 19, Chembula teaches The electronic device according to claim 11, wherein the target emotion mark is an emotion mark in Q emotion marks (identifying an emotion based on the facial expression, and determine the emoticon to add to the message based on the emotion; user device 230 detect a facial expression to automatically determine an emoticon to add to a message when a message is being generated such as automatically when a user is inputting text for a message, para. 0068), and each emotion mark is used to indicate a first emotion of a contact towards at least one unread message corresponding to the contact (user device 230 automatically create an emoticon in real time, without user input, based on the image data received while the message is being generated, user device 230 perform process 400 while text for messages is being input by a user, para. 0076), Q being an integer greater than one (user device 230 perform process 400 while text for messages is being input by a user, an emoticon that accurately reflects the user's facial expression at the time the message is generated may be added to the message, para. 0076); and
Chembula in view of Miao does not explicitly teach
the displaying a target emotion mark comprises:
cyclically displaying the Q emotion marks based on display durations corresponding to the Q emotion marks, display duration corresponding to each emotion mark being determined based on any one of following: a duration associated with the contact and a duration associated with an emotion type of the first emotion.
However, An teaches
the displaying a target emotion mark comprises:
cyclically displaying the Q emotion marks based on display durations corresponding to the Q emotion marks (the predetermined time interval by the interval one emoticon start sequentially display emoticons located in one of the emoticon after, page 3, lines 9-12; determining a current waiting duration, comparing the current waiting duration with a plurality of threshold time periods recorded in the pre-stored database, and determining a corresponding threshold time period corresponding to the current listening waiting duration, wherein the pre-stored database; an emoticon picture corresponding to each threshold time period is also recorded; the emoticon picture corresponding to the corresponding threshold time period is determined as an emoticon image to be displayed, page 1, lines 47-53; also see page 4, lines 42-49), display duration corresponding to each emotion mark being determined based on any one of following: a duration associated with the contact and a duration associated with an emotion type of the first emotion (the predetermined time interval by the interval one emoticon start sequentially display emoticons located in one of the emoticon after, page 3, lines 9-12; determining a current waiting duration, comparing the current waiting duration with a plurality of threshold time periods recorded in the pre-stored database, and determining a corresponding threshold time period corresponding to the current listening waiting duration, wherein the pre-stored database; an emoticon picture corresponding to each threshold time period is also recorded; the emoticon picture corresponding to the corresponding threshold time period is determined as an emoticon image to be displayed, page 1, lines 47-53; also see page 4, lines 42-49) in order to facilitate the user’s perception between the two parties side by side for user generated emotions changes and avoid unpleasant communication between the two parties as taught by An (page 3, lines 44-50).
Therefore, based on Chembula in view of Miao, and further in view of An, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to utilize the teaching of An to the device of Chembula in view of Miao in order to facilitate the user’s perception between the two parties side by side for user generated emotions changes and avoid unpleasant communication between the two parties as taught by An (page 3, lines 44-50).
With respect to claim 20, Chembula teaches A non-transitory readable storage medium (memory 330 and/or storage component 340, para. 0026), wherein the non-transitory readable storage medium stores a program or instructions, and the program or instructions, when executed by a processor of an electronic device, cause the electronic device to perform (device perform these processes in response to processor 320 executing software instructions stored by a computer-readable medium, such as memory 330 and/or storage component 340, para. 0026):
receiving a first message (User device 230 generate the message based on the user inputting text or images, into user device 230, to be included in the message, para. 0062);
displaying a target emotion mark, wherein the target emotion mark is used to indicate an emotion of a first contact towards the first message, and the first contact is a contact who sends the first message (process 600 include receiving an input from a user and/or an image of the user used for adding an emoticon to the message and/or for setting a display option of the message, para. 0063); wherein
the target emotion mark is determined based on target information (identifying an emotion based on the facial expression, and determine the emoticon to add to the message based on the emotion; user device 230 detect a facial expression to automatically determine an emoticon to add to a message when a message is being generated such as automatically when a user is inputting text for a message, para. 0068); and the target information comprises at least one of following: an image of the first contact captured by an electronic device of the first contact (when a message is being generated (e.g., automatically when a user is inputting text for a message); the camera generate the image and user device 230 obtain the image from the camera, para. 0065), or an unread duration of the first message, the unread duration being a duration between a time at which the first message is received and a current time;
the program or instructions, when executed by the processor, cause the electronic device to further perform:
receiving a first input for the target emotion mark (receiving an input from a user and/or an image of the user used for adding an emoticon to the message and/or for setting a display option of the message, para. 0063); and
in response to the first input, displaying the first message and displaying N sets of second messages in one-to-one correspondence with N second contacts, N being a positive integer (identifying an emotion based on the facial expression, and determine the emoticon to add to the message based on the emotion; user device 230 detect a facial expression to automatically determine an emoticon to add to a message when a message is being generated such as automatically when a user is inputting text for a message, para. 0068);
Chembula does not explicitly teach
the program or instructions, when executed by the processor, cause the electronic device to perform:
displaying the target emotion mark in a first region, the first region being at least one of following: at least a partial region displaying a target icon on a desktop, or a region that is on the desktop and is adjacent to the region at which the target icon is displayed, wherein the target icon is an icon of a target application, and the first message is received via the target application;
or
displaying the target emotion mark on a current interface in a form of a pop-up window; wherein
after the displaying the target emotion mark on a current interface in a form of a pop-up window, the method further comprises:
displaying a target emotion mark for indicating a changed emotion when the emotion of the first contact towards the first message changes after the display of the target emotion mark has been canceled; and
However, Miao teaches
the program or instructions, when executed by the processor, cause the electronic device to perform:
displaying the target emotion mark in a first region (the current display icon in the application; the corner is displayed on the top; the corner mark used to display an expression image such as a smile face or a cry face without displaying the number, using a smile face, a crying face, etc; the expression image attracts the user to open the application, the user can be opened by using the corner mark to increase the frequency of use and user stickiness of the application, page 33: para. 7), the first region being at least one of following: at least a partial region displaying a target icon on a desktop (the current display icon in the application; the corner is displayed on the top; the corner mark used to display an expression image such as a smile face or a cry face without displaying the number, using a smile face, a crying face, etc; the expression image attracts the user to open the application, the user can be opened by using the corner mark to increase the frequency of use and user stickiness of the application, page 33: para. 7), or a region that is on the desktop and is adjacent to the region at which the target icon is displayed, wherein the target icon is an icon of a target application, and the first message is received via the target application (the current display icon in the application; the corner is displayed on the top; the corner mark used to display an expression image such as a smile face or a cry face without displaying the number, using a smile face, a crying face, etc; the expression image attracts the user to open the application, the user can be opened by using the corner mark to increase the frequency of use and user stickiness of the application, page 33: para. 7) in order to attract the user to open the application as taught by Miao (page 33: para. 7);
Therefore, based on Chembula in view of Miao, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to utilize the teaching of Miao to the medium of Chembula in order to attract the user to open the application as taught by Miao (page 33: para. 7).
or
displaying the target emotion mark on a current interface in a form of a pop-up window; wherein
after the displaying the target emotion mark on a current interface in a form of a pop-up window, the method further comprises:
displaying a target emotion mark for indicating a changed emotion when the emotion of the first contact towards the first message changes after the display of the target emotion mark has been canceled;
Chembula in view of Miao does not explicitly teach
wherein
an emotion type of the emotion of the first contact towards the first message is a first type, N second types and the first type are a same emotion type, and each second type is an emotion type of an emotion of one second contact towards a set of second messages corresponding to the one second contact.
However, An teaches
wherein
an emotion type of the emotion of the first contact towards the first message is a first type (user device 230 automatically create an emoticon in real time, without user input, based on the image data received while the message is being generated, user device 230 perform process 400 while text for messages is being input by a user, para. 0076), N second types and the first type are a same emotion type, and each second type is an emotion type of an emotion of one second contact towards a set of second messages corresponding to the one second contact (the predetermined time interval by the interval one emoticon start sequentially display emoticons located in one of the emoticon after, page 3, lines 9-12; determining a current waiting duration, comparing the current waiting duration with a plurality of threshold time periods recorded in the pre-stored database, and determining a corresponding threshold time period corresponding to the current listening waiting duration, wherein the pre-stored database; an emoticon picture corresponding to each threshold time period is also recorded; the emoticon picture corresponding to the corresponding threshold time period is determined as an emoticon image to be displayed, page 1, lines 47-53; also see page 4, lines 42-49) in order to facilitate the user’s perception between the two parties side by side for user generated emotions changes and avoid unpleasant communication between the two parties as taught by An (page 3, lines 44-50).
Therefore, based on Chembula in view of Miao, and further in view of An, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to utilize the teaching of An to the medium of Chembula in view of Miao in order to facilitate the user’s perception between the two parties side by side for user generated emotions changes and avoid unpleasant communication between the two parties as taught by An (page 3, lines 44-50).
Response to Arguments
Applicant’s arguments with respect to claims 1-2, 5-7, 9-12, 15-17 and 19-20 have been considered but are moot because the arguments do not apply to any of the references being used in the current rejection.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HAO HONG NGUYEN whose telephone number is (571)272-2666. The examiner can normally be reached on Monday-Friday 8AM-4:30PM EST.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Joon H. Hwang can be reached on (571)272-4036. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/H.H.N/Examiner, Art Unit 2447
March 25, 2026
/JOON H HWANG/Supervisory Patent Examiner, Art Unit 2447