DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
1. This Office Action is in response to the preliminary amendment filed on 04/26/2024.
2. The IDS filed on 07/19/2024 has been considered and entered into the application file.
3. Claims 1-11, 13-14, and 16-22 are pending.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
4. Claims 1-6, 13, 14, and 16-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Ahuja et al. (US 20160291822 A1).
Ahuja et al. (“Ahuja”) generally describes messaging, and more particularly the sending of non-standardized predefined graphical images embedded in line with text-based messages through various applications.
As per claim 1, Ahuja discloses an emoji processing method (e.g., see flowcharts of figs. 6a-6c) applied to a service system comprising a plurality of different service scenarios, the emoji processing method comprising:
receiving an evocation operation on an emoji panel in a current service scenario ([0050] FIG. 4A also shows user activation (or selection) 4016 of the image icon 4008); and
displaying the emoji panel in response to the evocation operation on the emoji panel, wherein the emoji panel comprises personalized emoji(s) related to the current service scenario ([0052] FIG. 4B illustrates that in response to activation 4016 by User A of the image icon 4008, a selection interface for selecting one or more predefined graphical images for the text input field 4010 is displayed).
As per claim 2, Ahuja further discloses the emoji processing method according to claim 1, wherein the emoji panel comprises a first region for displaying the personalized emoji(s) related to the current service scenario (Examiner’s Note: preview area 4013 (a first region) displays personalized emoji related to a text or chat message exchange shown in Figs. 4A-4F), and
a second region for displaying emoji(s) common to various service scenarios (region 4018 (a second region) displays several emojis that are applicable to other service scenarios; see at least Figs. 4A-4F).
As per claim 3, Ahuja further discloses the emoji processing method according to claim 1, wherein the receiving the evocation operation on the emoji panel in the current service scenario comprises:
receiving a trigger operation on an emoji button for evoking the emoji panel in the current service scenario ([0053] FIG. 4B shows user activation 4016 of the predefined graphical image 4017-A. In response to detecting User A's selection of image 4017-A, image 4017-A or a representation of image 4017-A is inserted into input field 4010).
As per claim 4, Ahuja further discloses the emoji processing method according to claim 1, further comprising:
determining the current service scenario before displaying the emoji panel in response to the evocation operation on the emoji panel (Examiner’s Note: as illustrated in Figs. 4A-4F, the current service scenario is a messaging conversation between a user of the device 4001 (e.g., client device 102-1) (hereinafter “User A” for convenience) and a user of another client device (e.g., client device 102-2) (hereinafter “User B” for convenience)); and
determining the emoji panel corresponding to the current service scenario based on the current service scenario (the message exchanges between User A and User B are identified or determined; accordingly, User A responded to User B with text and an emoji, that is, “Sounds fun!” and a happy face emoji, Figs. 4A-4F).
As per claim 5, Ahuja further discloses the emoji processing method according to claim 4, wherein the determining the current service scenario comprises: determining the current service scenario based on entry information of an emoji button, wherein the entry information comprises at least one of a position or an identifier of the emoji button in the service scenario ([0053] FIG. 4B illustrates contents in input field 4010. In this example, User A already entered the text “Sounds fun!” in FIG. 4A while keyboard 4006 was displayed in user interface 4000-A. FIG. 4B shows user activation 4016 of the predefined graphical image 4017-A. In response to detecting User A's selection of image 4017-A, image 4017-A or a representation of image 4017-A is inserted into input field 4010).
As per claim 6, Ahuja further discloses the emoji processing method according to claim 1, wherein the current service scenario is a service scenario of cloud document or a service scenario of video conference ([0070] FIG. 4T illustrates an exemplary user interface 4000-T, displayed in response to detection of user selection of application 4026-1, in FIG. 4S. In this embodiment, application 4026-1 is a social networking application that allows users to publish posts (e.g., posts on a news feed). Also see [0079]).
As per electronic device claims 13 and 16-20, these claims include limitations similar to those of method claims 1-6, respectively; thus, the electronic device claims are also rejected under citations similar to those given for the above method claims.
As per storage medium claim 14, the claim includes limitations similar to those of method claim 1; thus, the storage medium claim is also rejected under citations similar to those given for method claim 1.
5. Claims 1, 7-11, 13, 14, 21, and 22 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Judovsky et al. (US 20200380199 A1).
Judovsky et al. (“Judovsky”) relates to clickable instant messaging, and more particularly to systems and methods for providing hyperlinked icons, images, and emoji in a real-time messaging environment, such as text messaging.
As per claim 1, Judovsky discloses an emoji processing method applied to a service system comprising a plurality of different service scenarios (Judovsky discloses clickable instant messaging, and more particularly systems and methods for providing hyperlinked icons, images, and emoji in a real-time messaging environment, such as text messaging), the emoji processing method comprising:
receiving an evocation operation on an emoji panel in a current service scenario (Examiner’s Note: as illustrated in at least Fig. 19, for example, one of the weather-related emoji, sunny or sun, is selected or highlighted to be incorporated/linked into the current message shown in the Messaging application shown in communication panel 1910); and
displaying the emoji panel in response to the evocation operation on the emoji panel ([0035] FIG. 19 illustrates a screenshot of a mobile messaging device with a library of related linkable weather emoji accessed to prepare a real-time message with an embedded linkable emoji in accordance with one embodiment), wherein the emoji panel comprises personalized emoji(s) related to the current service scenario ([0060] the collection library bar 1920 and selection panel 1925 may both be accessed to prepare a real-time message shown in communication panel 1910. Accordingly, a message with an embedded linkable emoji is shown in accordance with one embodiment).
As per claim 7, Judovsky further discloses the emoji processing method according to claim 1, further comprising:
receiving a custom emoji uploaded by an administrator ([0056] Upon authorization of the user and approval of the messaging administrators, one embodiment allows the local linkable emoji to be uploaded to the emoji datastore 300A for general distribution.); and
updating emoji panels corresponding to all users within an organization to which the administrator belongs based on the custom emoji and the organization ([0074] Upon receiving authorization, routine 1100 associates the at least one link with the new emoji in the datastore in execution block 1145. In one embodiment, the same link may be associated with many different emoji, in which case associating/updating includes adding the new emoji to the list of related emoji in the datastore. [0075] Accordingly, upon receiving a copy of the linkable emoji, an administrator may authorize addition of the linkable emoji to the shared datastore).
As per claim 8, Judovsky further discloses the emoji processing method according to claim 7, wherein the emoji processing method further comprises:
obtaining target application scenario(s) for the custom emoji ([0035] FIG. 19 illustrates a screenshot of a mobile messaging device with a library of related linkable weather emoji accessed to prepare a real-time message with an embedded linkable emoji); and
the updating the emoji panel corresponding to the all users within the organization to which the administrator belongs based on the custom emoji and the organization ([0056] In one embodiment, the client device 400 may identify and/or create new emoji and subsequently link the emoji to content. Upon authorization of the user and approval of the messaging administrators, one embodiment allows the local linkable emoji to be uploaded to the emoji datastore 300A for general distribution. Moreover, in one embodiment, the shared image library 305B and content link library 315B on the user device 400 may be periodically updated by the messaging server 200) comprises:
updating the emoji panel corresponding to the target application scenario(s) of the all users within the organization to which the administrator belongs based on the custom emoji and the organization ([0074] Depending on the embodiment, authorization to change the emoji may be obtained from an administrator of the datastore. In one embodiment, authorization may be obtained from a user for local emoji, as an administrator for portions of the local datastore, but a system administrator needs to approve changes to shared emoji. Also see [0067] and FIG. 7).
As per claim 9, Judovsky further discloses the emoji processing method according to claim 8, wherein the obtaining the target application scenario(s) for the custom emoji comprises:
receiving the target application scenario(s) selected by the administrator for the custom emoji ([0075] as shown in FIG. 3, the shared datastore includes portions on the client device and messaging server. Accordingly, upon receiving a copy of the linkable emoji, an administrator may authorize addition of the linkable emoji to the shared datastore), wherein the target application scenario(s) comprises one or more service scenarios; or
determining the target application scenario(s) matched with the custom emoji ([0071] When an embedded message is sent, routine 800 begins a recursion in start loop block 815 to check the message against each emoji in the datastore. In query block 820, routine 800 checks for a match between characters in the message and the stored emoji text pattern. If a match is found, routine 800 swaps the matching emoji with the detected text pattern in execution block 825. In one embodiment, routine 800 inserts the identified emoji into the message. One embodiment parses a message after detection into text portions and emoji portions. Also see [0058]).
As per claim 10, Judovsky further discloses the emoji processing method according to claim 9, further comprising: displaying an emoji configuration panel that displays an emoji upload entry and at least one service scenario option in response to the administrator triggering an emoji configuration control before receiving the target application scenario(s) selected by the administrator for the custom emoji ([0060] As the weather collection is designated in the collection library bar 1920, available linkable emoji for weather conditions are shown. The collection library bar 1920 and selection panel 1925 may both be accessed to prepare a real-time message shown in communication panel 1910. Accordingly, a message with an embedded linkable emoji is shown in accordance with one embodiment. Also see Fig. 19. [0061] Similarly, as shown in Fig. 20, the collection library bar 2020 designates recently used linkable emoji, which are shown in the selection panel 2025. In accordance with one embodiment, a real-time message with multiple embedded linkable emoji is shown in communication panel 2010). Examiner’s Note: the examiner reads the collection library bar as the emoji configuration panel.
As per claim 11, Judovsky further discloses the emoji processing method according to claim 1, wherein
at least part of emojis comprised in emoji panels that correspond to different service scenarios represent the same meanings; or different emoji(s) represent different meanings in different service scenarios to which the emoji(s) correspond; or same emoji(s) represents different meanings in different service scenarios to which the emoji(s) correspond ([0062] in one embodiment, emoji are suggested to the user based on the context of the message. Another embodiment allows the user to designate what type of link is associated with a particular emoji image. For example, a user may prefer purchase links to information links).
As per electronic device claims 13 and 21-22, these claims include limitations similar to those of method claims 1 and 7-8, respectively; thus, the electronic device claims are also rejected under citations similar to those given for the above method claims.
As per storage medium claim 14, the claim includes limitations similar to those of method claim 1; thus, the storage medium claim is also rejected under citations similar to those given for method claim 1.
Conclusion
6. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US 20220215169 A1 is directed to COMBINING MULTIPLE MESSAGES FROM A MESSAGE QUEUE IN ORDER TO PROCESS FOR EMOJI RESPONSES. Disclosed herein are system, method, and computer program product embodiments for providing an untrained machine learning model with a combined message and an appropriate emoji reaction icon. An embodiment operates by receiving a first and second user message. The first and second user messages comprise a first and second user text or a first and second user-inserted emoji reaction icon, respectively. The first or second user text is associated with a first or second system-specified emoji reaction icon, respectively. Thereafter, the first user message is determined to be associated with the second user message, the combined message is created based on the first and second messages, and the appropriate emoji reaction is identified as associated with the combined message. The combined message and associated appropriate emoji reaction icon are then sent to an untrained machine learning model configured for training.
US 11044218 B1 provides methods and systems for reacting to a message in a group-based communication system using suggested reactive emoji. An exemplary method comprises: displaying a message within an interface of a group-based communication platform; determining a set of suggested reactive emoji, wherein the set of suggested reactive emoji is determined based on at least one adjustable setting associated with at least one of a user identifier and a group identifier; receiving, from the user, an input associated with the message; in response to receiving the input, displaying a menu of message-related actions in the interface, the menu of message-related actions comprising the set of suggested reactive emoji; receiving a user selection of a reactive emoji from the set of suggested reactive emoji; and displaying the selected reactive emoji in association with the message within the interface.
7. Any inquiry concerning this communication or earlier communications from the examiner should be directed to TADESSE HAILU, whose telephone number is (571) 272-4051 and whose email address is Tadesse.hailu@USPTO.GOV. The examiner can normally be reached Monday-Friday, 9:30-5:30 (Eastern time).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bashore, William L., can be reached at (571) 272-4088. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/TADESSE HAILU/ Primary Examiner, Art Unit 2174