Prosecution Insights
Last updated: April 19, 2026
Application No. 18/968,003

SMART STICKER SELECTION FOR A MESSAGING SYSTEM

Non-Final OA: §103, §112, Double Patenting
Filed: Dec 04, 2024
Examiner: HUANG, KAYLEE J
Art Unit: 2447
Tech Center: 2400 (Computer Networks)
Assignee: Snap Inc.
OA Round: 1 (Non-Final)

Grant Probability: 75% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 8m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 75% (above average; 262 granted / 349 resolved; +17.1% vs TC avg)
Interview Lift: +51.2% among resolved cases with interview (a strong lift)
Typical Timeline: 2y 8m average prosecution; 32 applications currently pending
Career History: 381 total applications across all art units
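The headline figures above can be reproduced from the raw counts. A quick sanity check in Python (the helper name `allow_rate` is illustrative, and the implied Tech Center average is derived from the report's "+17.1%" delta rather than quoted directly):

```python
# Sanity-check the examiner statistics quoted above.
# Raw counts (262 granted of 349 resolved) come from the report.
def allow_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage."""
    return 100.0 * granted / resolved

rate = allow_rate(262, 349)
tc_average = rate - 17.1  # report states the examiner is +17.1% vs TC avg

print(f"Career allow rate: {rate:.1f}%")        # ~75.1%, matching the 75% shown
print(f"Implied TC average: {tc_average:.1f}%")
```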

Statute-Specific Performance

§101: 5.2% (-34.8% vs TC avg)
§103: 47.8% (+7.8% vs TC avg)
§102: 9.0% (-31.0% vs TC avg)
§112: 30.2% (-9.8% vs TC avg)

TC averages are Tech Center estimates. Based on career data from 349 resolved cases.

Office Action

Rejections: §103, §112, Double Patenting
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This Office action is in response to the communication filed on 12/04/2024. Claims 1-20 are present for examination.

Information Disclosure Statement

It is hereby acknowledged that the following papers have been received and placed of record in the file: the Information Disclosure Statement(s) received on 12/04/2024, 12/04/2024, and 11/14/2025 have been considered by the Examiner.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement.
See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.

Claims 1, 4-13, 15-17, 19, and 20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 3, 7, 11-13, 15, 16, and 18-20 of U.S. Patent No. 12,218,892 B2. Although the claims at issue are not identical, they are not patentably distinct from each other because the claims in the patent anticipate all the claims in the application.
Instant Application vs. Patent (US 12,218,892 B2)

Instant application, claim 1:
A method, the method comprising: identifying a text string in an interaction client that enables a communication message sent from a first user of a first device to a plurality of second devices, the plurality of second devices including a second user of a second device; identifying a root word in the text string based on one or more relevant tags; identifying one or more associated children words in the text string associated with the identified root word; generating a first score for the identified root word; generating a second set of scores for the one or more identified children words; selecting a media overlay of a plurality of media overlays based on the first score and the second set of scores; and recommending the selected media overlay for display within the interaction client.

Patent, claim 1:
1. A method, the method comprising: receiving, by a first device, a text string inputted by a second user into chat text of a messaging client, the messaging client enabling communication between the second user of a second device and a first user of the first device; and in response to receiving the text string inputted into the chat text, automatically by the first device: parsing the text string into one or more text portions; determining one or more relevant tags of a plurality of tags based on the one or more parsed text portions; identifying a root word in an individual text portion based on the one or more relevant tags; identifying associated children words for the identified root word, the children words being in the individual text portion; assigning scores for the one or more parsed text portions of the text string by: generating a first score for the identified root word; and generating a second set of scores for the identified children words associated with the identified root word; selecting a media overlay of a plurality of media overlays based on the one or more relevant tags and a plurality of scores, the selecting of the media overlay comprising selecting the media overlay that corresponds to the plurality of scores including (1) the first score corresponding to the identified root word, and (2) the second set of scores corresponding to the identified children words; and displaying the selected media overlay adjacent to the text string within the chat text of the messaging client enabling the first device to send a reply message that includes the selected media overlay to the second device.

Instant application, claim 4:
4. The method of claim 1, further comprising parsing the text string into one or more text portions, wherein identifying the root word comprises identifying the root word in at least one of the text portions.

Patent, claim 1 (in part):
1. A method, the method comprising: … parsing the text string into one or more text portions; … identifying a root word in an individual text portion based on the one or more relevant tags;

Instant application, claim 5:
5. The method of claim 4, further comprising: determining a relevancy score for each of the plurality of media overlays that includes the selected media overlay based on the individual one or more relevant tags and the generated first and second scores for each of the one or more parsed text portions; and ordering the selected media overlay with other media overlays within the interaction client based on the determined relevancy score of each of the plurality of media overlays.

Patent, claim 3:
3. The method of claim 1, further comprising: determining a relevancy score for each of the plurality of media overlays that includes the selected media overlay based on the individual one or more relevant tags and the generated first and second scores for each of the one or more parsed text portions; and ordering the selected media overlay with other media overlays within the chat text based on the determined relevancy score of each of the plurality of media overlays.

Instant application, claim 6:
6. The method of claim 4, wherein determining the one or more relevant tags based on the one or more parsed text portions comprises iteratively scanning through each text portion of the text string.

Patent, claim 7:
7. The method of claim 1, wherein determining the one or more relevant tags based on the one or more parsed text portions comprises iteratively scanning through each text portion of the text string.

Instant application, claim 7:
7. The method of claim 4, wherein the one or more parsed text portions comprise one or more words; and the method further comprises determining one or both of a synonym and an antonym of at least one of the one or more words, wherein determining the one or more relevant tags comprises determining the one or more relevant tags based on one or both of the synonym and the antonym.

Patent, claim 11:
11. The method of claim 1, wherein the one or more parsed text portions comprise one or more words; and determining one or both of a synonym and an antonym of at least one of the one or more words, wherein determining the one or more relevant tags comprises determining the one or more relevant tags based on one or both of the synonym and the antonym.

Instant application, claim 8:
8. The method of claim 4, further comprising determining the one or more relevant tags based on at least one of the parsed text portions.

Patent, claim 1 (in part):
1. A method, the method comprising: … determining one or more relevant tags of a plurality of tags based on the one or more parsed text portions;

Instant application, claim 9:
9. The method of claim 1, wherein the second set of scores is generated based on an association with the identified root word.

Patent, claim 1 (in part):
1. A method, the method comprising: … generating a second set of scores for the identified children words associated with the identified root word; … selecting a media overlay of a plurality of media overlays based on the one or more relevant tags and a plurality of scores, the selecting of the media overlay comprising selecting the media overlay that corresponds to the plurality of scores including (1) the first score corresponding to the identified root word, and (2) the second set of scores corresponding to the identified children words; and

Instant application, claim 10:
10. The method of claim 1, wherein the media overlay is further selected based on the one or more relevant tags.

Patent, claim 1 (in part):
1. A method, the method comprising: … selecting a media overlay of a plurality of media overlays based on the one or more relevant tags and a plurality of scores, the selecting of the media overlay comprising selecting the media overlay that corresponds to the plurality of scores including (1) the first score corresponding to the identified root word, and (2) the second set of scores corresponding to the identified children words; and

Instant application, claim 11:
11. The method of claim 1, wherein the recommendation is for display of the selected media overlay adjacent to the text string within the interaction client.

Patent, claim 1 (in part):
1. A method, the method comprising: … displaying the selected media overlay adjacent to the text string within the chat text of the messaging client enabling the first device to send a reply message that includes the selected media overlay to the second device.

Instant application, claim 12:
12. The method of claim 1, wherein the recommendation enables the first device to send a reply message that includes the selected media overlay to the second device.

Patent, claim 1 (in part):
1. A method, the method comprising: … displaying the selected media overlay adjacent to the text string within the chat text of the messaging client enabling the first device to send a reply message that includes the selected media overlay to the second device.

Instant application, claim 13:
13. The method of claim 1, wherein the first score for the identified root word and the second set of scores for the identified children words each correspond to an emotion of individual words, the emotion reflecting a mood of the second user, the selected media overlay being selected based on the first score and the second set of scores corresponding to individual emotions.

Patent, claim 12:
12. The method of claim 1, wherein the first score for the identified root word and the second set of scores for the identified children words each correspond to an emotion of individual words, the emotion reflecting a mood of the second user, the selected media overlay being selected based on the first score and the second set of scores corresponding to individual emotions.

Instant application, claim 15:
15. The method of claim 14, wherein at least one of the first score for the identified root word or the second set of scores for the identified children words correspond to an amount of the emotion of the second user, the selected media overlay being selected based on the amount of the emotion.

Patent, claim 13:
13. The method of claim 12, wherein at least one of the first score for the identified root word or the second set of scores for the identified children words correspond to an amount of the emotion of the second user, the selected media overlay being selected based on the amount of the emotion.

Instant application, claim 16:
16. The method of claim 1, wherein the selected media overlay corresponds to a predefined overlay for displaying in a predefined order, wherein the method further includes ranking media overlays that includes the selected media overlay based on a number of relevant tags associated with each media overlay of the media overlays.

Patent, claims 16 and 18:
16. The computing apparatus of claim 15, wherein the selected media overlay corresponds to a predefined overlay for displaying in a predefined order.
18. The computing apparatus of claim 17, wherein the computing apparatus is further configured to ranking media overlays that includes the selected media overlay based on a number of relevant tags associated with each media overlay of the media overlays.

Instant application, claim 17:
17. The method of claim 1, further comprising: receiving, via the interaction client, user selection of the selected media overlay among other media overlays; and providing, by the first device, for transmission of the selected media overlay to the second device.

Patent, claim 19:
19. The computing apparatus of claim 15, wherein the computing apparatus is further configured to: receiving, via the chat text, user selection of the selected media overlay among other media overlays; and providing the selected media overlay to the second device.

Instant application, claim 19:
19. A computing apparatus, comprising: at least one processor; and a memory storing instructions that, when executed by the at least one processor, configure the computing apparatus to perform one or more operations comprising: identifying a text string in an interaction client that enables a communication message sent from a first user of a first device to a plurality of second devices, the plurality of second devices including a second user of a second device; identifying a root word in the text string based on one or more relevant tags; identifying one or more associated children words in the text string associated with the identified root word; generating a first score for the identified root word; generating a second set of scores for the one or more identified children words; selecting a media overlay of a plurality of media overlays based on the first score and the second set of scores; and recommending the selected media overlay for display within the interaction client.

Patent, claim 15:
15. A computing apparatus, comprising: at least one processor; and a memory storing instructions that, when executed by the at least one processor, configure the computing apparatus to perform one or more operations comprising: receiving a text string inputted by a second user into chat text of a messaging client, the messaging client enabling communication between the second user of a second device and a first user of the computing apparatus; and in response to receiving the text string inputted into the chat text, automatically: parsing the text string into one or more text portions; determining one or more relevant tags of a plurality of tags based on the one or more parsed text portions; identifying a root word in an individual text portion based on the one or more relevant tags; identifying associated children words for the identified root word, the children words being in the individual text portion; assigning scores for the one or more parsed text portions of the text string by: generating a first score for the identified root word; and generating a second set of scores for the identified children words associated with the identified root word; selecting a media overlay of a plurality of media overlays based on the one or more relevant tags and a plurality of scores, the selecting of the media overlay comprising selecting the media overlay that corresponds to the plurality of scores including (1) the first score corresponding to the identified root word, and (2) the second set of scores corresponding to the identified children words; and displaying the selected media overlay adjacent to the text string within the chat text of the messaging client enabling sending of a reply message that includes the selected media overlay to the second device.

Instant application, claim 20:
20. A non-transitory computer-readable storage medium, the computer-readable storage medium including instructions that when executed by a computer, cause the computer to perform one or more operations comprising: identifying a text string in an interaction client that enables a communication message sent from a first user of a first device to a plurality of second devices, the plurality of second devices including a second user of a second device; identifying a root word in the text string based on one or more relevant tags; identifying one or more associated children words in the text string associated with the identified root word; generating a first score for the identified root word; generating a second set of scores for the one or more identified children words; selecting a media overlay of a plurality of media overlays based on the first score and the second set of scores; and recommending the selected media overlay for display within the interaction client.

Patent, claim 20:
20. A non-transitory computer-readable storage medium, the computer-readable storage medium including instructions that when executed by a computer, cause the computer to perform one or more operations comprising: receiving a text string inputted by a second user into chat text of a messaging client, the messaging client enabling communication between the second user of a second device and a first user of the computer; and in response to receiving the text string inputted into the chat text, automatically: parsing the text string into one or more text portions; determining one or more relevant tags of a plurality of tags based on the one or more parsed text portions; identifying a root word in an individual text portion based on the one or more relevant tags; identifying associated children words for the identified root word, the children words being in the individual text portion; assigning scores for the one or more parsed text portions of the text string by: generating a first score for the identified root word; and generating a second set of scores for the identified children words associated with the identified root word; selecting a media overlay of a plurality of media overlays based on the one or more relevant tags and a plurality of scores, the selecting of the media overlay comprising selecting the media overlay that corresponds to the plurality of scores including (1) the first score corresponding to the identified root word, and (2) the second set of scores corresponding to the identified children words; and displaying the selected media overlay adjacent to the text string within the chat text of the messaging client enabling sending of a reply message that includes the selected media overlay to the second device.

Claim Objections

Claims 1, 4, 5, 8, 13-16, and 19-20 are objected to because of the following informalities:

Claim 1, line 9, “the one or more identified children words” should read “the one or more identified associated children words”;
Claim 4, lines 2-3, “at least one of the text portions” should read “at least one of the one or more text portions”;
Claim 5, lines 3-4, “the generated first and second scores” should read “the generated first score and the generated second set of scores”;
Claim 8, line 2, “the parsed text portions” should read “the one or more parsed text portions”;
Claim 13, line 2, “the identified children words” should read “the one or more identified associated children words”;
Claim 14, line 3, “the second scores” should read “the second set of scores”;
Claim 15, line 2, “the identified children words” should read “the one or more identified associated children words”;
Claim 16, line 4, “the media overlays” should read “the plurality of media overlays”;
Claim 19, line 12, “the one or more identified children words” should read “the one or more identified associated children words”;
Claim 20, line 11, “the one or more identified children words” should read “the one or more identified associated children words”.
Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 2, 5, 13, and 15 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Regarding claim 2, the claim limitation recites “the message” in line 2, which renders the claim vague and indefinite. It is unclear whether “the message” refers to “a communication message” in claim 1, line 2, to “a broadcast message” in claim 2, line 1, or to a different/distinct message.

Claim 5 recites the limitation "the individual one or more relevant tags" in line 3. There is insufficient antecedent basis for this limitation in the claim.

Regarding claim 13, the claim limitation recites “each correspond to an emotion of individual words” in line 2, which renders the claim vague and indefinite. It is unclear what “each” refers to. The examiner is unable to determine the scope of the claim.

Claim 15 recites the limitation "the emotion of the second user" in line 2. There is insufficient antecedent basis for this limitation in the claim.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 8, 9, 11, 12, and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Pham (US 2020/0106726 A1) in view of Beitchman et al. (US 10,936,589 B1), hereinafter Beitchman, in view of Mittal (US 2019/0005049 A1), and further in view of Arvapally et al. (US 2016/0239847 A1).
Regarding claim 1, Pham discloses a method, the method comprising:

identifying a text string (message) in an interaction client that enables a communication message sent from a first user of a first device to a plurality of second devices, the plurality of second devices including a second user of a second device ([0109]: if the messaging application is an instant messaging application, a message may be received as part of an instant messaging communication between the particular user 125a and one or more other users 125, e.g., in a messaging session (e.g., chat) having two participants, in a group messaging session that includes more than two participants, etc.; & [0111]: a message is received, which has been sent from a first user to a second user over a communication network);

selecting a media overlay of a plurality of media overlays (a list of message stickers) based on a score ([0114]: user input is provided to the second user device to command a display of a user interface that displays a list of message stickers; some or all of the one or more sticker suggestions are commanded by the system to be displayed automatically in response to receiving the message; & [0132]: the message sticker having the highest similarity to a suggested response is selected for that suggested response, e.g., based on the similarity score); and

recommending the selected media overlay for display within the interaction client (FIG. 8 & [0024]: one or more message stickers are identified based at least in part on the semantic concept, and the one or more message stickers are displayed in a user interface on the second device as suggested responses selected by the second user; & [0109]: suggested message sticker may be generated and provided to the particular user automatically, upon consent from the particular user and one or more other users that sent and/or received the image; & [0119]: one or more suggested responses are determined or generated based on the received message; & [0189]: user interface with suggested responses that include one or more message stickers; in the example shown in FIG. 8, a first user sends a message “Hi?” to a second user device).

Pham does not explicitly disclose identifying a root word in the text string based on one or more relevant tags. However, Beitchman discloses identifying a root word in the text string based on one or more relevant tags (Col. 17, lines 27-31: the capability information may be submitted as human-readable text which may be parsed to identify key words or values that can be specified according to predefined fields (e.g., locations) or identifiers (e.g., tags)).

It would have been obvious to a person with ordinary skill in the art before the effective filing date of the claimed invention to incorporate the feature of Beitchman in Pham because Pham discloses analyzing a message to determine a semantic concept associated with the message (abstract) and Beitchman further suggests identifying key words based on tags/identifiers (Col. 17, lines 27-31). One of ordinary skill in the art would be motivated to utilize the teachings of Beitchman in the Pham system in order to accurately identify key words.

Pham and Beitchman do not explicitly disclose identifying one or more associated children words in the text string associated with the identified root word.
However, Mittal discloses identifying a root word ([0434]: obtains a search term; & [0445]: parses the given text to identify text vectors making up the text; each text vector represents an ordered set of words); and identifying one or more associated children words associated with the identified root word ([0435]: identify one or more related words that are strongly associated with the search term within a corpus; & [0445]: parses the given text to identify text vectors making up the text; each text vector represents an ordered set of words; & [0455]: determines pairs of related words within the given text vector).

It would have been obvious to a person with ordinary skill in the art before the effective filing date of the claimed invention to incorporate the feature of Mittal in Pham and Beitchman because Pham and Beitchman disclose analyzing a message to determine a semantic concept associated with the message (Pham: abstract) and Mittal further suggests determining a score for a word pair ([0457]). One of ordinary skill in the art would be motivated to utilize the teachings of Mittal in the Pham and Beitchman system in order to provide contextually relevant results, as suggested by Mittal ([0032]).

Pham, Beitchman, and Mittal do not explicitly disclose generating a first score for the identified root word and generating a second set of scores for the one or more identified children words. However, Arvapally discloses generating a first score for the identified root word ([0050]: while the number of occurrences of the words in the message is directly indicative of the word scores for the message, other criteria is used to assign scores, per word); and generating a second set of scores for the one or more identified children words ([0050]: while the number of occurrences of the words in the message is directly indicative of the word scores for the message, other criteria is used to assign scores, per word).
It would have been obvious to a person with ordinary skill in the art before the effective filing date of the claimed invention to incorporate the feature of Arvapally in Pham, Beitchman, and Mittal because Pham, Beitchman, and Mittal disclose analyzing a message to determine a semantic concept associated with the message (Pham: abstract) and Arvapally further suggests determining word scores for the message ([0050]). One of ordinary skill in the art would be motivated to utilize the teachings of Arvapally in the Pham, Beitchman, and Mittal system in order to accurately analyze the message by determining the weight of each word.

Regarding claim 2, Pham, Beitchman, Mittal, and Arvapally disclose the method as described in claim 1. Pham further discloses the communication message includes a broadcast message that initiates a broadcast of the message to the plurality of the second devices ([0109]: if the messaging application is an instant messaging application, a message may be received as part of an instant messaging communication between the particular user 125a and one or more other users 125, e.g., in a messaging session (e.g., chat) having two participants, in a group messaging session that includes more than two participants, etc.).

Regarding claim 3, Pham, Beitchman, Mittal, and Arvapally disclose the method as described in claim 1. Pham further discloses the selected media overlay includes an emoji ([0062]: suggestions, e.g., suggested responses, may include one or more of: text (e.g., “Terrific!”), emoji (e.g., a smiley face, a sleepy face, etc.), images (e.g., photos from a user’s photo library), text generated based on templates with user data inserted in a field of the template (e.g., “her number is <Phone Number>” where the field “Phone Number” is filled in based on user data, if the user provides access to user data), links (e.g., Uniform Resource Locators), message stickers, etc.).
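For orientation, the claimed pipeline that this §103 rejection maps onto the references (parse the chat string, determine tags, identify a root word and its children, score them, select an overlay) can be sketched in a few lines. Everything below, including the `TAGS` vocabulary, `Overlay` catalog, and the constant scores, is an invented placeholder, not Snap's implementation and not anything disclosed in Pham, Beitchman, Mittal, or Arvapally:

```python
from dataclasses import dataclass

# Hypothetical tag vocabulary and overlay catalog; placeholders only.
TAGS = {"happy": "joy", "great": "joy", "sad": "sorrow", "day": "time"}

@dataclass
class Overlay:
    name: str
    tag: str
    weight: float

CATALOG = [Overlay("smiley", "joy", 1.0), Overlay("teardrop", "sorrow", 1.0)]

def select_overlay(text: str) -> str:
    # Parse the text string into text portions (words).
    words = [w.strip("!?.,") for w in text.lower().split()]
    # Determine relevant tags for the parsed portions.
    tagged = [(w, TAGS[w]) for w in words if w in TAGS]
    if not tagged:
        return ""
    # Identify the root word and its associated children words.
    (root, root_tag), children = tagged[0], [w for w, _ in tagged[1:]]
    # Generate a first score for the root and a second set for the children.
    first_score = 1.0
    second_scores = [0.5] * len(children)
    total = first_score + sum(second_scores)
    # Select the overlay whose tag matches the root, weighted by the scores.
    candidates = [o for o in CATALOG if o.tag == root_tag]
    if not candidates:
        return ""
    return max(candidates, key=lambda o: o.weight * total).name

print(select_overlay("what a great happy day"))  # "smiley"
```

The sketch collapses each reference's contribution into one step: Pham's sticker suggestion becomes the catalog lookup, Beitchman's tag-based key-word identification becomes the `TAGS` match, Mittal's related-word pairing becomes the root/children split, and Arvapally's per-word scoring becomes the two score assignments.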
Regarding claim 4, Pham, Beitchman, Mittal, and Arvapally disclose the method as described in claim 1. Pham further discloses parsing the text string into one or more text portions, wherein identifying the root word comprises identifying the root word in at least one of the text portions (Pham: [0120]: words in the message can be parsed or extracted and provided as concepts, used to determine responses using defined relationships; & Beitchman: Col. 17, lines 27-31: the capability information may be submitted as human-readable text which may be parsed to identify key words or values that can be specified according to predefined fields (e.g., locations) or identifiers (e.g., tags)). Therefore, the limitations of claim 4 are rejected in the analysis of claim 1 above, and the claim is rejected on that basis.

Regarding claim 8, Pham, Beitchman, Mittal, and Arvapally disclose the method as described in claim 4. Pham further discloses determining the one or more relevant tags (concepts/descriptor) based on at least one of the parsed text portions ([0120]: words in the message can be parsed or extracted and provided as concepts; & [0027]: semantic concepts can be identified for the message stickers, by obtaining descriptors associated with the standardized message stickers).

Regarding claim 9, Pham, Beitchman, Mittal, and Arvapally disclose the method as described in claim 1. Arvapally further discloses the second set of scores is generated based on an association with the identified root word (Arvapally: [0050]: while the number of occurrences of the words in the message is directly indicative of the word scores for the message, other criteria is used to assign scores, per word). Therefore, the limitations of claim 9 are rejected in the analysis of claim 1 above, and the claim is rejected on that basis.

Regarding claim 11, Pham, Beitchman, Mittal, and Arvapally disclose the method as described in claim 1.
Pham further discloses the recommendation is for display of the selected media overlay adjacent to the text string within the interaction client (FIG. 8 & [0024]: one or more message stickers are identified based at least in part on the semantic concept, and the one or more message stickers are displayed in a user interface on the second device as suggested responses selected by the second user; & [0109]: suggested message sticker may be generated and provided to the particular user automatically, upon consent from the particular user and one or more other users that sent and/or received the image; & [0119]: one or more suggested responses are determined or generated based on the received message; & [0189]: user interface with suggested responses that include one or more message stickers; in the example shown in FIG. 8, a first user sends a message “Hi?” to a second user device). Regarding claim 12, Pham, Beitchman, Mittal, and Arvapally disclose the method as described in claim 1. Pham further discloses the recommendation enables the first device to send a reply message that includes the selected media overlay to the second device (FIG. 8 & [0024]: one or more message stickers are identified based at least in part on the semantic concept, and the one or more message stickers are displayed in a user interface on the second device as suggested responses selected by the second user; & [0109]: suggested message sticker may be generated and provided to the particular user automatically, upon consent from the particular user and one or more other users that sent and/or received the image; & [0119]: one or more suggested responses are determined or generated based on the received message; & [0189]: user interface with suggested responses that include one or more message stickers; in the example shown in FIG. 8, a first user sends a message “Hi?” to a second user device). 
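The parsing and tag determination attributed to Pham ([0120]) and Beitchman (Col. 17) for claims 4 and 8 above might look like the following sketch. The stop-word list and helper names are hypothetical assumptions for illustration, not code from either reference.

```python
# Illustrative sketch: parse a message into text portions, then treat
# the non-stop-words in each portion as candidate concept tags, in the
# manner Pham [0120] and Beitchman (Col. 17) describe. The stop-word
# set and function names are assumptions, not from the references.
STOP_WORDS = {"a", "an", "the", "to", "is", "you", "see"}

def parse_portions(text: str) -> list[str]:
    """Split a text string into sentence-like text portions."""
    return [p.strip() for p in text.replace("!", ".").split(".") if p.strip()]

def extract_tags(portions: list[str]) -> set[str]:
    """Collect non-stop-words from each portion as relevant tags."""
    tags = set()
    for portion in portions:
        for word in portion.lower().split():
            if word not in STOP_WORDS:
                tags.add(word.strip("?,."))
    return tags

portions = parse_portions("Happy birthday! See you tonight.")
tags = extract_tags(portions)
```

A root word could then be identified within any one of the returned portions, as the claim 4 analysis above contemplates.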
Regarding claim 17, Pham, Beitchman, Mittal, and Arvapally disclose the method as described in claim 1. Pham further discloses receiving, via the interaction client, user selection of the selected media overlay among other media overlays ([0138]: a selection of one or more of the displayed suggested message stickers is received); and providing, by the first device, for transmission of the selected media overlay to the second device ([0139]: the selected suggested message sticker(s) are output as one or more messages to one or more recipient devices; a message including the selected message sticker can be sent over the network to one or more other client devices via messaging server and/or directly to the other client devices). Regarding claim 18, Pham, Beitchman, Mittal, and Arvapally disclose the method as described in claim 1. Pham further discloses the text string comprises words and at least one emoji ([0111]: the message is a text message, an image, a video, audio data for an audio message, etc.; & [0120]: words in the message can be parsed or extracted; & [0084]: a message sticker is received, which has been sent from a first user of a first device to a second user of a second device, e.g., over a communication network; the message sticker may be an image, e.g., a static image (e.g., a photograph, an emoji, or other image), a cinemagraph or animated image (e.g., an image that includes motion, a sticker that includes animation and audio, etc.), a video, audio data for an audio message, etc.). Regarding claims 19 and 20, the limitations of claims 19 and 20 are rejected in the analysis of claim 1 above and these claims are rejected on that basis. Claim(s) 5 is/are rejected under 35 U.S.C. 103 as being unpatentable over Pham in view of Beitchman, in view of Mittal, in view of Arvapally, and further in view of Veeramuthu et al. (US 11,405,349 B1), hereinafter Veeramuthu. 
Regarding claim 5, Pham, Beitchman, Mittal, and Arvapally disclose the method as described in claim 4. Pham further discloses determining a relevancy score for each of the plurality of media overlays that includes the selected media overlay based on the individual one or more relevant tags ([0136]: confidence scores are used in determining correspondence between suggested responses and message stickers); and ordering the selected media overlay with other media overlays within the interaction client based on the determined relevancy score of each of the plurality of media overlays ([0136]: these scores are used to rank the message stickers based on the confidence of the correspondence between the message stickers and the suggested responses). Pham, Beitchman, Mittal, and Arvapally do not explicitly disclose determining a relevancy score for each media overlay based on the generated first and second scores for each of the one or more parsed text portions. However, Veeramuthu discloses determining a relevancy score for each media overlay based on the one or more relevant tags and the generated first and second scores for each of the one or more parsed text portions (claim 4: the content score is based on determined emotion of the message and a determined influence score of the message). It would have been obvious to a person with ordinary skill in the art before the effective filing date of the claimed invention to incorporate the feature of Veeramuthu in Pham, Beitchman, Mittal, and Arvapally because Pham, Beitchman, Mittal, and Arvapally disclose analyzing a message to determine a semantic concept associated with the message (Pham: abstract) and Veeramuthu further suggests determining a content score based on a determined emotion of the message and a determined influence score of the message (claim 4). 
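The relevancy scoring and ordering discussed for claim 5 above (Pham [0136]; Veeramuthu) can be sketched as follows. The scoring rule, data shapes, and names are hypothetical assumptions chosen for illustration; they are not the implementation of any cited reference.

```python
# Hedged sketch of tag-based relevancy scoring and ordering, loosely in
# the manner of Pham [0136] and Veeramuthu: each overlay (sticker) gets
# a score from its tag overlap with the message, and overlays are then
# ordered by that score. All names and data here are hypothetical.
def relevancy_score(overlay_tags: set[str], message_tags: set[str],
                    word_scores: dict[str, float]) -> float:
    """Sum the message word scores for every tag the overlay shares."""
    return sum(word_scores.get(tag, 1.0) for tag in overlay_tags & message_tags)

def order_overlays(overlays: dict[str, set[str]], message_tags: set[str],
                   word_scores: dict[str, float]) -> list[str]:
    """Return overlay names ordered from most to least relevant."""
    return sorted(
        overlays,
        key=lambda name: relevancy_score(overlays[name], message_tags, word_scores),
        reverse=True,
    )

overlays = {"cake": {"birthday", "cake"}, "wave": {"hello"}}
ranked = order_overlays(overlays, {"happy", "birthday"}, {"birthday": 2.0})
# "cake" shares the "birthday" tag and ranks first
```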
One of ordinary skill in the art would be motivated to utilize the teachings of Veeramuthu in the Pham, Beitchman, Mittal, and Arvapally system in order to provide suitable/relevant content to the user. Claim(s) 6 is/are rejected under 35 U.S.C. 103 as being unpatentable over Pham in view of Beitchman, in view of Mittal, in view of Arvapally, and further in view of Cypes et al. (US 2011/0225250 A1), hereinafter Cypes. Regarding claim 6, Pham, Beitchman, Mittal, and Arvapally disclose the method as described in claim 4. Pham, Beitchman, Mittal, and Arvapally do not explicitly disclose determining the one or more relevant tags based on the one or more parsed text portions comprises iteratively scanning through each text portion of the text string. However, Cypes discloses determining the one or more relevant tags based on the one or more parsed text portions comprises iteratively scanning through each text portion of the text string ([0050]: iteratively scan and index over the textual content in the message). It would have been obvious to a person with ordinary skill in the art before the effective filing date of the claimed invention to incorporate the feature of Cypes in Pham, Beitchman, Mittal, and Arvapally because Pham, Beitchman, Mittal, and Arvapally disclose analyzing a message to determine a semantic concept associated with the message (Pham: abstract) and Cypes further suggests iteratively scanning the textual content in the message ([0050]). One of ordinary skill in the art would be motivated to utilize the teachings of Cypes in the Pham, Beitchman, Mittal, and Arvapally system in order to ensure the system retrieves proper information by iteratively scanning the message. Claim(s) 7 is/are rejected under 35 U.S.C. 103 as being unpatentable over Pham in view of Beitchman, in view of Mittal, in view of Arvapally, and further in view of Dolph et al. (US 2019/0121608 A1), hereinafter Dolph. 
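The iterative scan-and-index step attributed to Cypes ([0050]) for claim 6 above might be sketched as below. The index structure and names are illustrative assumptions, not the reference's actual data model.

```python
# Minimal sketch of iteratively scanning and indexing textual content,
# in the manner Cypes [0050] describes: iterate over each text portion
# and record where each term first appears. Names are hypothetical.
def scan_and_index(portions: list[str]) -> dict[str, int]:
    """Iterate over each text portion, indexing terms to a portion number."""
    index: dict[str, int] = {}
    for i, portion in enumerate(portions):
        for term in portion.lower().split():
            index.setdefault(term, i)  # keep the first portion containing the term
    return index

index = scan_and_index(["Happy birthday", "See you tonight"])
```

Relevant tags could then be read off the resulting index, one text portion at a time, as the claim 6 analysis contemplates.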
Regarding claim 7, Pham, Beitchman, Mittal, and Arvapally disclose the method as described in claim 4. Pham further discloses the one or more parsed text portions comprise one or more words ([0120]: words in the message). Pham, Beitchman, Mittal, and Arvapally do not explicitly disclose determining one or both of a synonym and an antonym of at least one of the one or more words, wherein determining the one or more relevant tags comprises determining the one or more relevant tags based on one or both of the synonym and the antonym. However, Dolph discloses determining one or both of a synonym and an antonym of at least one of the one or more words, wherein determining the one or more relevant tags comprises determining the one or more relevant tags based on one or both of the synonym and the antonym ([0007]: determining synonyms to the select word or phrase with the natural language processing system, and generating a select object synonym data structure comprising the select word or phrase and the synonyms to the select word or phrase associated with the select object; & [0103]: determine synonyms to the analyzed word or phrase). It would have been obvious to a person with ordinary skill in the art before the effective filing date of the claimed invention to incorporate the feature of Dolph in Pham, Beitchman, Mittal, and Arvapally because Pham, Beitchman, Mittal, and Arvapally disclose parsing words in a message (Pham: [0120]) and Dolph further suggests determining synonyms to the select word or phrase ([0007]). One of ordinary skill in the art would be motivated to utilize the teachings of Dolph in the Pham, Beitchman, Mittal, and Arvapally system in order to expand the analysis of the message with words that have the same meanings. Claim(s) 10 is/are rejected under 35 U.S.C. 103 as being unpatentable over Pham in view of Beitchman, in view of Mittal, in view of Arvapally, and further in view of Rai (US 2021/0359963 A1). 
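The synonym expansion attributed to Dolph ([0007], [0103]) for claim 7 above can be sketched as follows. The synonym table here is a hypothetical stand-in for a natural-language-processing lookup; the names are assumptions for illustration only.

```python
# Illustrative sketch of expanding a tag set with synonyms, in the
# manner Dolph [0007]/[0103] describes (antonyms could be handled the
# same way). The SYNONYMS table is a hypothetical stand-in for an NLP
# synonym lookup; it is not data from the reference.
SYNONYMS = {"happy": {"glad", "joyful"}, "sad": {"unhappy"}}

def expand_tags(tags: set[str]) -> set[str]:
    """Add known synonyms of each tag to the tag set."""
    expanded = set(tags)
    for tag in tags:
        expanded |= SYNONYMS.get(tag, set())
    return expanded

expand_tags({"happy"})
```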
Regarding claim 10, Pham, Beitchman, Mittal, and Arvapally disclose the method as described in claim 1. Pham, Beitchman, Mittal, and Arvapally do not explicitly disclose the media overlay is further selected based on the one or more relevant tags. However, Rai discloses the media overlay is further selected based on the one or more relevant tags (claim 15: determine the at least one emoji based on a dictionary mapped between a hashtag in the text to at least one other related emoji). It would have been obvious to a person with ordinary skill in the art before the effective filing date of the claimed invention to incorporate the feature of Rai in Pham, Beitchman, Mittal, and Arvapally because Pham, Beitchman, Mittal, and Arvapally disclose displaying sticker suggestions to a user (Pham: [0114]) and Rai further suggests determining at least one emoji based on a dictionary mapped between a hashtag in the text to at least one other related emoji (claim 15). One of ordinary skill in the art would be motivated to utilize the teachings of Rai in the Pham, Beitchman, Mittal, and Arvapally system in order to provide a relevant emoji to the user. Claim(s) 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Pham in view of Beitchman, in view of Mittal, in view of Arvapally, and further in view of Leydon et al. (US 9,043,196 B1), hereinafter Leydon. Regarding claim 13, Pham, Beitchman, Mittal, and Arvapally disclose the method as described in claim 1. Pham, Beitchman, Mittal, and Arvapally do not explicitly disclose the first score for the identified root word and the second set of scores for the identified children words each correspond to an emotion of individual words, the emotion reflecting a mood of the second user, the selected media overlay being selected based on the first score and the second set of scores corresponding to individual emotions. 
However, Leydon discloses the first score for the identified root word and the second set of scores for the identified children words each correspond to an emotion of individual words, the emotion reflecting a mood of the second user, the selected media overlay being selected based on the first score and the second set of scores corresponding to individual emotions (Col. 23, lines 13-40: one or more candidate emoticons are identified, wherein each candidate emoticon is associated with a respective score (e.g., a numerical value) indicating relevance of the candidate emoticon to the text and the sentiment; one or more candidate emoticons that have respective highest scores are then provided for user selection; a selection from the user of one or more of the provided emoticons is then received, the one or more selected emoticons is inserted into the text field at the current position of the input cursor or in proximity to the current position). It would have been obvious to a person with ordinary skill in the art before the effective filing date of the claimed invention to incorporate the feature of Leydon in Pham, Beitchman, Mittal, and Arvapally because Pham, Beitchman, Mittal, and Arvapally disclose displaying sticker suggestions to a user (Pham: [0114]) and Leydon further suggests identifying one or more candidate emoticons, and each candidate emoticon is associated with a respective score indicating relevance of the candidate emoticon to the text and the sentiment (Col. 24, lines 13-40). One of ordinary skill in the art would be motivated to utilize the teachings of Leydon in the Pham, Beitchman, Mittal, and Arvapally system in order to provide more relevant sticker suggestions to users based on the sentiment of text in a text field. Claim(s) 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Pham in view of Beitchman, in view of Mittal, in view of Arvapally, and further in view of Moon et al. (US 2023/0064599 A1), hereinafter Moon. 
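The candidate-scoring approach attributed to Leydon (Col. 23) for claim 13 above, in which each candidate emoticon carries a numerical score for its relevance to the text and its sentiment and the highest-scoring candidates are offered for selection, can be sketched as below. The scores and names are illustrative assumptions.

```python
# Hedged sketch of Leydon-style candidate selection (Col. 23): each
# candidate emoticon has a numerical relevance score for the text and
# its sentiment, and the highest-scoring candidates are surfaced for
# user selection. Scores and names here are hypothetical.
def top_candidates(candidates: dict[str, float], n: int = 3) -> list[str]:
    """Return the n candidate emoticons with the highest relevance scores."""
    return sorted(candidates, key=candidates.get, reverse=True)[:n]

candidates = {":)": 0.9, ":(": 0.1, ":D": 0.7}
top_candidates(candidates, n=2)
```

The user's pick from the returned list would then be inserted near the input cursor, as the cited passage describes.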
Regarding claim 14, Pham, Beitchman, Mittal, and Arvapally disclose the method as described in claim 1. Pham further discloses selecting the media overlay of the plurality of media overlays comprises selecting the media overlay of the set of media overlays ([0114]: user input is provided to the second user device to command a display of a user interface that displays a list of message stickers; some or all of the one or more sticker suggestions are commanded by the system to be displayed automatically in response to receiving the message; & [0132]: the message sticker having the highest similarity to a suggested response is selected for that suggested response, e.g., based on the similarity score). Pham, Beitchman, Mittal, and Arvapally do not explicitly disclose selecting a set of media overlays by identifying a plurality of media overlays selected based on the first score and the second scores corresponding to individual emotions, the plurality of media overlays including the selected media overlay, and randomly selecting the set of media overlays from the plurality of media overlays. 
However, Moon discloses selecting a set of media overlays by identifying a plurality of media overlays selected based on the first score and the second scores corresponding to individual emotions, the plurality of media overlays including the selected media overlay, and randomly selecting the set of media overlays from the plurality of media overlays, wherein selecting the media overlay of the plurality of media overlays comprises selecting the media overlay of the set of media overlays ([0101]: the electronic device may determine a recommended animation effect corresponding to the emotion score; & [0088]: the electronic device may randomly determine one of the plurality of animation representations as a fourth recommended animation representation; the electronic device may randomly select a fourth recommended animation representation from among animation representations other than the first recommended animation representation, the second recommended animation representation, and the third recommended animation representation among the plurality of animation representations). It would have been obvious to a person with ordinary skill in the art before the effective filing date of the claimed invention to incorporate the feature of Moon in Pham, Beitchman, Mittal, and Arvapally because Pham, Beitchman, Mittal, and Arvapally disclose displaying sticker suggestions to a user (Pham: [0114]) and Moon further suggests selecting a recommended animation representation corresponding to an emotion score ([0101]). One of ordinary skill in the art would be motivated to utilize the teachings of Moon in the Pham, Beitchman, Mittal, and Arvapally system in order to provide more relevant sticker suggestions to users. Claim(s) 15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Pham in view of Beitchman, in view of Mittal, in view of Arvapally, in view of Moon, and further in view of Leydon. 
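Moon's random-selection step ([0088]) for claim 14 above, where a further recommendation is drawn at random from the candidates not already among the top emotion-scored picks, might be sketched as follows. The overlay names and function signature are hypothetical assumptions.

```python
# Minimal sketch of Moon [0088]: after the top emotion-scored
# recommendations are fixed, one more recommendation is chosen at
# random from the remaining candidates. Names are hypothetical.
import random

def pick_random_extra(all_overlays: list[str], already_recommended: set[str],
                      rng: random.Random) -> str:
    """Randomly select one overlay not already among the recommendations."""
    remaining = [o for o in all_overlays if o not in already_recommended]
    return rng.choice(remaining)

extra = pick_random_extra(["cake", "wave", "heart", "star"],
                          {"cake", "wave"}, random.Random(0))
# extra is drawn from the overlays outside the existing recommendations
```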
Regarding claim 15, Pham, Beitchman, Mittal, Arvapally, and Moon disclose the method of claim 14. Pham, Beitchman, Mittal, Arvapally, and Moon do not explicitly disclose at least one of the first score for the identified root word or the second set of scores for the identified children words correspond to an amount of the emotion of the second user, the selected media overlay being selected based on the amount of the emotion. However, Leydon discloses at least one of the first score for the identified root word or the second set of scores for the identified children words correspond to an amount of the emotion of the second user, the selected media overlay being selected based on the amount of the emotion (Col. 23, lines 13-40: one or more candidate emoticons are identified, wherein each candidate emoticon is associated with a respective score (e.g., a numerical value) indicating relevance of the candidate emoticon to the text and the sentiment; one or more candidate emoticons that have respective highest scores are then provided for user selection; a selection from the user of one or more of the provided emoticons is then received, the one or more selected emoticons is inserted into the text field at the current position of the input cursor or in proximity to the current position). It would have been obvious to a person with ordinary skill in the art before the effective filing date of the claimed invention to incorporate the feature of Leydon in Pham, Beitchman, Mittal, Arvapally, and Moon because Pham, Beitchman, Mittal, Arvapally, and Moon disclose displaying sticker suggestions to a user (Pham: [0114]) and Leydon further suggests identifying one or more candidate emoticons, and each candidate emoticon is associated with a respective score indicating relevance of the candidate emoticon to the text and the sentiment (Col. 24, lines 13-40). 
One of ordinary skill in the art would be motivated to utilize the teachings of Leydon in the Pham, Beitchman, Mittal, Arvapally, and Moon system in order to provide more relevant sticker suggestions to users based on the sentiment of text in a text field. Claim(s) 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Pham in view of Beitchman, in view of Mittal, in view of Arvapally, and further in view of Fuzell-Casey (US 2014/0282237 A1). Regarding claim 16, Pham, Beitchman, Mittal, and Arvapally disclose the method as described in claim 1. Pham further discloses the selected media overlay corresponds to a predefined overlay for displaying in a predefined order ([0035]: stickers are also based on themes; & [0036]: a message sticker can be included in a group of related message stickers). Pham, Beitchman, Mittal, and Arvapally do not explicitly disclose the method further includes ranking media overlays that includes the selected media overlay based on a number of relevant tags associated with each media overlay of the media overlays. However, Fuzell-Casey discloses the method further includes ranking media overlays that includes the selected media overlay based on a number of relevant tags associated with each media overlay of the media overlays ([0056]: the identified content is ranked based on a number of search mood identifiers that corresponds to the identified content). It would have been obvious to a person with ordinary skill in the art before the effective filing date of the claimed invention to incorporate the feature of Fuzell-Casey in Pham, Beitchman, Mittal, and Arvapally because Pham, Beitchman, Mittal, and Arvapally disclose ranking suggested message stickers (Pham: [0129]) and Fuzell-Casey further suggests ranking identified content based on a number of identifiers that correspond to the content ([0056]). 
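The rank-by-matching-tag-count approach attributed to Fuzell-Casey ([0056]) for claim 16 above can be sketched as below; the item names, tag sets, and function signature are hypothetical assumptions for illustration.

```python
# Illustrative sketch of ranking by count of matching identifiers, in
# the manner Fuzell-Casey [0056] describes for mood identifiers:
# content matching more query tags ranks higher. Data is hypothetical.
def rank_by_tag_count(items: dict[str, set[str]], query_tags: set[str]) -> list[str]:
    """Order items by how many query tags each item's tag set matches."""
    return sorted(items, key=lambda name: len(items[name] & query_tags), reverse=True)

items = {"a": {"calm", "happy"}, "b": {"happy"}, "c": set()}
rank_by_tag_count(items, {"calm", "happy"})
```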
One of ordinary skill in the art would be motivated to utilize the teachings of Fuzell-Casey in the Pham, Beitchman, Mittal, and Arvapally system in order to provide a relevant ranking of stickers to users. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Zonka (US 2015/0334067 A1): identifies any tags included in the message and reconciles or translates the tags to particular emoticons and/or audio files ([0028]). Zhang et al. (US 2014/0046976 A1): matches the terms in the input text content with the terms in the concept definition, and determines a relevance score based on the number of terms in the text content that match the terms in the concept definition, or based on other calculation methods such as the importance scores of the terms in the text content or in the concept definition dataset ([0045]). Kirk (US 2015/0058103 A1): calculates a sentiment value reflecting user sentiment for the subject or category of the content, based on the sentiment score of individual terms ([0095]). Any inquiry concerning this communication or earlier communications from the examiner should be directed to KAYLEE J HUANG whose telephone number is (571)272-0080. The examiner can normally be reached Monday-Friday 9AM-5PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Joon H Hwang, can be reached on 571-272-4036. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. 
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. Kaylee Huang 02/27/2026 /KAYLEE J HUANG/Primary Examiner, Art Unit 2447

Prosecution Timeline

Dec 04, 2024
Application Filed
Feb 28, 2026
Non-Final Rejection — §103, §112, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603902
APPARATUS AND METHOD FOR CONSTRUCTING INTRUSION DETECTION SYSTEM APPLIED TO CAN COMMUNICATION USING DETECTION POLICY RULE
2y 5m to grant Granted Apr 14, 2026
Patent 12568038
DYNAMIC ANYCAST CLIENT ROUTING AND HEALTH MANAGEMENT
2y 5m to grant Granted Mar 03, 2026
Patent 12562933
Limited Communications Threads Associated with Construction Based Data Objects
2y 5m to grant Granted Feb 24, 2026
Patent 12556574
USING CROSS WORKLOADS SIGNALS TO REMEDIATE PASSWORD SPRAYING ATTACKS
2y 5m to grant Granted Feb 17, 2026
Patent 12554878
PHONE NUMBER OBFUSCATION IN SOCIAL MEDIA PLATFORMS
2y 5m to grant Granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
75%
Grant Probability
99%
With Interview (+51.2%)
2y 8m
Median Time to Grant
Low
PTA Risk
Based on 349 resolved cases by this examiner. Grant probability derived from career allow rate.
