Prosecution Insights
Last updated: April 19, 2026
Application No. 18/520,372

USER INTERFACES AND TECHNIQUES FOR EDITING, CREATING, AND USING STICKERS

Final Rejection §103
Filed: Nov 27, 2023
Examiner: ARMOUCHE, HADI S
Art Unit: 2409
Tech Center: 2400 — Computer Networks
Assignee: Apple Inc.
OA Round: 4 (Final)

Grant Probability: 69% (Favorable)
OA Rounds: 5-6
To Grant: 4y 3m
With Interview: 91%

Examiner Intelligence

Career Allow Rate: 69% (above average; 230 granted / 333 resolved; +11.1% vs TC avg)
Interview Lift: +22.2% (strong; measured over resolved cases with interview)
Avg Prosecution: 4y 3m (typical timeline)
Total Applications: 338 (career history; 5 currently pending; across all art units)

Statute-Specific Performance

§101: 11.1% (-28.9% vs TC avg)
§103: 46.5% (+6.5% vs TC avg)
§102: 20.8% (-19.2% vs TC avg)
§112: 16.8% (-23.2% vs TC avg)
Tech Center averages are estimates • Based on career data from 333 resolved cases
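The headline rates above follow directly from the raw career counts. A minimal sketch, assuming the dashboard's "vs TC avg" deltas are simple percentage-point differences; the 91% with-interview figure is taken from the dashboard, not derived here:

```python
# Career counts reported on the dashboard.
granted = 230
resolved = 333

# Career allowance rate: 230 / 333 ≈ 69.1%, shown as 69%.
allow_rate = 100 * granted / resolved
print(round(allow_rate))  # 69

# The "+11.1% vs TC avg" delta implies a Tech Center average near 58%.
tc_avg = allow_rate - 11.1
print(round(tc_avg))  # 58

# Interview lift: the 91% with-interview rate minus the overall rate
# reproduces roughly the +22% lift the dashboard reports.
with_interview = 91.0  # dashboard value, assumed exact for this sketch
print(round(with_interview - allow_rate))  # 22
```

The small rounding gaps (69.1% shown as 69%, 21.9 points shown as +22.2%) suggest the dashboard computes these figures from unrounded internal values.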

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This is a final office action. Claims 73-98 were considered.

Response to Amendment

This action is in response to the communication filed on 10/14/2025.
a. Claims 73, 77-81 and 83-98 are pending in this application.
b. Claims 73, 94, 96, 97 and 98 have been amended.
c. Claims 1-72 and 74-76 were previously canceled. Claim 82 has been canceled.

Response to Arguments

Regarding Claim Rejections – 35 USC § 103: Applicant's arguments, see pages 12-17 of REMARKS, filed on 10/14/2025, with respect to Claim Rejections – 35 USC § 103 have been fully considered. Applicant's arguments with respect to claims 73, 94, 96 and 97-98 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Additionally, applicant argues in substance that:

a. "During the examiner interview, Examiner Khakural acknowledged that the cited art fails to disclose each element of independent claim 73. Applicant agrees. For example, none of the cited references teach, suggest, or otherwise disclose, 'in response to detecting the selection of the visual effect option, displaying the respective sticker having the selected visual effect, wherein the selected visual effect is a visual effect selected from a group consisting of: an outline effect, a three-dimensional effect, an iridescence effect, and a hand-drawn effect,' (emphasis added) as recited in independent claim 73." (see remarks, page 13)

Regarding the remarks that "none of the cited references teach, suggest, or otherwise disclose… selected visual effect is a visual effect selected from a group consisting of: an outline effect, a three-dimensional effect, an iridescence effect, and a hand-drawn effect", the examiner disagrees.
After further consideration, the examiner would like to point out that the above limitation uses the language "visual effect selected from a group consisting of", which requires one effect selected from the group (i.e. at least one effect from the Markush group). As described in the rejection below, the Song reference discloses that the selected visual effect can be a three-dimensional effect and/or a hand-drawn effect in Col 7, 40-49 and figs. 3 and 4, which satisfies at least one effect selected from the Markush group. Therefore, the Song reference teaches the above argued limitation. A Markush grouping is a closed group of alternatives, i.e., the selection is made from a group "consisting of" (rather than "comprising" or "including") the alternative members. Abbott Labs., 334 F.3d at 1280, 67 USPQ2d at 1196. See MPEP 2173.05(h).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 73, 77-80, 83-85, 88-93, 95, and 97-98 are rejected under 35 U.S.C. 103 as being unpatentable over Song et al. (US 10269164 B1, hereinafter Song) in view of Van Os et al. (US 10375313 B1, hereinafter Van) further in view of Anzures et al. (US 2018/0335927 A1, hereinafter Anzures).

Regarding claim 73, Song teaches a computer system configured to communicate with a display generation component and one or more input devices (Fig. 1(110)), comprising: one or more processors; and memory storing one or more programs configured to be executed by the one or more processors (Fig. 1(115)), the one or more programs including instructions for:

receiving, via the one or more input devices, a request to generate a sticker (Fig. 5(501) and [Col 8, 55-57]: The messaging application 112 receives 505 a request from a user of the messaging system 130 to create a custom sticker.);

in response to receiving the request to generate a sticker, initiating a process for generating a sticker ([Col 7, 3-6]: Upon receiving a request to create a custom sticker, the messaging application 112 provides an interface 301 including a button 305 to capture an image using the selected back facing camera (i.e. capturing an image is the process for generating a sticker)), including: displaying (Fig. 3(301)), via the display generation component: a media item that includes a representation of a respective object ([Col 7, 6-19]: The interface 301 additionally includes a text instruction 310 to take a photo or to select an image from a local saved images store on the mobile device. The image is used as a base image for the custom sticker creation process.), and an indication that the respective object is selected for generating a sticker ([Col 7, 19-22]: The messaging application 112 identifies the image source as the back facing camera of the client device 110 and provides an interface 302 allowing a user to provide input indicating a subject of the custom sticker.), wherein the respective object is automatically selected in the media item without input from a user of the computer system ([Col 4, 60-67]: If the image source is identified as the front facing camera of the client device 110, an embodiment of the cropping module 215 activates an automatic mode that automatically applies a facial recognition process to the image to identify the subject for the custom sticker. The facial recognition process recognizes a human face contained within the image and automatically selects the area of the image containing the face as the subject for the custom sticker.);

and generating a respective sticker having an appearance based on the representation of the respective object that was automatically selected in the media item (Fig. 5(525) and [Col 6, 4-11]: The custom sticker creation module 225 receives the image and the associated data and processes the image to create a custom sticker. In one embodiment, the custom sticker creation module 225 crops the image according to the cropping data to create a cropped image. The cropped image includes only the portion of the image constituting the subject.);

displaying, via the display generation component, a plurality of stickers, wherein the plurality of stickers includes a representation of the respective sticker having the appearance based on the representation of the respective object that was automatically selected in the media item (Fig. 3(385) and [Col 7, 55-60]: The custom sticker 375 is then accessible via a custom sticker interface 304 in the messaging application 112. The user of the messaging application 112 can select the custom sticker 375 to send it to other users of the messaging system 130 on messaging threads 380 (i.e. fig. 3(375) shows two stickers of cars generated from the picture));

while displaying the plurality of stickers that includes the respective sticker that was generated having the appearance based on the representation of the respective object that was automatically selected in the media item ([Col 7, 18-22]: The image is used as a base image for the custom sticker creation process. The messaging application 112 identifies the image source as the back facing camera of the client device 110 and provides an interface 302 allowing a user to provide input indicating a subject of the custom sticker.), receiving a request to edit one or more characteristics of the respective sticker ([Col 7, 3-6]: Upon receiving a request to create a custom sticker, the messaging application 112 provides an interface 301 including a button 305 to capture an image using the selected back facing camera. [Col 7, 60-64]: As shown in FIG. 3, the custom sticker interface 304 additionally includes an option 385 to create one or more additional custom stickers. In one embodiment, the custom sticker interface 304 allows users to modify previously created custom stickers.);

in response to receiving the request to edit one or more characteristics of the respective sticker, displaying a sticker editing interface that includes a visual effect option that is selectable to apply a visual effect to the respective sticker ([Col 7, 18-45]: The image is used as a base image for the custom sticker creation process. The messaging application 112 identifies the image source as the back facing camera of the client device 110 and provides an interface 302 allowing a user to provide input indicating a subject of the custom sticker. The interface 303 shows the custom content on a drawing layer above the cropping layer and the image. The UI includes icons 355 allowing the user to select between emojis, text, and drawing. Each icon in the interface 303 is associated with additional options to customize the drawing process (i.e. displaying the interface to include the visual effect));

detecting, via the one or more input devices, a selection of the visual effect option ([Col 8, 31-38]: As described above in conjunction with FIG. 3, the drawing mode interface 403 allows users to add custom content to the custom sticker, including emojis, text, and drawings. Upon receiving an indication from the user that the custom content step is complete, the messaging application 112 crops, resizes, and stores the image as a custom sticker 425 accessible through the messaging interface.);

and in response to detecting the selection of the visual effect option, displaying the respective sticker having the selected visual effect (Fig. 4(404) and [Col 8, 40-41]: The custom sticker 425 is then accessible via a custom sticker interface 404 in the messaging application 112 (i.e. displaying the sticker having the selected LOL effect)), wherein the selected visual effect is a visual effect selected from a group consisting of: an outline effect, a three-dimensional effect ([Col 7, 50-55]: Upon receiving an indication from the user that the custom content step is complete, the messaging application 112 crops, resizes, and stores the image as a custom sticker 375 accessible through the messaging interface (fig. 3(375) and fig. 4(475) show the sticker having a 3D visual effect)), an iridescence effect, and a hand-drawn effect ([Col 7, 40-49]: The interface 303 shows the custom content on a drawing layer above the cropping layer and the image. The UI includes icons 355 allowing the user to select between emojis, text, and drawing. Each icon in the interface 303 is associated with additional options to customize the drawing process (i.e. the visual effect selected can be a hand-drawing effect created using icon fig. 3(355))).
Song however does not teach an options menu that is different from the media item and includes a graphical user interface object that is selectable to generate a sticker based on the media item; detecting, via the one or more input devices, a set of one or more inputs that includes an input directed to the options menu; in response to detecting the set of one or more inputs that includes an input directed to the options menu: in accordance with a determination that the set of one or more inputs includes a selection of the graphical user interface object, generating a respective sticker; and in accordance with a determination that the set of one or more inputs does not include a selection of the graphical user interface object, forgoing generating the respective sticker having an appearance based on the representation of the respective object that was automatically selected in the media item.

Van teaches displaying an options menu that is different from the media item ([Col 40, 44-46]: In FIG. 6G, in response to detecting input 626, device 600 displays avatar options menu 628 with a scrollable listing of avatar options 630 (i.e. Fig. 6G(628) is a menu different from the image displayed)) and includes a graphical user interface object that is selectable to generate a sticker based on the media item ([Col 40, 46-50]: Avatar options menu 628 also includes selection region 629 for indicating a selected one of avatar options 630. As shown in FIG. 6G, robot avatar option 630-3 is positioned in selection region 629, which indicates robot avatar option 630-1 is selected. [Col 45, 40-44]: In response to detecting input 684 (e.g. tap gesture on display 601) on done affordance 618, device 600 displays representation of media item 685 in message-compose field 608 of messaging user interface 603, as shown in FIG. 6AO. (i.e. the menu includes the options of avatars that can be used to generate a sticker/representation to send via message)); detecting, via the one or more input devices, a set of one or more inputs that includes an input directed to the options menu ([Col 50, 46-54]: The electronic device (e.g., 600) detects, via the one or more input devices, selection (e.g., 626) of a second one of the effects option affordances (e.g., avatar affordance 624-1). In some embodiments, in response to detecting selection of the second one of the effects option affordances, the electronic device ceases to display the plurality of effects option affordances (e.g., 624) and displays an avatar selection region (e.g., avatar menu 628) having a plurality of avatar affordances); in response to detecting the set of one or more inputs that includes an input directed to the options menu: in accordance with a determination that the set of one or more inputs includes a selection of the graphical user interface object, generating a respective sticker ([Col 45, 40-44]: In response to detecting input 684 (e.g. tap gesture on display 601) on done affordance 618, device 600 displays representation of media item 685 in message-compose field 608 of messaging user interface 603, as shown in FIG. 6AO. (i.e. generating a sticker/representation to send via message)).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Song to incorporate the teachings of Van: an options menu that is different from the media item and includes a graphical user interface object that is selectable to generate a sticker based on the media item; detecting, via the one or more input devices, a set of one or more inputs that includes an input directed to the options menu; and, in response to detecting the set of one or more inputs that includes an input directed to the options menu, in accordance with a determination that the set of one or more inputs includes a selection of the graphical user interface object, generating a respective sticker. One of ordinary skill in the art would have been motivated to combine the teachings in order to display visual effects in a messaging application (Van, [Col 37, 27-28]).

Song in view of Van however does not teach in accordance with a determination that the set of one or more inputs does not include a selection of the graphical user interface object, forgoing generating the respective sticker having an appearance based on the representation of the respective object that was automatically selected in the media item.

Anzures teaches in accordance with a determination that the set of one or more inputs does not include a selection of the graphical user interface object, forgoing generating the respective sticker having an appearance based on the representation of the respective object that was automatically selected in the media item ([252]: Copy button 726 copies animated virtual avatar 700 to a clipboard of device 600. Save button 728 saves animated virtual avatar 700 to device 600 (e.g., to a database or library that can be later accessed by applications installed on device 600). More button 730 displays additional operations that can be performed with respect to animated virtual avatar 700 (i.e. in response to input directed to the save button in the menu, saving the avatar to the device; here, saving the avatar forgoes generation of the sticker)).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Song in view of Van to incorporate the teachings of Anzures: in accordance with a determination that the set of one or more inputs does not include a selection of the graphical user interface object, forgoing generating the respective sticker having an appearance based on the representation of the respective object in the media item. One of ordinary skill in the art would have been motivated to combine the teachings in order to perform additional operations with respect to the animated virtual avatar (Anzures, [252]).

Regarding claim 77, Song in view of Van and Anzures teaches the computer system of claim 73. Song teaches the one or more programs further including instructions for: in response to detecting the set of one or more inputs: in accordance with a determination that the set of one or more inputs includes a selection of the graphical user interface object and a representation of a second object in the media item is selected for generating the respective sticker ([Col 8, 13-16]: Upon receiving an image from the front facing camera, the messaging application 112 generates the UI illustrated in an interface 402 for selecting, customizing, and processing the image to create a custom sticker.), generating a second sticker having an appearance based on the representation of the second object in the media item ([Col 8, 34-38]: Upon receiving an indication from the user that the custom content step is complete, the messaging application 112 crops, resizes, and stores the image as a custom sticker 425 accessible through the messaging interface (i.e. generating a sticker having the appearance of the second object in the media item)).
Van teaches one or more inputs that includes an input directed to the options menu ([Col 50, 46-54]: The electronic device (e.g., 600) detects, via the one or more input devices, selection (e.g., 626) of a second one of the effects option affordances (e.g., avatar affordance 624-1). In some embodiments, in response to detecting selection of the second one of the effects option affordances, the electronic device ceases to display the plurality of effects option affordances (e.g., 624) and displays an avatar selection region (e.g., avatar menu 628) having a plurality of avatar affordances (i.e. the input is directed to the menu 628)).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Song in view of Van and Anzures to incorporate the teachings of Van: one or more inputs that includes an input directed to the options menu. One of ordinary skill in the art would have been motivated to combine the teachings in order to display visual effects in a messaging application (Van, [Col 37, 27-28]).

Regarding claim 78, Song in view of Van and Anzures teaches the computer system of claim 77. Song teaches wherein the representation of the second object is a portion of the representation of the respective object ([Col 8, 34-41]: Upon receiving an indication from the user that the custom content step is complete, the messaging application 112 crops, resizes, and stores the image as a custom sticker 425 accessible through the messaging interface. The custom sticker 425 is then accessible via a custom sticker interface 404 in the messaging application 112 (i.e. the second object is a representation of the object in the media item)).

Regarding claim 79, Song in view of Van and Anzures teaches the computer system of claim 77.
Song teaches wherein the representation of the second object is different from the representation of the respective object ([Col 8, 34-41]: Upon receiving an indication from the user that the custom content step is complete, the messaging application 112 crops, resizes, and stores the image as a custom sticker 425 accessible through the messaging interface. The custom sticker 425 is then accessible via a custom sticker interface 404 in the messaging application 112) (i.e. the representation of the second object as in fig. 4(475) is different from the representation of the first object, fig. 4(car sticker)).

Regarding claim 80, Song in view of Van and Anzures teaches the computer system of claim 73. Song teaches wherein the graphical user interface object that is selectable to generate a sticker based on the media item is displayed in response to detecting one or more inputs directed to the media item ([Col 5, 4-9]: The cropping module 215 displays the image to the user and uses dashed lines or other signifiers to illustrate the portions of the image automatically identified as the subject for the custom sticker. The user may then interact with the UI provided by the cropping module 215 to select the automatically-identified portions of the image. [Col 6, 4-6]: The custom sticker creation module 225 receives the image and the associated data and processes the image to create a custom sticker (i.e. generating the sticker based on user input selecting portions of the image)).

Regarding claim 83, Song in view of Van and Anzures teaches the computer system of claim 73. Song teaches wherein the media item is comprised of a plurality of image frames ([Col 7, 3-8]: Upon receiving a request to create a custom sticker, the messaging application 112 provides an interface 301 including a button 305 to capture an image using the selected back facing camera. The interface 301 additionally includes a text instruction 310 to take a photo or to select an image from a local saved images store on the mobile device (i.e. Fig. 3(315) shows the media item containing multiple images)).

Regarding claim 84, Song in view of Van and Anzures teaches the computer system of claim 83. Song teaches wherein the respective sticker includes an animation based on the plurality of image frames comprising the media item ([Col 8, 27-34]: When a selected area for the subject of the custom sticker is identified, the messaging application 112 provides an interface 403 that applies the partially-opaque mask 420 to indicate the non-selected area of the custom sticker and initiates a drawing mode. As described above in conjunction with FIG. 3, the drawing mode interface 403 allows users to add custom content to the custom sticker, including emojis, text, and drawings (i.e. the sticker includes an animation like the "LOL" shown in fig. 4(403))).

Regarding claim 85, Song in view of Van and Anzures teaches the computer system of claim 84. Song teaches wherein the animation based on the plurality of image frames comprising the media item is automatically generated ([Col 8, 16-21]: The automatic facial recognition process identifies a face as the subject of the custom sticker. In one example, the identified area is first indicated using an outline 410. [Col 8, 27-34]: When a selected area for the subject of the custom sticker is identified, the messaging application 112 provides an interface 403 that applies the partially-opaque mask 420 to indicate the non-selected area of the custom sticker and initiates a drawing mode. As described above in conjunction with FIG. 3, the drawing mode interface 403 allows users to add custom content to the custom sticker, including emojis, text, and drawings (i.e. the sticker includes an animation like the "LOL" shown in fig. 4(403, 404) that is in the automatically generated face sticker)).
Regarding claim 88, Song in view of Van and Anzures teaches the computer system of claim 73. Song teaches the one or more programs further including instructions for: displaying, via the display generation component, a first sticker interface including a sticker creation option that is selectable to initiate a process for generating a new sticker ([Col 7, 3-19]: Upon receiving a request to create a custom sticker, the messaging application 112 provides an interface 301 including a button 305 to capture an image using the selected back facing camera. The image is used as a base image for the custom sticker creation process (i.e. displaying the interface fig. 3(301) for sticker creation)).

Regarding claim 89, Song in view of Van and Anzures teaches the computer system of claim 88. Song teaches the one or more programs further including instructions for: while displaying the first sticker interface, detecting, via the one or more input devices, a set of one or more inputs directed to the sticker creation option including a selection of a photos option ([Col 7, 3-19]: Upon receiving a request to create a custom sticker, the messaging application 112 provides an interface 301 including a button 305 to capture an image using the selected back facing camera. The interface 301 additionally includes a text instruction 310 to take a photo or to select an image from a local saved images store on the mobile device. As shown in the example of FIG. 3, a selection of images 315 from the local saved store is displayed for selection. The image is used as a base image for the custom sticker creation process (i.e. displaying the interface fig. 3(301, 315) for selecting an image for sticker creation)); and in response to detecting the set of one or more inputs directed to the sticker creation option including a selection of the photos option, displaying a collection of media items that are selectable for generating the new sticker ([Col 7, 3-19]: As shown in the example of FIG. 3, a selection of images 315 from the local saved store is displayed for selection. The image is used as a base image for the custom sticker creation process (i.e. sticker creation is from a photo)).

Regarding claim 90, Song in view of Van and Anzures teaches the computer system of claim 89. Song teaches wherein the collection of media items is a subset of media items available at the computer system ([Col 7, 3-19]: As shown in the example of FIG. 3, a selection of images 315 from the local saved store is displayed for selection (i.e. the images are ones available to generate stickers)).

Regarding claim 91, Song in view of Van and Anzures teaches the computer system of claim 90. Song teaches wherein the collection of media items includes a set of one or more media category options that are selectable to update the subset of media items displayed in the collection of media items based on a selected media category option ([Col 7, 3-19]: Upon receiving a request to create a custom sticker, the messaging application 112 provides an interface 301 including a button 305 to capture an image using the selected back facing camera. The interface 301 additionally includes a text instruction 310 to take a photo or to select an image from a local saved images store on the mobile device. As shown in the example of FIG. 3, a selection of images 315 from the local saved store is displayed for selection. The image is used as a base image for the custom sticker creation process (i.e. as shown in fig. 3(305, 315), it allows switching between choosing an image or capturing an image)).

Regarding claim 92, Song in view of Van and Anzures teaches the computer system of claim 88.
Song teaches the one or more programs further including instructions for: while displaying the first sticker interface, detecting, via the one or more input devices, a set of one or more inputs directed to the sticker creation option including a selection of a camera option ([Col 7, 3-19]: Upon receiving a request to create a custom sticker, the messaging application 112 provides an interface 301 including a button 305 to capture an image using the selected back facing camera. The interface 301 additionally includes a text instruction 310 to take a photo or to select an image from a local saved images store on the mobile device. As shown in the example of FIG. 3, a selection of images 315 from the local saved store is displayed for selection. The image is used as a base image for the custom sticker creation process (i.e. as shown in fig. 3(305, 315), the sticker creation interface allows selection of the camera option)); and in response to detecting the set of one or more inputs directed to the sticker creation option including a selection of the camera option, displaying a camera user interface for capturing an image to use for generating the new sticker ([Col 7, 3-19]: Upon receiving a request to create a custom sticker, the messaging application 112 provides an interface 301 including a button 305 to capture an image using the selected back facing camera. The interface 301 additionally includes a text instruction 310 to take a photo or to select an image from a local saved images store on the mobile device. The image is used as a base image for the custom sticker creation process (i.e. as shown in fig. 3(305), the sticker creation interface allows capturing of an image for generation of a sticker by activation of the camera option)).

Regarding claim 93, Song in view of Van and Anzures teaches the computer system of claim 92.
Song teaches wherein the computer system is in communication with a camera, the one or more programs further including instructions for: while displaying the camera user interface, detecting, via the one or more input devices, a request to capture an image using the camera ([Col 7, 3-22]: Upon receiving a request to create a custom sticker, the messaging application 112 provides an interface 301 including a button 305 to capture an image using the selected back facing camera. The messaging application 112 identifies the image source as the back facing camera of the client device 110 and provides an interface 302 allowing a user to provide input indicating a subject of the custom sticker (i.e. receiving user request to capture an image)); in response to detecting the request to capture an image using the camera, capturing an image using the camera ([Col 7, 3-22]: Upon receiving a request to create a custom sticker, the messaging application 112 provides an interface 301 including a button 305 to capture an image using the selected back facing camera. The image is used as a base image for the custom sticker creation process. The messaging application 112 identifies the image source as the back facing camera of the client device 110 and provides an interface 302 allowing a user to provide input indicating a subject of the custom sticker (i.e. capturing an image using camera based on user request)); and displaying the captured image with an option that is selectable to generate a sticker having an appearance based on an object detected in the captured image ([Col 7, 3-22]: Upon receiving a request to create a custom sticker, the messaging application 112 provides an interface 301 including a button 305 to capture an image using the selected back facing camera. The image is used as a base image for the custom sticker creation process. 
The messaging application 112 identifies the image source as the back facing camera of the client device 110 and provides an interface 302 allowing a user to provide input indicating a subject of the custom sticker. A cropping layer 325 is illustrated above the image and indicates non-selected areas of the sticker. (i.e. fig. 3(302) displays the captured image that is used to generate the sticker)).

Regarding claim 95, Song in view of Van and Anzures teaches the computer system of claim 73. Song teaches the one or more programs further including instructions for: while displaying, via the display generation component, a messaging interface that includes a respective message, receiving a set of one or more inputs corresponding to the respective message ([Col 7, 57-62]: The user of the messaging application 112 can select the custom sticker 375 to send it to other users of the messaging system 130 on messaging threads 380. As shown in FIG. 3, the custom sticker interface 304 additionally includes an option 385 to create one or more additional custom stickers.); and in response to receiving the set of one or more inputs corresponding to the respective message, displaying an option that is selectable to initiate a process for applying a sticker from the plurality of stickers to the respective message ([Col 7, 57-62]: The user of the messaging application 112 can select the custom sticker 375 to send it to other users of the messaging system 130 on messaging threads 380. (i.e. the sticker can be selected to be applied to the message as shown in fig. 3(message sent at 1:30pm))).

Regarding claims 97-98, they do not teach or further define over claim 1. Therefore, claims 97-98 are rejected for the same reasons as set forth above for claim 1.

Claim 81 is rejected under 35 U.S.C. 103 as being unpatentable over Song in view of Van and Anzures further in view of An et al.
(US 20230087879 A1, hereinafter An).

Regarding claim 81, Song in view of Van and Anzures teaches the computer system of claim 73. Song in view of Van and Anzures however does not teach wherein generating the respective sticker having an appearance based on the representation of the respective object that was automatically selected in the media item includes: in accordance with a determination that the media item has an animated effect, generating the respective sticker having a sticker animated effect based on the animated effect of the media item; and in accordance with a determination that the media item does not have the animated effect, generating the respective sticker without the sticker animated effect.

An teaches wherein generating the respective sticker having an appearance based on the representation of the respective object that was automatically selected in the media item includes: in accordance with a determination that the media item has an animated effect, generating the respective sticker having a sticker animated effect based on the animated effect of the media item (Fig. 11 and [181]: When the processor 120 or 210 recognizes a body motion in which the subject is waving a hand, the processor 120 or 210 may recognize that the motion feature thereof matches a message “hello” and insert the text “hello” into the emoji sticker background. Alternatively, if the subject's body is recognized as a motion feature of bowing, the processor may recognize that the corresponding motion feature also matches the message “hello” and insert the text “hello” into the emoji sticker background (i.e. if the media item has the motion or effect, generating the sticker with the effect or text)); and in accordance with a determination that the media item does not have the animated effect, generating the respective sticker without the sticker animated effect ([179-180]: The processor 120 or 210 may determine whether or not a message or text matching features of the body motion is recognized. In operation 1130, in the case where a message or text matching the motion features is recognized, the processor 120 or 210 may generate an emoji sticker by inserting the recognized message or text into a background image (i.e. if the media item does not have the motion or effect, generating the sticker without inserting the effect or text)).

Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Song in view of Van and Anzures to incorporate the teachings of An and generate the respective sticker having an appearance based on the representation of the respective object in the media item by: in accordance with a determination that the media item has an animated effect, generating the respective sticker having a sticker animated effect based on the animated effect of the media item; and in accordance with a determination that the media item does not have the animated effect, generating the respective sticker without the sticker animated effect. One of ordinary skill in the art would have been motivated to combine the teachings in order to generate an emoji sticker (An, [177]).

Claim 86 is rejected under 35 U.S.C. 103 as being unpatentable over Song in view of Van and Anzures further in view of Barlier et al. (US 20200302669 A1, hereinafter Barlier).

Regarding claim 86, Song in view of Van and Anzures teaches the computer system of claim 73.
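For orientation only, the two-branch limitation of claim 81 analyzed above reduces to a single conditional: the generated sticker inherits an animated effect exactly when the source media item has one. The sketch below is a hypothetical illustration of that logic; all names are invented and are not drawn from the claims or any cited reference.

```python
from dataclasses import dataclass


@dataclass
class MediaItem:
    # Hypothetical flag standing in for "the media item has an animated effect"
    has_animated_effect: bool


@dataclass
class Sticker:
    animated: bool


def generate_sticker(media: MediaItem) -> Sticker:
    # Branch 1: the media item has an animated effect -> generate the sticker
    # with a sticker animated effect based on it.
    # Branch 2: the media item lacks the effect -> generate the sticker
    # without the sticker animated effect.
    return Sticker(animated=media.has_animated_effect)
```

The point of the sketch is only that the claim recites two mutually exclusive determinations over the same input, which is what the cited portions of An ([179-181]) are mapped against.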
Song in view of Van and Anzures however does not teach the one or more programs further including instructions for: displaying, via the display generation component, a first sticker received from a remote computer system; detecting, via the one or more input devices, an input directed to the first sticker; and in response to detecting the input directed to the first sticker, displaying, via the display generation component, a sticker option that is selectable to add the first sticker to the plurality of stickers.

Barlier teaches the one or more programs further including instructions for: displaying, via the display generation component, a first sticker received from a remote computer system ([258]: FIG. 7A depicts message interface 608 after having received animated virtual avatar 700 from the remote user named “John”. After receiving it, device 600 plays animated virtual avatar 700 automatically in some embodiments (i.e. displaying the virtual avatar received from the remote user)); detecting, via the one or more input devices, an input directed to the first sticker (Fig. 7I-J and [261]: In response to user input on animated virtual avatar 700 (e.g., a tap and hold gesture represented by contact 720 in FIG. 7I), device 600 displays a menu of options related to animated virtual avatar 700, as depicted in FIG. 7J (i.e. detecting user input directed to the received virtual avatar)); and in response to detecting the input directed to the first sticker, displaying, via the display generation component, a sticker option that is selectable to add the first sticker to the plurality of stickers (Fig. 7I-J and [261]: In response to user input on animated virtual avatar 700 (e.g., a tap and hold gesture represented by contact 720 in FIG. 7I), device 600 displays a menu of options related to animated virtual avatar 700, as depicted in FIG. 7J. Additionally, menu 724 is also displayed having copy button 726, save button 728, and more button 730.
Save button 728 saves animated virtual avatar 700 to device 600 (e.g., to a database or library that can later be accessed by applications installed on device 600) (i.e. user selection of the save button saves the virtual avatar and adds it to the library)).

Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Song in view of Van and Anzures to incorporate the teachings of Barlier and display, via the display generation component, a first sticker received from a remote computer system; detect, via the one or more input devices, an input directed to the first sticker; and in response to detecting the input directed to the first sticker, display, via the display generation component, a sticker option that is selectable to add the first sticker to the plurality of stickers. One of ordinary skill in the art would have been motivated to combine the teachings in order to save the animated virtual avatar to the device (Barlier, [261]).

Claim 87 is rejected under 35 U.S.C. 103 as being unpatentable over Song in view of Van and Anzures further in view of Dryer et al. (US 2020/0358726 A1, hereinafter Dryer).

Regarding claim 87, Song in view of Van and Anzures teaches the computer system of claim 73. Song in view of Van and Anzures however does not teach the one or more programs further including instructions for: while displaying a media editing interface that includes the media item, displaying the plurality of stickers including the respective sticker; detecting, via the one or more input devices, a second set of one or more inputs that includes a selection of the respective sticker from the plurality of stickers; and in response to detecting the second set of one or more inputs, modifying the media item to include a representation of the respective sticker.
Dryer teaches the one or more programs further including instructions for: while displaying a media editing interface that includes the media item, displaying the plurality of stickers including the respective sticker ([323-324]: In FIG. 9G, device 600 detects input 936 on woman avatar option 921-2 and, in response, displays live pose user interface 926 in FIG. 9H with avatar 928 having an appearance that corresponds to woman avatar option 921-2 selected in FIG. 9G (i.e. displaying a live pose of the user with an interface to edit using stickers)); detecting, via the one or more input devices, a second set of one or more inputs that includes a selection of the respective sticker from the plurality of stickers ([324]: In FIG. 9I, device 600 detects (e.g., via camera 602) the user's face having a pose that includes a smile and head tilt and modifies avatar 928 to assume the same pose. Device 600 detects input 938 on capture affordance 930, which causes device 600 to select the current pose of avatar 928 (i.e. detecting a different pose of the user allowing selection of a different avatar)); and in response to detecting the second set of one or more inputs, modifying the media item to include a representation of the respective sticker ([326]: After capturing the avatar pose in FIG. 9I or 9J, device 600 displays scaling user interface 946 for changing a position and scale of selected avatar pose 948 as shown in FIG. 9K. In some embodiments, avatar pose 948 is moved (e.g., moved within the circular frame) in response to swipe gestures detected while displaying scaling interface 946 (i.e. modifying the media with the representation of the sticker)).
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Song in view of Van and Anzures to incorporate the teachings of Dryer and, while displaying a media editing interface that includes the media item, display the plurality of stickers including the respective sticker; detect, via the one or more input devices, a second set of one or more inputs that includes a selection of the respective sticker from the plurality of stickers; and in response to detecting the second set of one or more inputs, modify the media item to include a representation of the respective sticker. One of ordinary skill in the art would have been motivated to combine the teachings in order to display avatars for a contacts application user interface (Dryer, [309]).

Claim 94 is rejected under 35 U.S.C. 103 as being unpatentable over Song in view of Van and Anzures further in view of Cragg et al. (US 20200344411 A1, hereinafter Cragg).

Regarding claim 94, Song in view of Van and Anzures teaches the computer system of claim 73.
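As with claim 81, the claim 87 flow mapped above is a short interaction sequence: a media editing interface displays the media item together with a plurality of stickers, a selection input picks one sticker, and the media item is modified to include a representation of it. A minimal, hypothetical sketch of that sequence (all names invented for illustration):

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class MediaItem:
    # Representations of stickers applied to the media item in the editor
    sticker_overlays: List[str] = field(default_factory=list)


def apply_sticker(media: MediaItem, stickers: List[str], selected_index: int) -> MediaItem:
    # The "second set of one or more inputs" resolves to a selection of one
    # sticker from the displayed plurality; the media item is then modified
    # to include a representation of that sticker.
    media.sticker_overlays.append(stickers[selected_index])
    return media
```

This is only a schematic of the claimed interaction, not an implementation attributed to Song, Van, Anzures, or Dryer.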
Song in view of Van and Anzures however does not teach wherein the computer system is in communication with a camera, the one or more programs further including instructions for: while displaying a camera user interface that includes a capture affordance that is selectable for capturing an image to use for generating a new sticker, detecting, via the one or more input devices, a set of one or more inputs corresponding to a request to generate a sticker from the captured image; and in response to detecting the set of one or more inputs corresponding to a request to generate a sticker from the captured image, generating a new sticker from the captured image, including: in accordance with a determination that the set of one or more inputs corresponding to a request to generate a sticker from a captured image includes an input of a first type directed to the capture affordance, generating the new sticker with an animated effect; and in accordance with a determination that the set of one or more inputs corresponding to a request to generate a sticker from a captured image includes an input of a second type directed to the capture affordance, wherein the input of the second type is different from the input of the first type, generating the new sticker without the animated effect.

Cragg teaches wherein the computer system is in communication with a camera, the one or more programs further including instructions for: while displaying a camera user interface that includes a capture affordance that is selectable for capturing an image to use for generating a new sticker (Fig. 2(202), fig. 5 and [34]: At block 202, the process 200 involves obtaining a live preview image 106 from a camera 104 of the computing device 102.
For instance, the camera 104 continuously and directly projects the image formed by the lens of the camera 104 onto the image sensor to generate the live preview image 106), detecting, via the one or more input devices, a set of one or more inputs corresponding to a request to generate a sticker from the captured image ([51]: At block 210, the process 200 involves receiving a user confirmation of enabling the context-aware image filtering mode through the user interface 112. Fig. 6(604) and [66]: The user can thus select the contextual user interface control 604 to enter the context-aware image filtering mode to use the recommended filter.); and in response to detecting the set of one or more inputs corresponding to a request to generate a sticker from the captured image, generating a new sticker from the captured image ([51]: At block 210, the process 200 involves receiving a user confirmation of enabling the context-aware image filtering mode through the user interface 112. Fig. 7 and [67-68]: The user interface 700 further includes a filter panel 706 to show a list of filters including the recommended filter, the applicable filters, and the available filters.), including: in accordance with a determination that the set of one or more inputs corresponding to a request to generate a sticker from a captured image includes an input of a first type directed to the capture affordance, generating the new sticker with an animated effect ([51-53]: The image filtering module 306 applies the recommended filter or the selected filter to the full resolution of live preview image 106, i.e. the live preview image 106 that has not been down-sampled. [69]: FIG. 8 depicts an example of the user interface where the user selects a different filter than the recommended filter.
In this example, the filter “Pop Art” is selected by the user and thus the filtered image 804 shown in the user interface 800 is filtered using the selected filter rather than the recommended filter “Portrait.” (i.e. generating image with the effect selected by the user)); and in accordance with a determination that the set of one or more inputs corresponding to a request to generate a sticker from a captured image includes an input of a second type directed to the capture affordance, wherein the input of the second type is different from the input of the first type, generating the new sticker without the animated effect ([53]: Face tracking can be implemented to track the face object detected in the live preview image 106 so that if the user moves the camera 104, the detected face can still be located in the new live preview image 106 for filtering. [68]: The user interface 700 also includes a user interface control 714 which, when selected by the user, will cause the image processing application 108 to exit the context-aware image filtering mode and thus to disable the filter that has been applied to the live preview image (i.e. generating image without the effect)). 
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Song in view of Van and Anzures to incorporate the teachings of Cragg and, while displaying a camera user interface that includes a capture affordance that is selectable for capturing an image to use for generating a new sticker, detect, via the one or more input devices, a set of one or more inputs corresponding to a request to generate a sticker from the captured image; and in response to detecting the set of one or more inputs corresponding to a request to generate a sticker from the captured image, generate a new sticker from the captured image, including: in accordance with a determination that the set of one or more inputs corresponding to a request to generate a sticker from a captured image includes an input of a first type directed to the capture affordance, generating the new sticker with an animated effect; and in accordance with a determination that the set of one or more inputs corresponding to a request to generate a sticker from a captured image includes an input of a second type directed to the capture affordance, wherein the input of the second type is different from the input of the first type, generating the new sticker without the animated effect. One of ordinary skill in the art would have been motivated to combine the teachings in order to provide context-aware image generation (Cragg, [65]).

Claim 96 is rejected under 35 U.S.C. 103 as being unpatentable over Song in view of Van and Anzures further in view of Peterson et al. (US 20170357415 A1, hereinafter Peterson).

Regarding claim 96, Song in view of Van and Anzures teaches the computer system of claim 95.
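The claim 94 limitation addressed above also turns on a branch, this time keyed to the *type* of input directed at the capture affordance rather than to a property of the media item. The sketch below is a hypothetical illustration; the concrete gestures (long press vs. tap) are invented examples of a "first type" and "second type" of input and do not come from the claims or the cited references.

```python
from enum import Enum, auto


class CaptureInput(Enum):
    LONG_PRESS = auto()  # standing in for "input of a first type"
    TAP = auto()         # standing in for "input of a second type"


def generate_new_sticker(input_type: CaptureInput) -> dict:
    # First-type input on the capture affordance -> sticker generated with
    # the animated effect; second-type input -> generated without it.
    animated = input_type is CaptureInput.LONG_PRESS
    return {"source": "captured_image", "animated_effect": animated}
```

The contrast with claim 81 is worth noting: there the determination inspects the media item, here it inspects the user's input.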
Song teaches wherein the process for applying the sticker from the plurality of stickers to the respective message includes: while displaying, via the display generation component, the messaging interface including the respective message and the sticker from the plurality of stickers: detecting a gesture directed to the sticker ([Col 7, 55-64]: The custom sticker 375 is then acce…

Prosecution Timeline

Nov 27, 2023
Application Filed
Dec 10, 2024
Response after Non-Final Action
Dec 12, 2024
Non-Final Rejection — §103
Feb 26, 2025
Examiner Interview Summary
Feb 26, 2025
Applicant Interview (Telephonic)
Feb 27, 2025
Response Filed
Mar 14, 2025
Final Rejection — §103
May 21, 2025
Examiner Interview Summary
May 21, 2025
Applicant Interview (Telephonic)
May 27, 2025
Request for Continued Examination
Jun 03, 2025
Response after Non-Final Action
Jun 13, 2025
Non-Final Rejection — §103
Oct 09, 2025
Applicant Interview (Telephonic)
Oct 09, 2025
Examiner Interview Summary
Oct 14, 2025
Response Filed
Dec 01, 2025
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12561442
PRIORITIZING VULNERABILITIES
2y 5m to grant Granted Feb 24, 2026
Patent 12561423
GENERATING TOKEN VALUE FOR ENABLING A NON-APPLICATION CHANNEL TO PERFORM OPERATION
2y 5m to grant Granted Feb 24, 2026
Patent 12563092
CREDENTIAL-STUFFING ANOMALY DETECTION
2y 5m to grant Granted Feb 24, 2026
Patent 12556397
SYSTEM FOR DIGITAL IDENTITY DETECTION AND VERIFICATION WHEN TRAVERSING BETWEEN VIRTUAL ENVIRONMENTS
2y 5m to grant Granted Feb 17, 2026
Patent 12530466
INTELLIGENT PRE-BOOT INDICATORS OF VULNERABILITY
2y 5m to grant Granted Jan 20, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
69%
Grant Probability
91%
With Interview (+22.2%)
4y 3m
Median Time to Grant
High
PTA Risk
Based on 333 resolved cases by this examiner. Grant probability derived from career allow rate.
