Prosecution Insights
Last updated: April 19, 2026
Application No. 18/356,271

INFORMATION DISPLAY METHOD BASED ON SESSION, APPARATUS, DEVICE, STORAGE MEDIUM, AND PROGRAM PRODUCT

Status: Final Rejection (§103)
Filed: Jul 21, 2023
Examiner: SHIBEROU, MAHELET
Art Unit: 2171
Tech Center: 2100 — Computer Architecture & Software
Assignee: Tencent Technology (Shenzhen) Company Limited
OA Round: 2 (Final)

Grant Probability: 73% (Favorable)
Estimated OA Rounds: 3-4
Estimated Time to Grant: 2y 11m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 73% (above average) — 409 granted / 561 resolved; +17.9% vs TC avg
Interview Lift: +27.8% (strong), among resolved cases with interview
Typical Timeline: 2y 11m avg prosecution; 31 currently pending
Career History: 592 total applications across all art units

Statute-Specific Performance

§101: 11.1% (-28.9% vs TC avg)
§103: 63.7% (+23.7% vs TC avg)
§102: 8.1% (-31.9% vs TC avg)
§112: 8.4% (-31.6% vs TC avg)
Based on career data from 561 resolved cases; Tech Center averages are estimates.

Office Action (§103)
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is responsive to the Amendment filed on 1/10/2026. Claims 1-20 are pending in the case.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 4-11, 17-18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Sha et al. (CN115185418, hereinafter Sha – cited in IDS 3/29/2024) in view of Shao et al. (US 20240127518 A1, hereinafter Shao) and further in view of Lira (US 20060041848 A1).
As to independent claim 1, Sha teaches an information display method based on session, performed by an electronic device, comprising: presenting at least one emoticon element in a session interface, the session interface being configured to implement a session with at least one session object (“Displaying an expression input panel on the first conversation interface, the expression input panel including a plurality of expression images.” Paragraph 0008); in response to an ejection operation for a target emoticon element in the at least one emoticon element (“In response to the touch operation of selecting the target expression image from the plurality of expression images, and in response to the drag operation of dragging the target expression image to the expression ejection area,” Paragraph 0009); and displaying an ejection animation of the target emoticon element being ejected in response to an ejection instruction triggered (“play the target expression ejection corresponding to the target expression image animation.” Paragraph 0009). Sha does not appear to expressly teach presenting ejection guidance information corresponding to the target emoticon element and displaying an ejection animation of the target emoticon element being ejected in response to an ejection instruction triggered based on the ejection guidance information. Shao teaches presenting ejection guidance information corresponding to the target emoticon element and displaying an ejection animation of the target emoticon element… based on the ejection guidance information (“In addition, the control component 14c may control the first object 14a to start moving along the indicated path, i.e., control the emission of the first object 14a” Paragraph 0113, “For example, with reference to the user interface 14 shown by FIG. 
1D, the first guidance information includes: a guiding image 14e and a text prompt area 14f, where the guiding image 14e is an arc with arrows at both ends to indicate that the user may slide towards any direction pointed by the arrow head; and the text prompt area 14f may display a text content therein, like “emit by swiping left and right and dragging”. Accordingly, the user can obtain text descriptive information for operating the control component…” paragraph 0120). Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Sha to comprise presenting ejection guidance information corresponding to the target emoticon element and displaying an ejection animation of the target emoticon element being ejected in response to an ejection instruction triggered based on the ejection guidance information. One would have been motivated to make such a combination to improve the flexibility of animation playing and the utilization of hardware resources of a device. Sha does not appear to expressly teach after the ejection animation of the target emoticon element being ejected is displayed in the session interface, presenting, in the session interface, a weak reminder session message including text prompt information corresponding to the ejection animation and a playback entry corresponding to the ejection animation, the text prompt information indicating that the ejection operation was performed for the target emoticon element, the playback entry being configured to, when triggered, redisplay the ejection animation of the target emoticon element being ejected in the session interface, such that a recipient not in the session interface when the ejection animation was initially displayed can subsequently trigger the playback entry in the session interface to review the ejection animation. 
Lira teaches after the ejection animation of the target emoticon element being ejected is displayed in the session interface, presenting, in the session interface, a weak reminder session message including text prompt information corresponding to the ejection animation and a playback entry corresponding to the ejection animation, the text prompt information indicating that the ejection operation was performed for the target emoticon element, the playback entry being configured to, when triggered, redisplay the ejection animation of the target emoticon element being ejected in the session interface, such that a recipient not in the session interface when the ejection animation was initially displayed can subsequently trigger the playback entry in the session interface to review the ejection animation (“an overlaid message is a message, e.g. texts, graphics, images, animations, movies, or any combination of them, with or without sounds, delivered from a sender's instant message client to at least one recipient's instant message client. Such message is typically meant to be displayed, e.g. displayed, played, made visible, or otherwise enabled to be perceived, upon recipient's client system input and it often overlays the recipient's session window,” paragraph 0070; “the overlaid message may be inserted as a reminder in the transcript of the recipient's session window and/or any preset area of the recipient's client user interface[….]FIG. 29A depicts, for the preferred embodiment, a reminder 705a that is displayed in the transcript area 101 of the session window 100. The reminder 705a comprises an iconic artwork and text. FIG. 29B depicts, for an alternative embodiment, the reminder 705b that is displayed in the transcript area 101 of the session window 100. The reminder 705b comprises a textual description of the artwork and the text. 
In the preferred embodiment, the recipient's client may enable the recipient to select the reminder, for example, to have the associated overlaid message be presented or presented again on the recipient's client system screen. Similarly, the sender's client may enable the sender to select the reminder, for example, to have the associated overlaid message be presented or presented again on the sender's client system screen.” Paragraphs 0177-0179). Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Sha to comprise after the ejection animation of the target emoticon element being ejected is displayed in the session interface, presenting, in the session interface, a weak reminder session message including text prompt information corresponding to the ejection animation and a playback entry corresponding to the ejection animation, the text prompt information indicating that the ejection operation was performed for the target emoticon element, the playback entry being configured to, when triggered, redisplay the ejection animation of the target emoticon element being ejected in the session interface, such that a recipient not in the session interface when the ejection animation was initially displayed can subsequently trigger the playback entry in the session interface to review the ejection animation. One would have been motivated to make such a combination to enable users to communicate with each other in a more dynamic, interactive, and entertaining manner.
As to dependent claim 2, Sha teaches the method according to claim 1, Sha teaches the method further comprising: receiving a trigger operation for the target emoticon element, and determining the trigger operation as the ejection operation in response to the trigger operation meeting at least one of following conditions: a trigger duration reaches a target duration, a trigger displacement reaches a target displacement, and a trigger trajectory is a target trajectory (“The touch operation may be a long-press touch operation or a double-click touch operation, etc. In the embodiments of the present application, a long-press touch operation may be used as an example for description. Please refer to FIG. 2b together. By performing a long-pressing touch operation on the target facial expression image 12 with a finger, the sending terminal 10 can correspondingly display the facial expression ejection area in response to the user's touch operation of selecting the target facial expression image 12 from a plurality of facial expression images” paragraph 0117); or receiving an operation combination including at least two continuous operations for the target emoticon element, and determining the operation combination as the ejection operation in response to the operation combination being consistent with a target operation combination.

As to dependent claim 4, Sha teaches the method according to claim 1, Sha does not appear to expressly teach the method further comprising: displaying a process of dragging the target emoticon element in response to a drag operation for the target emoticon element triggered based on the ejection guidance information; and receiving the ejection instruction in response to a release instruction for the drag operation in a process of performing the drag operation.
Shao teaches the method further comprising: displaying a process of dragging the target emoticon element in response to a drag operation for the target emoticon element triggered based on the ejection guidance information (“Upon receiving a second trigger operation for the control component 14c (i.e., drag and drop operation), the application 1 controls part or all of the first object 14a to start moving along the path indicated by the control component 14c.” paragraph 0113); and receiving the ejection instruction in response to a release instruction for the drag operation in a process of performing the drag operation (“the application 1 receives the second trigger operation for the control component 14c (i.e., drag and drop operation); in response to the drag and drop operation for the control component 14c, the application 1 may schematically display on the mobile phone the user interface 20 illustrated in FIG. 1J. In the user interface 20 shown by FIG. 1J, the “arrow” part included in the bow and arrow moves forward along the path w2 indicated by FIG. 1H, and the “bow” part included in the bow and arrow may disappear after the “arrow” is shot.” Paragraph 0114). Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Sha to comprise displaying a process of dragging the target emoticon element in response to a drag operation for the target emoticon element triggered based on the ejection guidance information; and receiving the ejection instruction in response to a release instruction for the drag operation in a process of performing the drag operation. One would have been motivated to make such a combination to enhance user experience. 
As to dependent claim 5, Sha teaches the method according to claim 4, Sha does not appear to expressly teach the method further comprising: adjusting an ejection direction of the target emoticon element to obtain an adjusted ejection direction in response to a rotation operation for the target emoticon element; wherein receiving the ejection instruction includes: receiving an ejection instruction for instructing ejection in the adjusted ejection direction. Shao teaches adjusting an ejection direction of the target emoticon element to obtain an adjusted ejection direction in response to a rotation operation for the target emoticon element (“Besides, with reference to the situations shown by FIGS. 1H and 1I, in response to the first trigger operation for adjusting the path, the application 1 adjusts the orientation of the first object 14a simultaneously” paragraph 0112); wherein receiving the ejection instruction includes: receiving an ejection instruction for instructing ejection in the adjusted ejection direction (“in response to the drag and drop operation for the control component 14c, the application 1 may schematically display on the mobile phone the user interface 20 illustrated in FIG. 1J. In the user interface 20 shown by FIG. 1J, the “arrow” part included in the bow and arrow moves forward along the path w2 indicated by FIG. 1H, and the “bow” part included in the bow and arrow may disappear after the “arrow” is shot.” Paragraph 0114). Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Sha to comprise adjusting an ejection direction of the target emoticon element to obtain an adjusted ejection direction in response to a rotation operation for the target emoticon element; wherein receiving the ejection instruction includes: receiving an ejection instruction for instructing ejection in the adjusted ejection direction. 
One would have been motivated to make such a combination to enhance user experience.

As to dependent claim 6, Sha teaches the method according to claim 1, Sha further teaches wherein displaying the ejection animation of the target emoticon element being ejected includes: displaying, in the session interface, a process of the target emoticon element being ejected and then moving to an edge of the session interface and disappearing (“a target expression image is ejected from the top of the first conversation interface to the bottom,” paragraph 0135, examiner notes that disappearing is a well-known technique in animation).

As to dependent claim 7, Sha teaches the method according to claim 1, Sha does not appear to expressly teach wherein displaying the ejection animation of the target emoticon element being ejected includes: displaying a process of the target emoticon element being ejected and moving along an ejection direction indicated by the ejection instruction; and controlling a size of the target emoticon element to gradually change with movement of the target emoticon element. Shao teaches wherein displaying the ejection animation of the target emoticon element being ejected includes: controlling a size of the target emoticon element to gradually change with movement of the target emoticon element (“In one embodiment, the longer the first touch time is, the more times the target expression image is ejected, and the larger the size information of the target expression image at the time of ejection, the shorter the first touch time is, the higher the target expression image is.
The fewer times the image is ejected, and the smaller the size information of the target expression image when it is ejected,” paragraph 0135). Sha does not appear to expressly teach wherein displaying the ejection animation of the target emoticon element being ejected includes: displaying a process of the target emoticon element being ejected and moving along an ejection direction indicated by the ejection instruction. Shao teaches displaying a process of the target emoticon element being ejected and moving along an ejection direction indicated by the ejection instruction (“In addition, the control component 14c may control the first object 14a to start moving along the indicated path, i.e., control the emission of the first object 14a” Paragraphs 0113, 0120). Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Sha to comprise displaying a process of the target emoticon element being ejected and moving along an ejection direction indicated by the ejection instruction. One would have been motivated to make such a combination to enhance user experience.

As to dependent claim 8, Sha teaches the method according to claim 1, Sha further teaches wherein displaying the ejection animation of the target emoticon element being ejected includes: displaying a plurality of emoticon copies generated in response to the target emoticon element being ejected, and displaying a movement process of the plurality of emoticon copies (Fig. 2C, displaying a movement process of a plurality of smiley emoji copies).
As to dependent claim 9, Sha teaches the method according to claim 8, Sha further teaches wherein displaying the movement process of the plurality of emoticon copies includes: displaying movement of the plurality of emoticon copies, and controlling a size of each of the emoticon copies to gradually change with the movement of the plurality of emoticon copies (“In one embodiment, the longer the first touch time is, the more times the target expression image is ejected, and the larger the size information of the target expression image at the time of ejection, the shorter the first touch time is, the higher the target expression image is. The fewer times the image is ejected, and the smaller the size information of the target expression image when it is ejected,” paragraph 0135); or displaying a process of the plurality of emoticon copies moving sequentially along a target trajectory to form a target pattern.

As to dependent claim 10, Sha teaches the method according to claim 1, Sha further teaches comprising: displaying ejection information corresponding to the target emoticon element in the session interface in response to the target emoticon element being ejected to an end point (“a target expression image is ejected from the top of the first conversation interface to the bottom,” paragraph 0135); wherein the ejection information includes at least one of following information: an ejection height of the target emoticon element, and a special effect element corresponding to the ejection height (“a target expression image is ejected from the top of the first conversation interface to the bottom, and the size information of the target expression image can be the original size information, that is, the size of the target expression image is the initial state, and with the first touch As the time increases, the animation style of the expression ejection animation can be continuously adjusted.” Paragraph 0135).
As to dependent claim 11, Sha teaches the method according to claim 1, Sha further teaches the method further comprising: presenting the target emoticon element in a form of a session message in the session interface (In Fig. 2b-c, the target emoji is presented in the session interface).

Claims 17-18 and 20 are substantially the same as claims 1-2 and are therefore rejected under the same rationale presented above.

Claims 12-13 are rejected under 35 U.S.C. 103 as being unpatentable over Sha et al. in view of Shao et al., Lira, and Cundall et al. (US 20220394001 A1, hereinafter Cundall).

As to dependent claim 12, Sha teaches the method according to claim 1, Sha does not appear to expressly teach the method further comprising: displaying an ejection animation of another emoticon element being ejected in a process of displaying the ejection animation; and displaying a collision special effect corresponding to a collision between the target emoticon element and the another emoticon element in response to the target emoticon element colliding with the another emoticon element. Cundall teaches displaying an ejection animation of another emoticon element being ejected in a process of displaying the ejection animation (At FIG. 3B, the user of user device 300 sends a message 304, an emoji of a beer mug); and displaying a collision special effect corresponding to a collision between the target emoticon element and the another emoticon element in response to the target emoticon element colliding with the another emoticon element (“For example, message 304 and message 306 clink together and a background of the user interface changes colors, includes a confetti bursting animation, or other suitable additional/secondary animation.” Paragraph 0047).
Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Sha to comprise in response to a trigger operation for the target emoticon element, presenting mode options for selecting a transmitting mode of the target emoticon element; and in response to a selection operation for an ejection mode option, determining the selection operation as the ejection operation. One would have been motivated to make such a combination to enhance user experience.

As to dependent claim 13, Sha teaches the method according to claim 12, Sha does not appear to expressly teach wherein displaying the collision special effect includes: in response to the target emoticon element and the another emoticon element being same emoticon elements, displaying a synthesized emoticon element, the synthesized emoticon element and the target emoticon element having a same style, and the synthesized emoticon element being larger than the target emoticon element; in response to the target emoticon element and the another emoticon element being different emoticon elements: displaying an animation in which the target emoticon element and the another emoticon element collide with each other and then are ejected, or displaying a composite emoticon element combined by collision between the target emoticon element and the another emoticon element, the target emoticon element and the another emoticon element being sub-emoticon elements of the composite emoticon element. Cundall teaches in response to the target emoticon element and the another emoticon element being same emoticon elements, displaying a synthesized emoticon element, the synthesized emoticon element and the target emoticon element having a same style, and the synthesized emoticon element being larger than the target emoticon element (“In response to the user's emoji 202, a second emoji 204 is received through the communication platform.
Second emoji 204 is the same emoji as emoji 202 and displays a kissing face image (such as an emoji or animation)” paragraph 0039, as shown in fig. 2C, the emoji 206 is larger than the emojis 202 and 204); and in response to the target emoticon element and the another emoticon element being different emoticon elements: displaying an animation in which the target emoticon element and the another emoticon element collide with each other and then are ejected, or displaying a composite emoticon element combined by collision between the target emoticon element and the another emoticon element, the target emoticon element and the another emoticon element being sub-emoticon elements of the composite emoticon element (“Thus, a first emoji of a drink glass and a second emoji of a drink glass triggers a cheers animation where the glasses “clink” together. The glasses may not be exactly the same emoji but may be different emojis of a same type. For example, a beer glass emoji and a martini glass emoji may produce an animation of a beer glass clinking a martini glass.” Paragraph 0063). 
Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Sha to comprise wherein displaying the collision special effect includes: in response to the target emoticon element and the another emoticon element being same emoticon elements, displaying a synthesized emoticon element, the synthesized emoticon element and the target emoticon element having a same style, and the synthesized emoticon element being larger than the target emoticon element; in response to the target emoticon element and the another emoticon element being different emoticon elements: displaying an animation in which the target emoticon element and the another emoticon element collide with each other and then are ejected, or displaying a composite emoticon element combined by collision between the target emoticon element and the another emoticon element, the target emoticon element and the another emoticon element being sub-emoticon elements of the composite emoticon element. One would have been motivated to make such a combination to enhance user experience.

Claims 3 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Sha et al. in view of Shao et al., Lira, and Wang et al. (US 20150334075 A1, hereinafter Wang).

As to dependent claim 3, Sha teaches the method according to claim 1, Sha does not appear to expressly teach the method further comprising: in response to a trigger operation for the target emoticon element, presenting mode options for selecting a transmitting mode of the target emoticon element; and in response to a selection operation for an ejection mode option, determining the selection operation as the ejection operation.
Wang teaches the method further comprising: in response to a trigger operation for the target emoticon element, presenting mode options for selecting a transmitting mode of the target emoticon element (“the first GUI object displays a modal window 1200 within the user interface provided by the mobile computing device 1000. This modal window 1200 enables the user to configure the configurable parameters of the first GUI object. As described further below, these configurable parameters enable the user to control the target of the first GUI object,” Paragraph 0137); and in response to a selection operation for a mode option, determining the selection operation as the target mode operation (“the route the representation 1002 of the first GUI object will traverse, the actions the first GUI object will perform, the timing of the actions, and the animation displayed by the representation 1002 of the first GUI object.” Paragraph 0137). Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Sha to comprise in response to a trigger operation for the target emoticon element, presenting mode options for selecting a transmitting mode of the target emoticon element; and in response to a selection operation for an ejection mode option, determining the selection operation as the ejection operation. One would have been motivated to make such a combination to enhance user experience.

Claim 19 is substantially the same as claim 3 and is therefore rejected under the same rationale presented above.

Claims 14-16 are rejected under 35 U.S.C. 103 as being unpatentable over Sha et al. in view of Shao et al., Lira, and Han (US 20140002502 A1).
As to dependent claim 14, Sha teaches the method according to claim 1, Sha does not appear to expressly teach wherein a presentation form of the ejection guidance information is an ejection prop carrying the target emoticon element; the method further comprising: presenting an ejection auxiliary tool of the ejection prop; in response to a drag operation for the ejection prop, displaying a process of dragging the ejection prop and deformation of the ejection auxiliary tool; and receiving the ejection instruction in response to the drag operation is released during the process of dragging the ejection prop; wherein displaying the ejection animation of the target emoticon element being ejected includes: displaying an animation of the target emoticon element being ejected under an action of an elastic force generated by the ejection auxiliary tool restoring to an original shape. Shao teaches wherein a presentation form of the ejection guidance information is an ejection prop carrying the target emoticon element; the method further comprising: presenting an ejection auxiliary tool of the ejection prop (Fig.1D, object 14a); in response to a drag operation for the ejection prop, displaying a process of dragging the ejection prop (“when the user is adjusting the path by swiping left and right and controlling the emission of the first object,” paragraph 0122); and receiving the ejection instruction in response to the drag operation is released during the process of dragging the ejection prop (“Upon receiving a second trigger operation for the control component 14c (i.e., drag and drop operation), the application 1 controls part or all of the first object 14a to start moving along the path indicated by the control component 14c.” paragraph 0113); wherein displaying the ejection animation of the target emoticon element being ejected includes: displaying an animation of the target emoticon element being ejected under an action of an elastic force generated by the ejection
auxiliary tool restoring to an original shape (“The present disclosure is not restricted in display parameters of the first object 14a, including the display brightness, the size, the color, the saturation, the animation effects etc. For example, if the first object 14a is the bow and arrow, the “bow” may be in a golden color and the arrow body of the “arrow” may also be in the same color as the “bow”, while the arrow head may be in a red heart shape.” Paragraph 0079). Sha does not appear to expressly teach in response to a drag operation for the ejection prop, displaying deformation of the ejection auxiliary tool. Han teaches in response to a drag operation for an object, displaying deformation of the object (“The displayed graphics 400 are then manipulated to provide transformation effects of a translation (in the case of FIGS. 4a and 4b), and a stretch and a shrink (in the case of FIG. 4c). The algorithm operates by, in response to detecting the gesture, determining the initiation point of the gesture (i.e. where the gesture begins) and determining the corresponding spatial point within the displayed graphics 400-1” paragraph 0071). Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Sha to comprise in response to a drag operation for the ejection prop, displaying deformation of the ejection auxiliary tool. One would have been motivated to make such a combination to enhance user experience.

As to dependent claim 15, Sha teaches the method according to claim 14, Sha does not appear to expressly teach the method further comprising: adjusting a display style of the ejection auxiliary tool with a change of a position of the ejection prop in the process of dragging the ejection prop, so as to enable the display style of the ejection auxiliary tool to correspond to the position of the ejection prop.
Shao teaches the method further comprising: adjusting a display style of the ejection auxiliary tool with a change of a position of the ejection prop in the process of dragging the ejection prop, so as to enable the display style of the ejection auxiliary tool to correspond to the position of the ejection prop (for example, compare the position of object 14a in Figs. 1G and 1H). Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Sha to comprise adjusting a display style of the ejection auxiliary tool with a change of a position of the ejection prop in the process of dragging the ejection prop, so as to enable the display style of the ejection auxiliary tool to correspond to the position of the ejection prop. One would have been motivated to make such a combination to enhance user experience.

As to dependent claim 16, Sha teaches the method according to claim 1. Sha does not appear to expressly teach wherein a presentation form of the ejection guidance information is an ejection prop carrying the target emoticon element; the method further comprising: presenting an ejection auxiliary tool connected to the ejection prop in response to a trigger operation for the ejection prop; in response to a drag operation for the ejection auxiliary tool, displaying a movement process of the ejection auxiliary tool and displaying a process of stretching deformation of a connecting member between the ejection auxiliary tool and the ejection prop with movement of the ejection auxiliary tool; and receiving the ejection instruction in response to the drag operation is released during a process of dragging the ejection auxiliary tool; wherein displaying the ejection animation of the target emoticon element being ejected includes: displaying an ejection animation of the target emoticon element being ejected under an action of an elastic force generated by stretching deformation of the connecting member.

Shao teaches the method further comprising: presenting an ejection auxiliary tool connected to the ejection prop in response to a trigger operation for the ejection prop (Fig. 1D, 14a); receiving the ejection instruction in response to the drag operation is released during a process of dragging the ejection auxiliary tool ("the control component 14c may control the first object 14a to start moving along the indicated path, i.e., control the emission of the first object 14a. Upon receiving a second trigger operation for the control component 14c (i.e., drag and drop operation), the application 1 controls part or all of the first object 14a to start moving along the path indicated by the control component 14c." paragraph 0113); wherein displaying the ejection animation of the target emoticon element being ejected includes: displaying an ejection animation of the target emoticon element being ejected (FIG. 1J, the "arrow" part included in the bow and arrow moves forward along the path w2 indicated by FIG. 1H).

Sha and Shao do not appear to expressly teach in response to a drag operation for the ejection auxiliary tool, displaying a movement process of the ejection auxiliary tool and displaying a process of stretching deformation of a connecting member between the ejection auxiliary tool and the ejection prop with movement of the ejection auxiliary tool; and displaying an action of an elastic force generated by stretching deformation of the connecting member. Lira teaches in response to a drag operation for an object, displaying an action of an elastic force generated by stretching deformation of the object ("The displayed graphics 400 are then manipulated to provide transformation effects of a translation (in the case of FIGS. 4a and 4b), and a stretch and a shrink (in the case of FIG. 4c). The algorithm operates by, in response to detecting the gesture, determining the initiation point of the gesture (i.e. where the gesture begins) and determining the corresponding spatial point within the displayed graphics 400-1" paragraph 0071). Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Sha to comprise in response to a drag operation for the ejection auxiliary tool, displaying a movement process of the ejection auxiliary tool and displaying a process of stretching deformation of a connecting member between the ejection auxiliary tool and the ejection prop with movement of the ejection auxiliary tool; and displaying an action of an elastic force generated by stretching deformation of the connecting member. One would have been motivated to make such a combination to enhance user experience.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Feng et al. (US 20240364646 A1) teaches displaying a weak reminder notification for an inactive client.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MAHELET SHIBEROU, whose telephone number is (571) 270-7493. The examiner can normally be reached Monday-Friday, 9:00 AM-5:00 PM Eastern Time.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kieu Vu, can be reached at 571-272-4057. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MAHELET SHIBEROU/
Primary Examiner, Art Unit 2171

Prosecution Timeline

Jul 21, 2023
Application Filed
Oct 08, 2025
Non-Final Rejection — §103
Nov 07, 2025
Interview Requested
Nov 24, 2025
Applicant Interview (Telephonic)
Nov 28, 2025
Examiner Interview Summary
Jan 10, 2026
Response Filed
Feb 21, 2026
Final Rejection — §103
Mar 23, 2026
Interview Requested
Apr 09, 2026
Examiner Interview Summary
Apr 09, 2026
Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596535
Editing User Interfaces using Free Text
2y 5m to grant • Granted Apr 07, 2026
Patent 12591348
ELECTRONIC DEVICE FOR CONTROLLING DISPLAY OF MULTIPLE WINDOW, OPERATION METHOD THEREOF, AND STORAGE MEDIUM
2y 5m to grant • Granted Mar 31, 2026
Patent 12591419
Prompt Based Hyper-Personalization of User Interfaces
2y 5m to grant • Granted Mar 31, 2026
Patent 12578845
CUSTOMIZED GRAPHICAL USER INTERFACE GENERATION GRAPHICALLY DEPICTING ICONS VIA A COMPUTER SCREEN
2y 5m to grant • Granted Mar 17, 2026
Patent 12572270
USER INTERFACE FOR DISPLAYING AND MANAGING WIDGETS
2y 5m to grant • Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
73%
Grant Probability
99%
With Interview (+27.8%)
2y 11m
Median Time to Grant
Moderate
PTA Risk
Based on 561 resolved cases by this examiner. Grant probability derived from career allow rate.
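The projection figures above state that grant probability is derived from the career allow rate (409 granted of 561 resolved, ≈73%) and that an interview adds a +27.8% lift, with the combined figure shown as 99%. A minimal sketch of that arithmetic is below; the function names and the simple additive-with-cap model are assumptions for illustration, not the vendor's actual methodology.

```python
# Illustrative sketch of the dashboard arithmetic above.
# The additive interview-lift model and the 99% cap are assumptions.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def with_interview(base_pct: float, lift_pct: float, cap: float = 99.0) -> float:
    """Apply an interview lift to a base grant probability, capped below 100%."""
    return min(base_pct + lift_pct, cap)

base = allow_rate(409, 561)   # 409 granted out of 561 resolved cases
print(round(base))            # 73, matching the displayed grant probability
print(with_interview(round(base), 27.8))  # 99.0 — 73 + 27.8 exceeds the cap
```

Under this reading, the displayed "99% With Interview" is consistent with the base rate plus the stated lift hitting a sub-100% ceiling.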
