Prosecution Insights
Last updated: April 19, 2026
Application No. 18/886,813

EFFECT DISPLAY METHOD, APPARATUS AND DEVICE, STORAGE MEDIUM, AND PRODUCT

Status: Non-Final OA (§103)
Filed: Sep 16, 2024
Examiner: TILAHUN, ALAZAR
Art Unit: 2424
Tech Center: 2400 (Computer Networks)
Assignee: BEIJING BYTEDANCE NETWORK TECHNOLOGY CO., LTD.
OA Round: 2 (Non-Final)

Grant Probability: 71% (Favorable)
Expected OA Rounds: 2-3
Time to Grant: 2y 11m
Grant Probability with Interview: 85%

Examiner Intelligence

Career Allow Rate: 71% (464 granted / 654 resolved; +12.9% vs TC avg, above average)
Interview Lift: +14.5% across resolved cases with interview (moderate)
Avg Prosecution: 2y 11m typical timeline (27 applications currently pending)
Total Applications: 681 across all art units (career history)
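The headline figures above fit together as simple ratios. Here is a minimal sketch reproducing them; the formulas are inferred from the displayed numbers, not taken from the tool's documented methodology:

```python
# Reproduce the dashboard's headline examiner metrics from its raw counts.
# Formulas are an inference from the displayed figures, not documentation
# of how the analytics tool actually computes them.

granted = 464           # "464 granted"
resolved = 654          # "654 resolved"
interview_lift = 0.145  # "+14.5% Interview Lift"

career_allow_rate = granted / resolved             # ~0.7095
with_interview = career_allow_rate + interview_lift

print(round(career_allow_rate * 100))  # 71, the displayed Career Allow Rate
print(round(with_interview * 100))     # 85, the displayed "With Interview" figure
```

Note that the implied Tech Center average allow rate would then be roughly 71% - 12.9% = 58%.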

Statute-Specific Performance

§101: 6.9% (-33.1% vs TC avg)
§102: 21.1% (-18.9% vs TC avg)
§103: 57.5% (+17.5% vs TC avg)
§112: 5.0% (-35.0% vs TC avg)

Deltas are measured against a Tech Center average estimate. Based on career data from 654 resolved cases.
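Each "vs TC avg" delta above equals the displayed statute rate minus exactly 40%, which suggests the chart applies a flat 40% Tech Center baseline to every statute. A quick consistency check; the 40% figure is our inference from the numbers, as the page only calls it a "Tech Center average estimate":

```python
# Verify that every "vs TC avg" delta equals the statute rate minus a
# flat 40% baseline. The baseline value is inferred, not stated by the tool.

rates = {"101": 6.9, "103": 57.5, "102": 21.1, "112": 5.0}    # displayed %
deltas = {"101": -33.1, "103": 17.5, "102": -18.9, "112": -35.0}
BASELINE = 40.0

for statute, rate in rates.items():
    assert round(rate - BASELINE, 1) == deltas[statute], statute
print("all deltas consistent with a flat 40% baseline")
```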

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments, see Remarks filed 12/08/2025, with respect to the rejection(s) of claim(s) 21-34 under 35 U.S.C. § 102 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Ai et al., Pub. No. US 2017/0256288.

Terminal Disclaimer

The terminal disclaimer filed on 12/31/2025 disclaiming the terminal portion of any patent granted on this application which would extend beyond the expiration date of U.S. Patent No. 12096047 has been reviewed and is accepted. The terminal disclaimer has been recorded.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 21-34 are rejected under 35 U.S.C. 103 as being unpatentable over Qu et al., Pub. No. US 2018/0124477 (hereinafter "Qu") in view of Ai et al., Pub. No. US 2017/0256288 (hereinafter "Ai").

Regarding Claim 21, Qu discloses an effect display method, comprising: displaying a live stream of content (see paragraph [0081]: the live video stream GUI 304 includes the live video stream display 306); receiving an effect trigger operation (see fig. 2B and paragraph [0045]: GUI 214 includes various controls with which the broadcaster can interact to set up or otherwise define call-to-action characteristics and/or trigger settings by which the call-to-action system 100 will provide a call to action to one or more viewers of a live video stream); and in response to receiving the effect trigger operation, displaying an effect, wherein the effect is generated based on effect information corresponding to the effect trigger operation and a live stream image corresponding to the live stream of content (see paragraph [0071]: in response to a viewer interacting with a call to action (e.g., providing a touch gesture with respect to a call-to-action element), the call-to-action system 100 can provide a notification to the viewer. Accordingly, the broadcaster can configure various aspects of the notifications provided by the call-to-action system 100 via the control 220f. For example, the notification may inform the viewer that the viewer did not receive the benefit corresponding to the call to action (e.g., he did not tap on the floating birthday present quickly enough) or may inform the viewer that the viewer did receive the benefit associated with the call to action (e.g., "You Won!"). Additionally, the call-to-action system 100 can provide extra information to viewers via notifications, including how long it will be before another call to action is triggered (e.g., "Don't leave yet! Another prize will be offered within the next 5 minutes!"), an engagement level that must be reached before another call to action is triggered (e.g., "Just 5 more comments and a prize will be released!"), and so forth).

Qu fails to disclose: the effect comprises an effect image corresponding to the effect information and the live stream image corresponding to the live stream of content. In analogous art, Ai teaches: the effect comprises an effect image corresponding to the effect information (see, including but not limited to, paragraph [0176]: the video edit request may contain at least information indicative of the prologue, the one or more transition effects and the epilogue that is to be merged to the captured video) and the live stream image corresponding to the live stream of content (see, including but not limited to, paragraph [0176]: the video edit request may occur any time after the video has been captured. This may include while the video is being live-streamed, within seconds after the video is captured, within hours after the video is captured, within days after the video is captured, within months after the video is captured, or any other time after the video is captured).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Qu with the teaching of Ai in order to merge one or more video clips with the video captured at an image capture device based on the video edit request, thereby delivering an effective approach for efficiently transmitting and editing video captured by an image capture device.

Regarding Claim 22, Qu in view of Ai teach the method as discussed in the rejection of claim 21. Qu further discloses determining time information corresponding to a time at which the effect trigger operation is received (see paragraphs [0074], [0075] and [0105]).

Regarding Claim 23, Qu in view of Ai teach the method as discussed in the rejection of claim 22. Qu further discloses wherein the time information includes: a time when a user watching the live stream of content triggers the effect trigger operation, or a time when a live streamer user of the live stream triggers the effect trigger operation (see paragraphs [0074], [0075] and [0105]).

Regarding Claim 24, Qu in view of Ai teach the method as discussed in the rejection of claim 22. Qu further discloses wherein the method further comprises: acquiring an image frame (see paragraph [0078]) corresponding to the time information in response to an effect trigger instruction triggered by the effect trigger operation (see paragraph [0137]).

Regarding Claim 25, Qu in view of Ai teach the method as discussed in the rejection of claim 24. Qu further discloses wherein the live stream image is obtained based on the image frame (see figs. 3A-3C and paragraphs [0078], [0084], [0087]-[0090]).

Regarding Claim 26, Qu in view of Ai teach the method as discussed in the rejection of claim 25. Qu further discloses wherein the displaying an effect comprises: displaying the effect on the live stream of content (see figs. 3A-3C and paragraphs [0078], [0084], [0087]-[0090]).

Regarding Claim 27, Qu in view of Ai teach the method as discussed in the rejection of claim 22. Ai further discloses wherein in response to receiving the effect trigger operation and prior to displaying the effect, sending an effect trigger instruction to a server, the effect trigger instruction indicating the effect information, and the effect trigger instruction also indicating an image frame corresponding to the time information in the stream content of the live stream (see paragraph [0176]).

Regarding Claim 28, Qu in view of Ai teach the method as discussed in the rejection of claim 27. Qu further discloses wherein after sending the effect trigger instruction to the server and prior to displaying the effect, acquiring an effect display instruction from the server, the effect display instruction indicating target effect information, and the target effect information being associated with the effect information and a target object image in the image frame (see fig. 4 and paragraphs [0126]-[0129]).

Regarding Claim 29, Qu in view of Ai teach the method as discussed in the rejection of claim 21. Qu further discloses wherein the live stream of content and the effect are displayed on a client device and the effect is generated on a server and received at the client device from the server (see fig. 4 and paragraphs [0126]-[0129]).

Regarding Claim 30, Qu in view of Ai teach the method as discussed in the rejection of claim 21. Ai further discloses wherein the live stream of content and the effect are displayed on a client device and the effect is generated on the client device (see paragraph [0094]).

Regarding Claim 31, Qu in view of Ai teach the method as discussed in the rejection of claim 21. Ai further discloses wherein in response to the effect trigger operation, determining time information corresponding to the effect trigger operation, determining an image frame corresponding to the time information in the live stream of content, and determining the effect information corresponding to the effect trigger operation (see paragraph [0214]).

Regarding Claim 32, Qu in view of Ai teach the method as discussed in the rejection of claim 21. Qu further discloses wherein the effect trigger operation is a virtual object-gifting operation of a user watching the live stream of content (see fig. 2B: element 224).

Regarding Claim 33, Qu in view of Ai teach an effect display device (see fig. 6), comprising: a memory (see fig. 6: memory 604) and a processor (see fig. 6: processor 602); the memory is configured to store computer instructions; the processor is configured to execute the computer instructions stored in the memory (see paragraph [0150]), so that the effect display device implements the effect display method according to claim 21.

Regarding Claim 34, the claim is directed to embodying the method of claim 21 in a non-transitory computer-readable storage medium, wherein the computer-readable storage medium stores computer-executable instructions that, when executed by a computing device, cause the computing device to implement the effect display method according to claim 21.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Alazar Tilahun, whose telephone number is (571) 270-5712. The examiner can normally be reached Monday-Friday, 9:00 AM-6:00 PM.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Benjamin Bruckart, can be reached at 571-272-3982. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ALAZAR TILAHUN/
Primary Examiner, Art Unit 2424

Prosecution Timeline

Sep 16, 2024: Application Filed
Dec 27, 2024: Response after Non-Final Action
Sep 06, 2025: Non-Final Rejection (§103)
Dec 08, 2025: Response Filed
Jan 10, 2026: Non-Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603856: INFORMATION REPLAY METHOD AND APPARATUS, ELECTRONIC DEVICE, COMPUTER STORAGE MEDIUM, AND PRODUCT (granted Apr 14, 2026; 2y 5m to grant)
Patent 12603967: INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM (granted Apr 14, 2026; 2y 5m to grant)
Patent 12598363: SYSTEMS AND METHODS FOR PROVIDING SEXUAL ENTERTAINMENT BY MONITORING TARGET ELEMENTS (granted Apr 07, 2026; 2y 5m to grant)
Patent 12590901: METHOD FOR INSPECTING A COATED SURFACE FOR COATING DEFECTS (granted Mar 31, 2026; 2y 5m to grant)
Patent 12591722: INFORMATION PROCESSING APPARATUS (granted Mar 31, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 2-3
Grant Probability: 71% (85% with interview, +14.5%)
Median Time to Grant: 2y 11m
PTA Risk: Moderate

Based on 654 resolved cases by this examiner. Grant probability is derived from the career allow rate.
