Prosecution Insights
Last updated: April 19, 2026
Application No. 18/715,060

IMAGE PROCESSING METHOD AND APPARATUS, DEVICE, AND MEDIUM

Status: Final Rejection (§103)
Filed: May 30, 2024
Examiner: SCHNURR, JOHN R
Art Unit: 2425
Tech Center: 2400 (Computer Networks)
Assignee: BEIJING ZITIAO NETWORK TECHNOLOGY CO., LTD.
OA Round: 2 (Final)
Grant Probability: 72% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 6m
With Interview: 83%

Examiner Intelligence

Career Allow Rate: 72% (678 granted / 943 resolved; +13.9% vs TC avg, above average)
Interview Lift: +10.8% (moderate, roughly +11% among resolved cases with interview)
Typical Timeline: 2y 6m average prosecution; 27 applications currently pending
Career History: 970 total applications across all art units
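The headline figures in this card are simple derived ratios. A minimal Python sketch (numbers taken from the figures above; variable names are illustrative, not part of any API) reproduces the arithmetic:

```python
# Career allow rate: granted applications over resolved applications.
granted = 678
resolved = 943
career_allow_rate = granted / resolved
print(f"Career allow rate: {career_allow_rate:.1%}")  # ~71.9%, displayed as 72%

# Interview lift: grant probability with an interview minus the baseline.
baseline = 0.72          # career allow rate, as displayed
with_interview = 0.828   # displayed rounded as 83%
print(f"Interview lift: +{(with_interview - baseline) * 100:.1f} points")
```

The lift figure is additive (percentage points over the baseline), not a relative percentage increase.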

Statute-Specific Performance

§101: 4.7% (-35.3% vs TC avg)
§103: 51.9% (+11.9% vs TC avg)
§102: 19.0% (-21.0% vs TC avg)
§112: 10.5% (-29.5% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 943 resolved cases
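Each per-statute delta is the examiner's rate minus the Tech Center average (the "black line" in the chart). A short sketch back-computes the implied TC averages from the figures above; note the averages are inferred from the displayed deltas, not separately reported:

```python
# Examiner's per-statute rates and displayed deltas vs the Tech Center average.
examiner_rate = {"101": 4.7, "103": 51.9, "102": 19.0, "112": 10.5}  # percent
delta_vs_tc   = {"101": -35.3, "103": 11.9, "102": -21.0, "112": -29.5}

for statute, rate in examiner_rate.items():
    tc_avg = rate - delta_vs_tc[statute]  # implied Tech Center average estimate
    print(f"§{statute}: examiner {rate:.1f}% vs TC avg ~{tc_avg:.1f}%")
```

Each implied average backs out to roughly 40.0%, consistent with a single Tech Center baseline (the "black line" estimate) noted in the chart caption.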

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This Office Action is in response to the Amendment After Non-Final Rejection filed 02/04/2026. Claims 1-12, 15, 16 and 19-24 are pending and have been examined.

Response to Arguments

Applicant's arguments filed 02/04/2026 have been fully considered but they are not persuasive.

In response to applicant's argument that Zhang (US 2025/0088679) does not disclose adding a sticker image to a live stream, the examiner respectfully disagrees. Applicant appears to be arguing that the "effect" disclosed by Zhang is not equivalent to the claimed "sticker image". However, the claims contain no description of the "sticker image"; therefore, the broadest reasonable interpretation of the limitation includes the effect images displayed in the live stream of Zhang.

In response to applicant's argument that Zhang (US 2025/0088679) does not disclose a URL corresponding to the sticker image, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986). Zhang discloses obtaining effect identification information ([0061], [0078]). Wang (US 2019/0124400) discloses a URL corresponding to an image ([0174]-[0185]). The combination results in the effect of Zhang being identified with the URL of Wang.

In response to applicant's argument that Zhang does not disclose the newly added limitation of determining the frame based on image feature information of the sticker image, the examiner respectfully disagrees. Again, the claims contain no description of the nature of the "image feature information". Therefore, the broadest reasonable interpretation of the limitation includes the time of the effect, which is used to identify the frame timestamp.

In response to applicant's argument that Zhang does not disclose the display position information, the examiner respectfully disagrees. Zhang explicitly discloses fusing the effect to a target object location, a preset position, or a user designated position ([0067], [0068]).

In response to applicant's argument that Zhang does not disclose the frame, the examiner respectfully disagrees. Zhang explicitly discloses identifying the timestamp of the frame corresponding to the effect trigger ([0074], [0077]).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2, 4, 6, 7, 10, 15, 16, 19, 21, 22 and 24 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang et al. (US 2025/0088679), herein Zhang, in view of Wang et al. (US 2019/0124400), herein Wang.
Consider claim 1, Zhang clearly teaches an image processing method, applied to a first client, (Fig. 1) comprising: obtaining, in response to an operation of adding a sticker image to a live stream, a locator corresponding to the sticker image; (Fig. 1: In response to an effect trigger instruction during a live stream, first effect information, including effect identification information, is obtained, [0056], [0061], [0078].) determining a frame corresponding to the sticker image in the live stream based on determining that the frame comprises image feature information of the sticker image; (First time information corresponding to the effect trigger is determined and used to determine a frame corresponding to the effect in the live stream, [0074], [0076].) obtaining a frame identifier of the frame, wherein the frame identifier comprises image feature information of the frame; (Fig. 1: A timestamp comprising the time of the effect trigger is used to identify the frame, [0064], [0074], [0077].) determining display position information of the sticker image in the frame; (Fig. 1: First effect information includes location information of a target object or a preset position for displaying the effect, [0067], [0068].) and sending a sticker addition information to a second client that is viewing the live stream, wherein the sticker addition information comprises the frame identifier, the locator, and the display position information. (Fig. 1: The first effect information, including the effect identification information, first image frame timestamp, and location information for displaying the effect, is sent to the first viewing client, [0069], [0070], [0077].)

However, Zhang does not explicitly teach a uniform resource locator (URL) corresponding to the sticker image; sending the URL. In an analogous art, Wang, which discloses a system for video distribution, clearly teaches a uniform resource locator (URL) corresponding to the sticker image; sending the URL. (Fig. 13: A URL of an image is generated and sent to the client devices, [0174]-[0185].)

Therefore, before the effective filing date of the claimed invention, it would have been obvious to one with ordinary skill in the art to modify the system of Zhang by a uniform resource locator (URL) corresponding to the sticker image; sending the URL, as taught by Wang, to achieve the predictable result of allowing the client device to retrieve the effect.

Consider claim 2, Zhang combined with Wang clearly teaches the determining a frame corresponding to the sticker image in the live stream comprises: detecting whether each live streaming video frame in the live stream contains the sticker image; and in response to that a live streaming video frame contains the sticker image, determining the live streaming video frame as the frame. (Fig. 1: First time information corresponding to the effect trigger operation is used to determine the first image frame and the client device displays the effect based on the frame information, [0064], [0072], [0074] Zhang.)

Consider claim 4, Zhang combined with Wang clearly teaches determining the display position information of the sticker image in the frame comprises: identifying a target reference identifier area in the frame that meets a preset selection condition; and determining relative position information of the sticker image relative to the target reference identifier area as the display position information. (Fig. 4: The first effect information may be fused with a target object in the live stream content, [0067], [0071], [0098]-[0104] Zhang.)

Consider claim 6, Zhang clearly teaches an image processing method, (Fig. 1) comprising: extracting, in response to a sticker addition information sent by a server, a frame identifier comprising image feature information of a frame, a locator, and display position information from the sticker addition message information; (Fig. 1: In response to an effect trigger instruction first effect information, including effect identification information, first image frame timestamp, and location information for displaying the effect, is obtained, [0061], [0064], [0067], [0068], [0074], [0078].) obtaining a sticker image based on the locator, and determining the target in a viewing video stream based on the frame identifier; and displaying the sticker image in the frame based on the display position information. (Fig. 1: The first effect information, including the effect identification information, first image frame, and location information for displaying the effect, is sent to the first client and the effect is displayed based on the first effect information, [0069]-[0072].)

However, Zhang does not explicitly teach extracting a URL and obtaining an image based on the URL. In an analogous art, Wang, which discloses a system for video distribution, clearly teaches extracting a URL and obtaining an image based on the URL. (Fig. 13: A URL of an image is generated and sent to the client devices, [0174]-[0185].)

Therefore, before the effective filing date of the claimed invention, it would have been obvious to one with ordinary skill in the art to modify the system of Zhang by extracting a URL and obtaining an image based on the URL, as taught by Wang, to achieve the predictable result of allowing the client device to retrieve the effect.

Consider claim 7, Zhang combined with Wang clearly teaches the determining frame based on the frame identifier comprises: obtaining a viewing video frame identifier of each viewing video frame in the viewing video stream; and performing matching between the frame identifier and the viewing video frame identifier, and determining a successfully matched viewing video frame as the frame. (Fig. 1: First time information corresponding to the effect trigger operation is used to determine the first image frame and the client device displays the effect based on the frame information, [0064], [0072], [0074] Zhang.)

Consider claim 10, Zhang combined with Wang clearly teaches displaying the sticker image in the frame based on the display position information comprises adding the sticker image to the frame based on the display position information by: in response to that the display position information comprises relative position information of the sticker image relative to a reference identifier area in a frame, identifying the reference identifier area in the viewing video frame; and displaying the sticker image in the frame based on the relative position information and the reference identifier area in the viewing video frame. (Fig. 4: The first effect information may be fused with a target object in the live stream content, [0067], [0071], [0098]-[0104] Zhang.)

Consider claim 15, Zhang clearly teaches an electronic device, comprising: a processor; and a memory configured to store instructions executable by the processor, wherein the processor is configured to read the executable instructions from the memory, and execute the executable instructions to implement an image processing method (Fig. 14, [0167]) comprising: obtaining by a first client, in response to an operation of adding a sticker image to a live stream, a locator corresponding to the sticker image; (Fig. 1: In response to an effect trigger instruction during a live stream, first effect information, including effect identification information, is obtained, [0056], [0061], [0078].)
determining a frame corresponding to the sticker image in the live stream based on determining that the frame comprises image feature information of the sticker image; (First time information corresponding to the effect trigger is determined and used to determine a frame corresponding to the effect in the live stream, [0074], [0076].) obtaining a frame identifier of the frame, wherein the frame identifier comprises image feature information of the frame; (Fig. 1: A timestamp comprising the time of the effect trigger is used to identify the frame, [0064], [0074], [0077].) determining display position information of the sticker image in the frame; (Fig. 1: First effect information includes location information of a target object or a preset position for displaying the effect, [0067], [0068].) and sending a sticker addition information to a second client that is viewing the live stream, wherein the sticker addition information comprises the frame identifier, the locator, and the display position information. (Fig. 1: The first effect information, including the effect identification information, first image frame timestamp, and location information for displaying the effect, is sent to the first viewing client, [0069], [0070], [0077].)

However, Zhang does not explicitly teach a uniform resource locator (URL) corresponding to the sticker image; sending the URL. In an analogous art, Wang, which discloses a system for video distribution, clearly teaches a uniform resource locator (URL) corresponding to the sticker image; sending the URL. (Fig. 13: A URL of an image is generated and sent to the client devices, [0174]-[0185].)

Therefore, before the effective filing date of the claimed invention, it would have been obvious to one with ordinary skill in the art to modify the system of Zhang by a uniform resource locator (URL) corresponding to the sticker image; sending the URL, as taught by Wang, to achieve the predictable result of allowing the client device to retrieve the effect.

Consider claim 16, Zhang clearly teaches a non-transitory computer-readable storage medium having a computer program stored thereon, wherein the computer program is configured to perform an image processing method (Fig. 14, [0167]) comprising: obtaining by a first client, in response to an operation of adding a sticker image to a live stream, a locator corresponding to the sticker image; (Fig. 1: In response to an effect trigger instruction during a live stream, first effect information, including effect identification information, is obtained, [0056], [0061], [0078].) determining a frame corresponding to the sticker image in the live stream based on determining that the frame comprises image feature information of the sticker image; (First time information corresponding to the effect trigger is determined and used to determine a frame corresponding to the effect in the live stream, [0074], [0076].) obtaining a frame identifier of the frame, wherein the frame identifier comprises image feature information of the frame; (Fig. 1: A timestamp comprising the time of the effect trigger is used to identify the frame, [0064], [0074], [0077].) determining display position information of the sticker image in the frame; (Fig. 1: First effect information includes location information of a target object or a preset position for displaying the effect, [0067], [0068].) and sending a sticker addition information to a second client that is viewing the live stream, wherein the sticker addition information comprises the frame identifier, the locator, and the display position information. (Fig. 1: The first effect information, including the effect identification information, first image frame timestamp, and location information for displaying the effect, is sent to the first viewing client, [0069], [0070], [0077].)

However, Zhang does not explicitly teach a uniform resource locator (URL) corresponding to the sticker image; sending the URL. In an analogous art, Wang, which discloses a system for video distribution, clearly teaches a uniform resource locator (URL) corresponding to the sticker image; sending the URL. (Fig. 13: A URL of an image is generated and sent to the client devices, [0174]-[0185].)

Therefore, before the effective filing date of the claimed invention, it would have been obvious to one with ordinary skill in the art to modify the system of Zhang by a uniform resource locator (URL) corresponding to the sticker image; sending the URL, as taught by Wang, to achieve the predictable result of allowing the client device to retrieve the effect.

Consider claim 19, Zhang combined with Wang clearly teaches the determining a frame corresponding to the sticker image in the live stream comprises: detecting whether each live streaming video frame in the live stream contains the sticker image; and in response to that a live streaming video frame contains the sticker image, determining the live streaming video frame as the frame. (Fig. 1: First time information corresponding to the effect trigger operation is used to determine the first image frame and the client device displays the effect based on the frame information, [0064], [0072], [0074] Zhang.)

Consider claim 21, Zhang combined with Wang clearly teaches determining display position information of the sticker image in the frame comprises: identifying a target reference identifier area in the frame that meets a preset selection condition; and determining relative position information of the sticker image relative to the target reference identifier area as the display position information. (Fig. 4: The first effect information may be fused with a target object in the live stream content, [0067], [0071], [0098]-[0104] Zhang.)

Consider claim 22, Zhang combined with Wang clearly teaches the determining a frame corresponding to the sticker image in the live stream comprises: detecting whether each live streaming video frame in the live stream contains the sticker image; and in response to that a live streaming video frame contains the sticker image, determining the live streaming video frame as the frame. (Fig. 1: First time information corresponding to the effect trigger operation is used to determine the first image frame and the client device displays the effect based on the frame information, [0064], [0072], [0074] Zhang.)

Consider claim 24, Zhang combined with Wang clearly teaches the sending a sticker addition information comprises: sending the sticker addition information to a server, which forwards the sticker addition information to the second client for displaying the sticker image. (Fig. 9: First effect information is transmitted to the client via a server, [0133]-[0138] Zhang.)

Claims 3, 5, 8, 9, 11, 12, 20 and 23 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang et al. (US 2025/0088679) in view of Wang et al. (US 2019/0124400) in view of Mu (US 2018/0098028).

Consider claim 3, Zhang combined with Wang clearly teaches determining the display position information of the sticker image in the frame. (Fig. 1: First effect information includes location information for displaying the effect, [0067], [0068] Zhang.)
However, Zhang combined with Wang does not explicitly teach determining first display coordinate information of the sticker image in a live streaming video display area of the frame; determining first display size information of the live streaming video display area; determining coordinate proportion information based on the first display coordinate information and the first display size information; and determining the display position information based on the coordinate proportion information.

In an analogous art, Mu, which discloses a system for video distribution, clearly teaches: determining first display coordinate information of the sticker image in a live streaming video display area of the frame; (Fig. 2: The first position where the user taps the screen is determined, [0033].) determining first display size information of the live streaming video display area; (The screen of the first terminal is divided into a 100x100 grid, [0040].) determining coordinate proportion information based on the first display coordinate information and the first display size information; (When a user performs an operation in a grid whose coordinates are (50, 50), then (50, 50) is determined as the first position, [0040], [0080].) and determining the display position information based on the coordinate proportion information. (The screen of the second terminal is divided into a 100x100 grid and the second position is determined to be the coordinates (50, 50), [0041], [0081].)

Therefore, before the effective filing date of the claimed invention, it would have been obvious to one with ordinary skill in the art to modify the system of Zhang combined with Wang by determining first display coordinate information of the sticker image in a live streaming video display area of the frame; determining first display size information of the live streaming video display area; determining coordinate proportion information based on the first display coordinate information and the first display size information; and determining the display position information based on the coordinate proportion information, as taught by Mu, for the benefit of adapting to different terminal screen sizes.

Consider claim 5, Zhang combined with Mu clearly teaches before sending the sticker addition information, obtaining second display size information of the sticker image in the frame; and updating the sticker addition information based on the second display size information. (Fig. 6: The size of the target virtual article is determined to increase based on the user input, [0045], [0047], [0048] Mu.)

Consider claim 8, Zhang combined with Mu clearly teaches displaying the sticker image in the frame based on the display position information comprises adding the sticker image to the target frame based on the display position information (Fig. 1: First effect information includes location information for displaying the effect, [0067], [0068] Zhang.) by: in response to that the display position information comprises coordinate proportion information between coordinates of the sticker image and a size of a corresponding live streaming video display area, (The screen of the first terminal is divided into a 100x100 grid, when a user performs an operation in a grid whose coordinates are (50, 50), then (50, 50) is determined as the first position, [0040], [0080] Mu.) obtaining third display size information of a viewing video display area corresponding to the frame; determining second display coordinate information based on the third display size information and the coordinate proportion information; and displaying the sticker image in the frame based on the second display coordinate information. (The screen of the second terminal is divided into a 100x100 grid and the second position is determined to be the coordinates (50, 50), [0041], [0081] Mu.)

Consider claim 9, Zhang combined with Mu clearly teaches the coordinate proportion information is determined based on first display coordinate information of the sticker image in a live streaming video display area of a frame and first display size information of a live streaming video display area of a live streaming client. (The screen of the first terminal is divided into a 100x100 grid, when a user performs an operation in a grid whose coordinates are (50, 50), then (50, 50) is determined as the first position, [0040], [0080] Mu.)

Consider claim 11, Zhang combined with Mu clearly teaches in response to that the sticker addition information further comprises fourth display size information of the sticker image, before the displaying the display information of the sticker image in the frame based on the display information, adjusting size information of the sticker image based on the fourth display size information. (Fig. 6: The size of the target virtual article can be increased, [0045], [0047], [0048] Mu.)

Consider claim 12, Zhang combined with Mu clearly teaches the fourth display size information is display size information of the sticker image in the frame. (Fig. 6: The size of the target virtual article can be increased, [0045], [0047], [0048] Mu.)

Consider claim 20, Zhang combined with Mu clearly teaches determining display information of the sticker image in the frame (Fig. 1: First effect information includes location information for displaying the effect, [0067], [0068] Zhang.) comprises: determining first display coordinate information of the sticker image in a live streaming video display area of the frame; (Fig. 2: The first position where the user taps the screen is determined, [0033] Mu.) determining first display size information of the live streaming video display area; (The screen of the first terminal is divided into a 100x100 grid, [0040] Mu.) determining coordinate proportion information based on the first display coordinate information and the first display size information; (When a user performs an operation in a grid whose coordinates are (50, 50), then (50, 50) is determined as the first position, [0040], [0080] Mu.) and determining the display position information based on the coordinate proportion information. (The screen of the second terminal is divided into a 100x100 grid and the second position is determined to be the coordinates (50, 50), [0041], [0081] Mu.)

Consider claim 23, Zhang combined with Mu clearly teaches determining display position information of the sticker image in the frame (Fig. 1: First effect information includes location information for displaying the effect, [0067], [0068] Zhang.) comprises: determining first display coordinate information of the sticker image in a live streaming video display area of the frame; (Fig. 2: The first position where the user taps the screen is determined, [0033] Mu.) determining first display size information of the live streaming video display area; (The screen of the first terminal is divided into a 100x100 grid, [0040] Mu.) determining coordinate proportion information based on the first display coordinate information and the first display size information; (When a user performs an operation in a grid whose coordinates are (50, 50), then (50, 50) is determined as the first position, [0040], [0080] Mu.) and determining the display position information based on the coordinate proportion information. (The screen of the second terminal is divided into a 100x100 grid and the second position is determined to be the coordinates (50, 50), [0041], [0081] Mu.)

Conclusion

In the case of amending the claimed invention, applicant is respectfully requested to indicate the portion(s) of the specification which dictate(s) the structure relied on for proper interpretation and also to verify and ascertain the metes and bounds of the claimed invention.

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOHN R SCHNURR whose telephone number is (571)270-1458. The examiner can normally be reached M-F 6a-4p.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Brian Pendleton, can be reached at (571)272-7527.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JOHN R SCHNURR/
Primary Examiner, Art Unit 2425

Prosecution Timeline

May 30, 2024: Application Filed
Nov 03, 2025: Non-Final Rejection (§103)
Feb 04, 2026: Response Filed
Feb 24, 2026: Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12593962: ENDOSCOPE SYSTEM AND COORDINATE SYSTEM CORRECTION METHOD (granted Apr 07, 2026; 2y 5m to grant)
Patent 12598359: DISPLAY DEVICE (granted Apr 07, 2026; 2y 5m to grant)
Patent 12587703: VIDEO DISPLAY SYSTEM, OBSERVATION DEVICE, INFORMATION PROCESSING METHOD, AND RECORDING MEDIUM (granted Mar 24, 2026; 2y 5m to grant)
Patent 12587729: Method And System For A Trail Camera With Modular Fresnel Lenses (granted Mar 24, 2026; 2y 5m to grant)
Patent 12579603: IMAGE PROJECTION DEVICE AND METHOD FOR OPERATING THE SAME (granted Mar 17, 2026; 2y 5m to grant)
Study what changed to get past this examiner, based on the five most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 72%
With Interview: 83% (+10.8%)
Median Time to Grant: 2y 6m
PTA Risk: Moderate
Based on 943 resolved cases by this examiner. Grant probability derived from career allow rate.
