DETAILED ACTION

Claims 1-20 are pending in the application.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claim(s) 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Li et al. (hereinafter "Li"), CN111901658A (paragraph and figure references are based on the English language translation submitted by the Applicant with the 12/21/2023 IDS), and further in view of Latulipe et al., U.S. Patent 10,079,039 (hereinafter "Latulipe").

Referring to claim 1, Li teaches a video processing method, applied to a terminal of a comment sender, wherein the method comprises: in response to a comment data input instruction for a target video, acquiring a comment material (based on a user's comment operation, obtain comment information for the video, such as the input of a text comment shown in Figure 5) (Li: paragraphs [0066]-[0069]); and in response to a request instruction for sending comment data, if the comment data has been confirmed, presenting the comment data during a playing process of the target video, wherein the comment data comprises the comment material (after the user has confirmed the comment by setting the position, time, style, etc. of the comment, the text comment is displayed in the video playback interface in response to a publishing operation from the user) (Li: paragraphs [0004]-[0005], [0066]-[0069], [0076], [0132]-[0135], [0169] and [0178]-[0180]; further shown in Figures 3-5, 20 and 22).

Li teaches all of the claimed features except "wherein the comment material comprises an image". Li teaches that the comment material comprises text. Similar to Li, Latulipe also teaches acquiring a comment material in response to a comment data input instruction for a target video (in response to user selection of the "Add Comment" button in Figure 6, obtain comment information such as text for the video) (Latulipe: column 13, lines 22-44 and column 15, lines 1-16).
In addition, Latulipe teaches that the comment material comprises an image (the comment can be text, sketch, voice, video or any combination of these modalities) (Latulipe: column 15, lines 3-6). Because both Li and Latulipe teach comment materials, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to substitute one type of comment material for another to achieve the predictable result of allowing different types of comments to be added to a video, in this case, image comments.

Referring to claim 2, Li, as modified, teaches the method according to claim 1, wherein the in response to a request instruction for sending comment data, if the comment data has been confirmed, presenting the comment data in a playing process of the target video comprises: in response to the request instruction for sending the comment data, sending the comment data to a terminal of a target video poster, so that the target video poster confirms the comment data (as shown in Figures 15-17 for example, the comment is sent to the producer/author of the target video in order for the producer/author to confirm, i.e. "flip" the comment) (Li: paragraphs [0139] and [0141]-[0146]); and if the comment data has been confirmed, presenting the comment data in the playing process of the target video (as shown in Figures 15-17 for example, if the producer/author confirms the comment by flipping it or by not deleting it, then the comment can be displayed) (Li: paragraphs [0139], [0141]-[0146] and [0169]; this is further shown in Figure 20).

Referring to claim 3, Li, as modified, teaches the method according to claim 1, wherein the comment data further comprises attribute information of the comment material (the comment includes setting information such as display position, time, duration, style, transparency, animation, etc.)
(Li: paragraphs [0076]-[0077], [0086]-[0087], [0125]-[0131] and [0149]-[0150]); and after the acquiring a comment material, the method comprises: displaying an editing interface for the comment material (for example, Figure 12 shows an editing interface with a progress bar that can be toggled to set attribute information such as display time period for the comment) (Li: paragraphs [0086]-[0101]); and in response to a setting instruction for the comment material in the editing interface, setting the attribute information of the comment material in the target video according to the setting instruction (in response to an input instruction in the editing interface, such as toggle operations for the playback time, attribute information such as the display time period for the comment can be set) (Li: paragraphs [0086]-[0101]).

Referring to claim 4, Li, as modified, teaches the method according to claim 3, wherein setting the attribute information of the comment material in the target video according to the setting instruction comprises: setting at least one of a style, position, time information, and motion trajectory of the comment material in the target video according to the setting instruction (the comment includes setting information such as display position, time, duration, style, transparency, animation, etc.) (Li: paragraphs [0076]-[0077], [0086]-[0087], [0125]-[0131] and [0149]-[0150]).
Referring to claim 5, Li, as modified, teaches the method according to claim 4, wherein setting the style and/or position of the comment material in the target video according to the setting instruction comprises: displaying a video picture of the target video at a target moment in the editing interface, and adding a canvas on the video picture, wherein the canvas is used for displaying the comment material (as shown in Figures 4-5 for example, a canvas/input box for displaying the comment is displayed on top of a video image) (Li: paragraph [0069]); and in response to a setting instruction for a style and/or position of the canvas, determining the style and/or position of the comment material displayed on the canvas (the user can adjust the style and the position of the comment input box) (Li: paragraph [0150]).

Referring to claim 6, Li, as modified, teaches wherein setting the time information of the comment material in the target video according to the setting instruction comprises: displaying a video track of the target video and a comment track of the comment material in the editing interface (the editing interface shown in Figure 6 includes a video timeline bar 20 and a segment timeline bar 30 that is associated with comments) (Latulipe: column 13, lines 22-60); in response to a dragging instruction for the comment track of the comment material, determining a start point and an end point of the comment track of the comment material (the user can slide the initial position handle and final position handle along the segment timeline bar to associate with the comment) (Latulipe: column 13, lines 22-60 and column 15, lines 17-36); and determining the time information of the comment material in the target video according to a correspondence relationship between the video track of the target video and the comment track of the comment material (the comment corresponding to the segment with the initial position handle and final position handle is stored in
association with a start and end time for the video) (Latulipe: column 13, lines 22-60 and column 15, lines 12-36).

Referring to claim 7, Li teaches the method according to claim 4, wherein the setting the motion trajectory of the comment material in the target video according to the setting instruction comprises: displaying a video picture of the target video at a target moment in the editing interface, and adding a canvas on the video picture, wherein the canvas is used for displaying the comment material (as shown in Figures 4-5 for example, a canvas/input box for displaying the comment is displayed on top of a video image) (Li: paragraph [0069]); displaying a video track or a progress bar of the target video in the editing interface (as shown in Figure 12 for example, a progress bar for the video is displayed) (Li: paragraphs [0087]-[0091]); and in response to a setting instruction for a motion trajectory on the canvas, determining canvas positions corresponding to different time points in the video track or the progress bar to determine the motion trajectory of the comment material (the user can edit the comment by dragging the comment input box to different locations at a particular time in the video) (Li: paragraphs [0076]-[0101]).
Referring to claim 8, Li, as modified, teaches the method according to claim 6, wherein determining the time information of the comment material in the target video according to a correspondence relationship between the video track of the target video and the comment track of the comment material comprises: according to the correspondence relationship between the video track of the target video and the comment track of the comment material, establishing a mapping relationship between each frame of picture of the comment material and a timeline of the target video, or establishing a mapping relationship between each frame of picture of the comment material and a video frame of the target video, or establishing a mapping relationship between a timeline of the comment material and a timeline of the target video (the video timeline bar and segment timeline bar 30 are associated with each other, as shown in Figure 6; for example, comment markers associated with the start and end of the segment comment are displayed on the video timeline bar) (Latulipe: column 13, lines 22-60, column 15, lines 12-36 and column 16, lines 47-55).

Referring to claim 9, Li teaches the method according to claim 2, wherein the method further comprises: receiving notification information sent by the terminal of the target video poster after the target video poster has confirmed the comment data, and displaying the notification information (if the producer/author of the video confirms the comment by flipping it, a message of "Your barrage has been flipped" can be sent to the viewer) (Li: paragraph [0169]).

Referring to claim 10, Li, as modified, teaches the method according to claim 1, wherein the image comprises at least one of a static picture, a dynamic picture, or a video clip (the comment can be text, sketch, voice, video or any combination of these modalities) (Latulipe: column 15, lines 3-6).
Referring to claim 11, Li teaches a video processing method, applied to a terminal of a target video poster, wherein the method comprises: receiving comment data sent by a terminal of a comment sender, wherein the comment data comprises a comment material (the terminal of the comment sender can send comment information to the terminal of the producer/author of the video) (Li: paragraphs [0139]-[0148] and [0169]; further shown in Figure 20); and in response to a confirming instruction for the comment data, adding the comment data into the target video (as shown in Figures 15-17 for example, the producer/author of the video can confirm the comment by flipping it; the comment flipped by the producer/author is synchronized and displayed with the video) (Li: paragraphs [0139]-[0148] and [0169]; further shown in Figure 20).

Li teaches all of the claimed features except that the comment material comprises an image. Li teaches that the comment material comprises text. Similar to Li, Latulipe also teaches adding comment data into a target video (in response to user selection of the "Add Comment" button in Figure 6, a comment can be added to the video) (Latulipe: column 13, lines 22-44 and column 15, lines 1-16). In addition, Latulipe teaches that the comment material comprises an image (the comment can be text, sketch, voice, video or any combination of these modalities) (Latulipe: column 15, lines 3-6). Because both Li and Latulipe teach comment materials, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to substitute one type of comment material for another to achieve the predictable result of allowing different types of comments to be added to a video, in this case, image comments.
Referring to claim 12, Li, as modified, teaches the method according to claim 11, wherein the method further comprises: in response to a comment preview instruction, displaying a preview interface (the terminal of the producer/author of the video can provide a preview of the comment information) (Li: paragraphs [0140]-[0144]), displaying the comment material in the target video in the preview interface according to the comment data (the preview of the comment information is displayed according to attributes such as position and playing time of the comment) (Li: paragraphs [0140]-[0144]), and displaying presentation time information of the comment material on a progress bar corresponding to the target video (the image sequence at the time corresponding to the comment is highlighted on the progress bar shown in Figure 12) (Li: paragraphs [0091]-[0092]).

Referring to claim 13, Li, as modified, teaches the method according to claim 11, wherein the method further comprises: if comment data sent by terminals of a plurality of different comment senders has been received, screening and/or sequencing and presenting the comment data according to a predetermined policy (if comment data has been sent by multiple users for the same time period, i.e. overlapping comment information in the playback time period, the comments can be flipped and presented based on a policy such as only allowing a target number of comments to be displayed) (Li: paragraph [0145]).
Referring to claim 14, Li, as modified, teaches the method according to claim 13, wherein the method further comprises: adding the comment data into the target video according to an association degree between the comment material and the target video (the comment data is displayed with the video during playback based on an association degree so that comment information posted by the local user is displayed at the second display position and comment information posted by other users is displayed around the comment by the local user in a dispersed manner) (Li: paragraphs [0154]-[0155]).

Referring to claim 15, Li, as modified, teaches the method according to claim 11, wherein the method further comprises: in response to a comment data modification instruction, modifying attribute information of the comment material in the comment data; or sending a modification request performed on the comment material and/or the attribute information of the comment material to the terminal of the comment sender, so that the comment sender modifies the comment data according to the modification request (the producer/author can delete the comment in response to a comment deletion request) (Li: paragraph [0169]).

Referring to claim 16, Li, as modified, teaches the method according to claim 11, wherein the image comprises at least one of a static picture, a dynamic picture, or a video clip (the comment can be text, sketch, voice, video or any combination of these modalities) (Latulipe: column 15, lines 3-6).
Referring to claim 17, Li teaches an electronic device, comprising: at least one processor, a memory, and a display unit, wherein the memory stores computer-executable instructions (Li: paragraphs [0217]-[0224]; further shown in Figures 1 and 23); and the at least one processor executes the computer-executable instructions stored in the memory to cause the at least one processor to: in response to a comment data input instruction for a target video, acquire a comment material (based on a user's comment operation, obtain the comment information for the video) (Li: paragraphs [0066]-[0069]); and in response to a request instruction for sending comment data, if the comment data has been confirmed, present the comment data during a playing process of the target video, wherein the comment data comprises a comment material (after the user has confirmed the comment by setting the position, time, style, etc. of the comment, the comment is displayed in the video playback interface in response to a publishing operation from the user) (Li: paragraphs [0004]-[0005], [0066]-[0069], [0076], [0132]-[0135], [0169] and [0178]-[0180]; further shown in Figures 3-5, 20 and 22).

Li teaches all of the claimed features except that the comment material comprises an image. Li teaches that the comment material comprises text. Similar to Li, Latulipe also teaches acquiring a comment material in response to a comment data input instruction for a target video (in response to user selection of the "Add Comment" button in Figure 6, obtain comment information such as text for the video) (Latulipe: column 13, lines 22-44 and column 15, lines 1-16). In addition, Latulipe teaches that the comment material comprises an image (the comment can be text, sketch, voice, video or any combination of these modalities) (Latulipe: column 15, lines 3-6).
Because both Li and Latulipe teach comment materials, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to substitute one type of comment material for another to achieve the predictable result of allowing different types of comments to be added to a video, in this case, image comments.

Referring to claim 18, Li, as modified, teaches the electronic device according to claim 17, wherein the image comprises at least one of a static picture, a dynamic picture, or a video clip (the comment can be text, sketch, voice, video or any combination of these modalities) (Latulipe: column 15, lines 3-6).

Referring to claim 19, Li, as modified, teaches the electronic device according to claim 17, wherein the computer-executable instructions to cause the at least one processor to: in response to a request instruction for sending comment data, if the comment data has been confirmed, presenting the comment data in a playing process of the target video comprise the computer-executable instructions to cause the at least one processor to: in response to the request instruction for sending the comment data, sending the comment data to a terminal of a target video poster, so that the target video poster confirms the comment data (as shown in Figures 15-17 for example, the comment is sent to the producer/author of the target video in order for the producer/author to confirm, i.e. "flip" the comment) (Li: paragraphs [0139] and [0141]-[0146]); and if the comment data has been confirmed, presenting the comment data in the playing process of the target video (as shown in Figures 15-17 for example, if the producer/author confirms the comment by flipping it or by not deleting it, then the comment can be displayed) (Li: paragraphs [0139], [0141]-[0146] and [0169]; this is further shown in Figure 20).
Referring to claim 20, Li, as modified, teaches the electronic device according to claim 17, wherein the comment data further comprises attribute information of the comment material (the comment includes setting information such as display position, time, duration, style, transparency, animation, etc.) (Li: paragraphs [0076]-[0077], [0086]-[0087], [0125]-[0131] and [0149]-[0150]); and after the acquiring a comment material, the computer-executable instructions further comprise computer-executable instructions to cause the at least one processor to: display an editing interface for the comment material (for example, Figure 12 shows an editing interface with a progress bar that can be toggled to set attribute information such as display time period for the comment) (Li: paragraphs [0086]-[0101]); and in response to a setting instruction for the comment material in the editing interface, set the attribute information of the comment material in the target video according to the setting instruction (in response to an input instruction in the editing interface, such as toggle operations for the playback time, attribute information such as the display time period for the comment can be set) (Li: paragraphs [0086]-[0101]).

The prior art made of record on form PTO-892 and not relied upon is considered pertinent to applicant's disclosure. Applicant is required under 37 C.F.R. § 1.111(c) to consider these references fully when responding to this action. The documents cited therein (US20160261919, US20190206408, US20180130094) teach similar methods of allowing a user to input comment data for a video and presenting the comment data during a playing process of the video.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TING ZHOU LEE whose telephone number is (571)272-4058.
The examiner can normally be reached on Monday – Thursday 9AM – 1PM EST. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kieu Vu, can be reached on (571) 272-4057. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TING Z LEE/
Primary Examiner, Art Unit 2171