Prosecution Insights
Last updated: April 19, 2026
Application No. 18/572,461

RESOURCE PLAYING METHOD AND APPARATUS, ELECTRONIC DEVICE, AND STORAGE MEDIUM

Status: Final Rejection (§103)
Filed: Dec 20, 2023
Examiner: NAZAR, AHAMED I
Art Unit: 2178
Tech Center: 2100 — Computer Architecture & Software
Assignee: BEIJING ZITIAO NETWORK TECHNOLOGY CO., LTD.
OA Round: 2 (Final)

Predictions
Grant Probability: 53% (Moderate)
Expected OA Rounds: 3-4
Expected Time to Grant: 3y 11m
Grant Probability with Interview: 88%

Examiner Intelligence

Career Allow Rate: 53% (202 granted / 378 resolved; -1.6% vs TC avg)
Interview Lift: +35.1% (strong; allow rate for resolved cases with an interview vs. without)
Avg Prosecution: 3y 11m
Currently Pending: 29
Total Applications: 407 (across all art units)

Statute-Specific Performance

§101: 9.2% (-30.8% vs TC avg)
§103: 59.7% (+19.7% vs TC avg)
§102: 15.3% (-24.7% vs TC avg)
§112: 9.6% (-30.4% vs TC avg)

Tech Center averages are estimates. Based on career data from 378 resolved cases.
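The headline figures above are simple arithmetic over the career counts in the Examiner Intelligence section. A minimal sketch — assuming, as the figures suggest, that the allow rate is granted/resolved and that the "interview lift" is a percentage-point difference (both readings are assumptions about the dashboard's methodology, not documented definitions):

```python
# Career counts reported in the Examiner Intelligence section above.
granted, resolved = 202, 378

# Career allow rate: granted cases as a share of all resolved cases.
allow_rate = 100 * granted / resolved
print(f"career allow rate: {allow_rate:.1f}%")  # prints: career allow rate: 53.4%

# The "+35.1% interview lift" is read here (an assumption) as a percentage-point
# gap: allow rate with an interview minus allow rate without one.
with_interview = 88.0  # reported grant probability with interview
implied_no_interview = with_interview - 35.1
print(f"implied no-interview baseline: {implied_no_interview:.1f}%")  # prints: 52.9%
```

On this reading, 53.4% rounds to the 53% headline, and the implied no-interview baseline (~52.9%) sits just below the overall career rate, which would mean interviews occurred in only a small fraction of the 378 resolved cases.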

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

This communication is responsive to the amendment filed 12/10/2025. Claims 1, 13, and 14 have been amended, claim 12 has been canceled, and claim 21 has been added. In light of Applicant's amendment, the previous rejection of claim 12 under 35 U.S.C. 112(b) has been withdrawn. Claims 1-11 and 13-21 are pending, with claims 1, 13, and 14 as independent claims.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-11 and 13-20 are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. (US 2022/0264053, filed 10/30/2019, hereinafter Wang) in view of Deng (US 2023/0224545, filed 6/10/2020).

Claim 1. A resource playing method, comprising: in response to [a triggering] for a first draft resource identification, displaying a first draft resource on a playing interface; and switching the first draft resource to a second draft resource on the playing interface in response to a first preset triggering on the playing interface,

Wang discloses in [0029] "A picture taken by the first camera forms the first video stream, and a picture taken by the second camera forms the second video stream. During video recording, the two video files may be independent, that is, they do not interfere with each other." And in [0036-0039] "the terminal simultaneously displays the first video stream and the second video stream in a preset mode (for example, picture in picture or split screen). In some embodiments, the terminal simultaneously displaying the first video stream and the second video stream may include: determining display area information of the first video stream and the second video stream, and then displaying the first video stream in a first area and the second video stream in a second area according to the display area information… The timeline 23 may also show a switching time point of the first video stream 21 and the second video stream 22, that is, show the time point at which the video streams are switched in the second area. For example, in the display area where the first video stream 21 of FIG. 2 is located, the first video stream 21 is displayed in a first time period, the second video stream 22 is displayed in a second time period, and the first video stream 21 is displayed in a third time period. As the display areas are switched, the timeline 23 may record time periods during which the first video stream 21 is displayed in the area, and time periods during which the second video stream 22 is displayed." And in [0041-0043] "if displayed in a picture-in-picture manner, a smaller picture may be adsorbed onto four corners of a larger picture, for example, as shown in FIG. 2. In some embodiments, a position of the smaller picture is realized by intelligent recognition, that is, the smaller picture is adsorbed onto a position that has little influence on the display of the larger picture, for example, a position where people and scenery are scarce in the larger picture or a blank location… during video recording, the display areas of the first video stream 21 and the second video stream 22 may be switched in response to a preset operation. For example, the preset operation may include a click trigger, a slide trigger, a voice command, etc., on the display areas of the first video stream 21 and the second video stream 22. FIG. 4 and FIG. 5 respectively show the display areas of the first video stream 21 and the second video stream 22 corresponding to FIGS. 2 and 3 after switching." (emphasis added)

Examiner note: video stream 21 and video stream 22 may be the first draft resource and the second draft resource because the two video streams are in the creation stage, that is, during video recording by the front and back cameras of a terminal. The switching module 602 is configured to receive a switching command and perform a preset switching operation on the first video stream and the second video stream according to the switching command. The picture taken by the first camera may generate the first video stream identification.

wherein the first draft resource and the second draft resource are acquired based on resource files, [code files], and configuration files which are stored,

Wang discloses in [0035] "During video recording, one audio file and two video files are obtained in the same time period, wherein the audio file and two video files may be independent, that is, they do not interfere with each other." And in [0055] "a video processing device 600, which includes a camera enabling module 601, a switching module 602, a recording module 603 and a timeline generation module 604. The camera enabling module 601 is configured to turn on a first camera located at a first side of a terminal and turn on a second camera located at a second side of the terminal, so as to obtain a first video stream through the first camera and a second video stream through the second camera. The switching module 602 is configured to receive a switching command and perform a preset switching operation on the first video stream and the second video stream according to the switching command. The recording module 603 is configured to record receiving time of the switching command." (emphasis added)

Examiner note: the two video streams may be acquired based on two video (resource) files being recorded from the front and back cameras of the terminal. The configuration files may be the switching module 602, which implements switching from the first video stream to the second video stream.

the first draft resource and the second draft resource are [video clips stored in a collection of drafts] of a video application, and a draft resource in the collection is a draft to be posted,

Wang teaches in [0035] "It should be understood that not only may the audio file obtained during video recording be synthesized with the corresponding video file, but also other audio files and the captured video file may be synthesized and edited… other audio files include audio streams obtained by audio recording equipment, other audio files stored locally or audio files obtained from the network, etc." And in [0048] "after video recording is completed, the obtained first video stream 21 and/or second video stream 22 may be exported or shared to obtain an exported video which may include the first video stream 21, the second video stream 22, a synthetic video with the first video stream 21 and the second video stream 22 or a picture-in-picture video with the first video stream 21 and the second video stream 22. That is, the exported video may be either a single video stream or a combined video of two video streams. In addition, these videos may be combined with audios in corresponding time periods. In the synthetic video with the first video stream 21 and the second video stream 22, the same video recording time point may include only one video stream, may also include two video streams, or a synthetic video corresponding to an audio stream obtained by synthesizing the first video stream 21 or the second video stream 22 with an audio file in the corresponding time period." (emphasis added)

Examiner note: the first video file (first video clip) and the second video file (second video clip) can be streamed from storage local to the client device. Also, the phrase "during video recording" may indicate that the recorded content has to be captured first before it can be edited, such that video stream 21 and video stream 22 have been recorded before editing to combine selected portions from the captured video streams (draft video files), resulting in a synthetic video to be exported or shared.

Wang does not explicitly disclose video clips stored in a collection of drafts. However, Toussi, in an analogous art, teaches in [P. 2] "events like team based sports might be distributed over a large area or could happen so fast (as it is in ice hockey and football) to be covered only by one camera; hence the need for the real-time coordination of several cameras is extremely felt [19]. In such multi-camera settings, each camera starts filming from a position that is defined by the director; their corresponding video streams are simultaneously transmitted to the production control room. There, the director by having multiple views of both live and pre-recorded items on an array of monitors, can manage a suitable selection and combination of streams to provide the spectators of the final broadcast with the best viewing experience." (emphasis added)

Examiner note: as can be seen, captured video streams become pre-recorded items (video clips), that is, a collection of drafts, such that the director selects desired views of the recorded items to generate the final resulting video that can be broadcast.

Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Wang with the teaching of Toussi because "Employing this sort of 'capture-and-share-straightaway' [23] services allows people to instantly share their captured mobile images through manageable web pages instead of using emails, paper prints and web publishing [23,26]. Mobile phones in this way enhance a shared experience among the spectators of a live event. Moreover, in distributed events like car rally or bicycle racing, this experience will become even more enjoyable [16,18]." Toussi [Introduction].

Wang does not explicitly disclose in response to a triggering for a first draft resource identification, nor code files. Deng also teaches in [0033] "a video playing request is received, where the video playing request includes first video information. The video playing request indicates to play video data corresponding to the first video information through a target page." And in [0058] "the preview video frames are played in the video preview page. The video frames of the parsed video data may be played through the video view, so that video data is played. The video preview page may be a hypertext markup language (HTML) page, and each node of the page is represented by a tree-like page structure." (emphasis added)

Examiner note: the video playing request may trigger the first video stream to be displayed in the larger area, such as the first video stream 21 in FIG. 2, as taught by Wang. The code files may be video data configured in HTML video file format for display in a video preview page.

Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Wang with the teaching of Deng because "for the same video, the playing progress of the video preview page is consecutive with that of the target page, effectively solving the problem of video discontinuity when switching from the preview page to the play page in the conventional technology, and improving the user experience." Deng [0017].

Claim 2. The rejection of the method of claim 1 is incorporated, wherein, in response to the triggering for the first draft resource identification, displaying the first draft resource on the playing interface comprises:

Wang does not explicitly disclose acquiring the resource files, the code files, and the configuration files corresponding to the first draft resource in response to the triggering for the first draft resource identification; generating a video file corresponding to the first draft resource based on the resource files, the code files, and the configuration files; and loading the video file to a memory. However, Deng, in an analogous art, teaches in [0033-0037] "The first video information in the video playing request may be the first video identifier. Based on the first video identifier, the video data corresponding to the first video identifier may be acquired… When the user clicks on a live streaming interface to select and enter a live room for watching, the first video information in the triggered video playing request may be the live room number of the live room clicked by the user. In another possible embodiment, the video address may be a uniform resource locator (URL) address… a video is played in the target page based on video playing progress information corresponding to the video preview page and video data loaded on the video preview page, in response to the first video information being the same as the second video information." (emphasis added)

Examiner note: video data may be acquired utilizing the video address, and the video data may be loaded to the display memory of the video preview page.

Further, Deng teaches creating a player instance, and playing, by the player instance, the first draft resource on the playing interface based on the video file, in [0055] "The video playing component may include a video player and a video view. For example, the video player may be LivePlayer, and the video view may be TextureView. The TextureView may be bound to any parent view. For example, the video data corresponding to the video preview page may be acquired based on the video player and the second video information." (emphasis added)

Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Wang with the teaching of Deng because "for the same video, the playing progress of the video preview page is consecutive with that of the target page, effectively solving the problem of video discontinuity when switching from the preview page to the play page in the conventional technology, and improving the user experience." Deng [0017].

Claim 3.
The rejection of the method of claim 1 is incorporated, wherein, in response to the triggering for the first draft resource identification, displaying the first draft resource on the playing interface comprises:

Wang does not explicitly disclose acquiring the resource files, the code files, and the configuration files corresponding to the first draft resource in response to the triggering for the first draft resource identification. However, Deng, in an analogous art, teaches in [0033-0037] "The first video information in the video playing request may be the first video identifier. Based on the first video identifier, the video data corresponding to the first video identifier may be acquired… When the user clicks on a live streaming interface to select and enter a live room for watching, the first video information in the triggered video playing request may be the live room number of the live room clicked by the user. In another possible embodiment, the video address may be a uniform resource locator (URL) address… a video is played in the target page based on video playing progress information corresponding to the video preview page and video data loaded on the video preview page, in response to the first video information being the same as the second video information." (emphasis added)

Examiner note: video data may be acquired utilizing the video address, and the video data may be loaded to the display memory of the video preview page.

As to creating a player instance, and transmitting the first associated parameter to the player instance, Deng also teaches in [0055] "The video playing component may include a video player and a video view. For example, the video player may be LivePlayer, and the video view may be TextureView. The TextureView may be bound to any parent view. For example, the video data corresponding to the video preview page may be acquired based on the video player and the second video information." (emphasis added)

As to playing, by the player instance, the first draft resource that is configured on the playing interface based on the first associated parameter, Deng also teaches in [0057-0058] "the preview video may be played in the video preview page, which facilitates subsequent continuous playing of the video data in the target page, and improves the reusability of the video data. In addition, the data may be acquired and parsed with the video player in the video playing component based on the video address, so that the video is played based on the position of the video view in the page." (emphasis added)

Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Wang with the teaching of Deng because "for the same video, the playing progress of the video preview page is consecutive with that of the target page, effectively solving the problem of video discontinuity when switching from the preview page to the play page in the conventional technology, and improving the user experience." Deng [0017].

Wang further teaches loading the resource files, the code files, and the configuration files to a first preset editor instance, in [0059] "Editing the video to be edited includes: modifying a playing effect identifier associated with the time period to obtain a video stream corresponding to a playing effect. For example, if a certain playing effect identifier is moved from a first time period to a second time period, if the video stream is output, the video stream corresponding to the second time period is output according to the playing effect." (emphasis added)

As to, based on the resource files, the code files, and the configuration files, configuring the first draft resource to generate a first associated parameter of the first preset editor instance, Wang discloses in [0059] "Editing the video to be edited includes: modifying a playing effect identifier associated with the time period to obtain a video stream corresponding to a playing effect. For example, if a certain playing effect identifier is moved from a first time period to a second time period, if the video stream is output, the video stream corresponding to the second time period is output according to the playing effect." (emphasis added)

Examiner note: modifying the playing effect may be the first associated parameter.

Claim 4. The rejection of the method of claim 3 is incorporated, wherein, before loading the resource files, the code files, and the configuration files to the first preset editor instance, the method further comprises: loading the first preset editor instance to a memory, and initializing the first preset editor instance.

Wang discloses in [0058-0059] "a video editing method is further provided, and includes: receiving a video to be edited, wherein the video to be edited includes at least two video streams and timeline information, and the timeline information indicates a corresponding relationship between receiving time of a switching operation on the at least two video streams and the switching operation." (emphasis added)

Examiner note: that a video editing method is provided may indicate loading and initializing a video editor instance, or acquiring the first preset editor instance from an instance pool in the memory, wherein the instance pool comprises at least two preset editor instances that are initialized.

Claim 5.
The rejection of the method of claim 4 is incorporated, wherein, while acquiring the first preset editor instance, the method further comprises: acquiring at least one second preset editor instance, and setting the sequence relationship between the at least one second preset editor instance and the first preset editor instance.

Wang discloses in [0058-0059] "a video editing method is further provided, and includes: receiving a video to be edited, wherein the video to be edited includes at least two video streams and timeline information, and the timeline information indicates a corresponding relationship between receiving time of a switching operation on the at least two video streams and the switching operation… editing the video to be edited includes: resetting presentation modes of the at least two video streams. For example, a synthetic video may be changed to a picture-in-picture video and vice versa. In some embodiments, editing the video to be edited includes: adjusting the timeline to obtain another timeline. For example, the switching time point of the display area may be changed to obtain a new timeline, so as to allow the video to be presented in a different way." (emphasis added)

Examiner note: that a video editing method is provided may indicate loading and initializing a video editor instance.

Claim 6. The rejection of the method of claim 5 is incorporated, wherein switching the first draft resource to the second draft resource on the playing interface in response to the first preset triggering on the playing interface comprises: loading the resource files, the code files, and the configuration files corresponding to the second draft resource to the second preset editor instance; and configuring the second draft resource based on the resource files, the code files, and the configuration files corresponding to the second draft resource, so as to generate a second associated parameter of the second preset editor instance.

Wang discloses in [0058-0059] "a video editing method is further provided, and includes: receiving a video to be edited, wherein the video to be edited includes at least two video streams and timeline information, and the timeline information indicates a corresponding relationship between receiving time of a switching operation on the at least two video streams and the switching operation… editing the video to be edited includes: resetting presentation modes of the at least two video streams. For example, a synthetic video may be changed to a picture-in-picture video and vice versa. In some embodiments, editing the video to be edited includes: adjusting the timeline to obtain another timeline. For example, the switching time point of the display area may be changed to obtain a new timeline, so as to allow the video to be presented in a different way." (emphasis added)

Examiner note: that a video editing method is provided may indicate loading and initializing a video editor instance.

As to transmitting the second associated parameter to the player instance, Wang discloses in [0058-0059] "editing the video to be edited includes: resetting presentation modes of the at least two video streams. For example, a synthetic video may be changed to a picture-in-picture video and vice versa. In some embodiments, editing the video to be edited includes: adjusting the timeline to obtain another timeline. For example, the switching time point of the display area may be changed to obtain a new timeline, so as to allow the video to be presented in a different way." (emphasis added)

Examiner note: changing the timeline to obtain a new timeline may be the second associated parameter.

Wang does not explicitly disclose playing, by the player instance, the second draft resource that is configured on the playing interface based on the second associated parameter. Deng also teaches in [0057-0058] "the preview video may be played in the video preview page, which facilitates subsequent continuous playing of the video data in the target page, and improves the reusability of the video data. In addition, the data may be acquired and parsed with the video player in the video playing component based on the video address, so that the video is played based on the position of the video view in the page." (emphasis added)

Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Wang with the teaching of Deng because "for the same video, the playing progress of the video preview page is consecutive with that of the target page, effectively solving the problem of video discontinuity when switching from the preview page to the play page in the conventional technology, and improving the user experience." Deng [0017].

Claim 7.
The rejection of the method of claim 6 is incorporated, wherein, before switching the first draft resource to the second draft resource on the playing interface, the method further comprises: displaying a preset picture on the playing interface in response to the first preset triggering on the playing interface, wherein the preset picture comprises a predetermined frame image of the second draft resource.

Wang discloses in [0029-0033] "A picture taken by the first camera forms the first video stream, and a picture taken by the second camera forms the second video stream. During video recording, the two video files may be independent, that is, they do not interfere with each other… by displaying two videos through the timelines, users may know which video is displayed in a more direct manner, or know which area is for displaying a video corresponding to a timeline in a more direct manner." (emphasis added)

Claim 8. The rejection of the method of claim 3 is incorporated, wherein the resource files comprise video resource files and special effects resource files, and playing the first draft resource on the playing interface comprises: playing an original video and special effects added in the original video, wherein the original video is acquired through the video resource files, the special effects in the original video are acquired through the special effects resource files, and the special effects resource files and the video resource files are stored independently of each other.

Wang discloses in [0009] "provided a video editing method, comprising: receiving a video to be edited, wherein the video to be edited comprises at least two video streams and timeline information, and the timeline information indicates a corresponding relationship between receiving time of a switching operation on the at least two video streams and the switching operation; and editing the video to be edited based on the timeline information and the at least two video streams." And in [0059] "Editing the video to be edited includes: modifying a playing effect identifier associated with the time period to obtain a video stream corresponding to a playing effect… if a certain playing effect identifier is moved from a first time period to a second time period, if the video stream is output, the video stream corresponding to the second time period is output according to the playing effect." (emphasis added)

Claim 9. The rejection of the method of claim 3 is incorporated, wherein loading the code files and the configuration files to the first preset editor instance comprises:

Wang does not explicitly disclose parsing the code files and the configuration files and transforming data structures, respectively, so as to obtain target data adapted to the first preset editor instance, and loading the target data to a memory. However, Deng, in an analogous art, teaches in [0056] "the video data corresponding to the video preview page is parsed to obtain preview video frames… the video data of the video preview page may be parsed based on the video player to obtain the preview video frames." (emphasis added)

Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Wang with the teaching of Deng because "for the same video, the playing progress of the video preview page is consecutive with that of the target page, effectively solving the problem of video discontinuity when switching from the preview page to the play page in the conventional technology, and improving the user experience." Deng [0017].

As to acquiring the target data from the memory based on the first preset editor instance, Wang discloses in [0052] "performing preset processing on the first video stream and the second video stream includes: performing editing on the first video stream and the second video stream. In some embodiments, editing a video to be edited includes: resetting presentation modes of the at least two video streams." (emphasis added)

Claims 10 and 15. The rejection of the method of claim 1 is incorporated, wherein an edit control is displayed on the playing interface, and the method further comprises: displaying an editing interface matched with the editing control in response to a triggering on the editing control; and editing a draft resource played on the playing interface in response to a triggering on the editing interface.

Wang discloses in [0029] "the video recording method is improved by using the cameras on two sides of the terminal for video recording, so that more flexible choices may be provided for video presentation and editing. By obtaining two video streams, more flexible choices may be provided for video presentation and editing in the later stage, so that users can implement various editing ideas." And in [0051] "when users watch (e.g., play or preview) the synthetic video and picture-in-picture video, since the timeline is displayed in a manner of segments, the current video progress may be controlled by operating (e.g., clicking, dragging, etc.) the timeline, and the effect of quickly adjusting and selecting the watched video progress may also be achieved by previous-segment and next-segment options. For example, the user may click the position of the corresponding timeline to make the video quickly jump to be in the video progress at the corresponding time point." (emphasis added)

Claim 11. The rejection of the method of claim 1 is incorporated, wherein a posting control is displayed on the playing interface, and the method further comprises: in response to a triggering on the posting control, posting a draft resource played on the playing interface or switching to a video posting interface.

Wang discloses in [0048] "after video recording is completed, the obtained first video stream 21 and/or second video stream 22 may be exported or shared to obtain an exported video which may include the first video stream 21, the second video stream 22, a synthetic video with the first video stream 21 and the second video stream 22 or a picture-in-picture video with the first video stream 21 and the second video stream 22." And in [0090] "performing preset processing on the first video stream and the second video stream comprises: exporting the first video stream and/or the second video stream and the timeline information to obtain an exported video." (emphasis added)

Claim 12. The claim is directed toward a resource playing apparatus to implement the method of claim 1, and is therefore rejected similarly to claim 1.

Claim 13. The claim is directed toward an electronic device for implementing the method of claim 1, and is therefore rejected similarly to claim 1.
Wang further teaches one or more processors; and a storage apparatus for storing one or more programs, in [0010] “provided a terminal, comprising: at least one memory and at least one processor; wherein the memory is used for storing program codes, and the processor is used for calling the program codes stored in the memory to execute the video processing method above.” (emphasis added).

Claim 14. The claim is directed toward a computer-readable storage medium for implementing the method of claim 1, therefore, the claim is similarly rejected as claim 1.

Claim 15. The rejection of the method of claim 2 is incorporated, wherein an edit control is displayed on the playing interface, and the method further comprises: displaying an editing interface matched with the editing control in response to a triggering on the editing control; and editing a draft resource played on the playing interface in response to a triggering on the editing interface.

Wang discloses in [0029] “the video recording method is improved by using the cameras on two sides of the terminal for video recording, so that more flexible choices may be provided for video presentation and editing. By obtaining two video streams, more flexible choices may be provided for video presentation and editing in the later stage, so that users can implement various editing ideas.” And in [0051] “when users watch (e.g., play or preview) the synthetic video and picture-in-picture video, since the timeline is displayed in a manner of segments, the current video progress may be controlled by operating (e.g., clicking, dragging, etc.) the timeline, and the effect of quickly adjusting and selecting the watched video progress may also be achieved by previous-segment and next-segment options. For example, the user may click the position of the corresponding timeline to make the video quickly jump to be in the video progress at the corresponding time point.” (emphasis added).

Claim 16.
The rejection of the method of claim 3 is incorporated, wherein an edit control is displayed on the playing interface, and the method further comprises: displaying an editing interface matched with the editing control in response to a triggering on the editing control; and editing a draft resource played on the playing interface in response to a triggering on the editing interface.

Wang discloses in [0029] “the video recording method is improved by using the cameras on two sides of the terminal for video recording, so that more flexible choices may be provided for video presentation and editing. By obtaining two video streams, more flexible choices may be provided for video presentation and editing in the later stage, so that users can implement various editing ideas.” And in [0051] “when users watch (e.g., play or preview) the synthetic video and picture-in-picture video, since the timeline is displayed in a manner of segments, the current video progress may be controlled by operating (e.g., clicking, dragging, etc.) the timeline, and the effect of quickly adjusting and selecting the watched video progress may also be achieved by previous-segment and next-segment options. For example, the user may click the position of the corresponding timeline to make the video quickly jump to be in the video progress at the corresponding time point.” (emphasis added).

Claim 17. The rejection of the method of claim 4 is incorporated, wherein an edit control is displayed on the playing interface, and the method further comprises: displaying an editing interface matched with the editing control in response to a triggering on the editing control; and editing a draft resource played on the playing interface in response to a triggering on the editing interface.

Wang discloses in [0029] “the video recording method is improved by using the cameras on two sides of the terminal for video recording, so that more flexible choices may be provided for video presentation and editing.
By obtaining two video streams, more flexible choices may be provided for video presentation and editing in the later stage, so that users can implement various editing ideas.” And in [0051] “when users watch (e.g., play or preview) the synthetic video and picture-in-picture video, since the timeline is displayed in a manner of segments, the current video progress may be controlled by operating (e.g., clicking, dragging, etc.) the timeline, and the effect of quickly adjusting and selecting the watched video progress may also be achieved by previous-segment and next-segment options. For example, the user may click the position of the corresponding timeline to make the video quickly jump to be in the video progress at the corresponding time point.” (emphasis added).

Claim 18. The rejection of the method of claim 2 is incorporated, wherein a posting control is displayed on the playing interface, and the method further comprises: in response to a triggering on the posting control, posting a draft resource played on the playing interface or switching to a video posting interface.

Wang discloses in [0048] “after video recording is completed, the obtained first video stream 21 and/or second video stream 22 may be exported or shared to obtain an exported video which may include the first video stream 21, the second video stream 22, a synthetic video with the first video stream 21 and the second video stream 22 or a picture-in-picture video with the first video stream 21 and the second video stream 22.” And in [0090] “performing preset processing on the first video stream and the second video stream comprises: exporting the first video stream and/or the second video stream and the timeline information to obtain an exported video.” (emphasis added).

Claim 19.
The rejection of the method of claim 3 is incorporated, wherein a posting control is displayed on the playing interface, and the method further comprises: in response to a triggering on the posting control, posting a draft resource played on the playing interface or switching to a video posting interface.

Wang discloses in [0048] “after video recording is completed, the obtained first video stream 21 and/or second video stream 22 may be exported or shared to obtain an exported video which may include the first video stream 21, the second video stream 22, a synthetic video with the first video stream 21 and the second video stream 22 or a picture-in-picture video with the first video stream 21 and the second video stream 22.” And in [0090] “performing preset processing on the first video stream and the second video stream comprises: exporting the first video stream and/or the second video stream and the timeline information to obtain an exported video.” (emphasis added).

Claim 20. The rejection of the method of claim 4 is incorporated, wherein a posting control is displayed on the playing interface, and the method further comprises: in response to a triggering on the posting control, posting a draft resource played on the playing interface or switching to a video posting interface.

Wang discloses in [0048] “after video recording is completed, the obtained first video stream 21 and/or second video stream 22 may be exported or shared to obtain an exported video which may include the first video stream 21, the second video stream 22, a synthetic video with the first video stream 21 and the second video stream 22 or a picture-in-picture video with the first video stream 21 and the second video stream 22.” And in [0090] “performing preset processing on the first video stream and the second video stream comprises: exporting the first video stream and/or the second video stream and the timeline information to obtain an exported video.” (emphasis added).

Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Wang, Deng, and Toussi in view of Zhou et al. (US 2024/0087608, filed 1/29/2021).

Claim 21. The rejection of the electronic device of claim 13 is incorporated, wherein, in response to the triggering for the first draft resource identification, displaying the first draft resource on the playing interface, comprises:

Wang does not explicitly disclose acquiring the resource files, the code files, and the configuration files corresponding to the first draft resource in response to the triggering for the first draft resource identification. However, Zhou, in an analogous art, teaches in [0031] “the user can perform various operation processing such as adding and editing on the to-be-processed video at the web front end, for example, adding, deleting or modifying (cutting and moving the position) on an audio or a video, adding, deleting or modifying of a sticker, adding, deleting or modifying of a character, a special effect or the like, to obtain the operation information… the operation information includes video acquisition information and video editing information, etc. Specifically, the video acquisition information includes a link address of the to-be-processed video, and the video editing information includes parameters about adding, deleting, modifying, etc. Various video processing parameters in a processing process are recorded into a draft. In an implementation, the draft is in an object notation (JavaScript Object Notation, JSON for short) string format.” (emphasis added)

Examiner note: the user editing operation on the to-be-processed video may be a triggering event. The resource file may be the to-be-processed video, the code file may be the draft in the form of a JSON file, and the configuration may be the adding, deleting, or modifying of a character, special effect or the like that a background server obtains as operation information.
generating a video file corresponding to the first draft resource based on the resource files, the code files, and the configuration files;

Zhou further teaches in [0039-0040] “after the draft is acquired, the server will perform processing on the to-be-processed video according to the draft, and perform video synthesis on the to-be-processed video subject to the processing to obtain a video file… the server will generate a link address of the video file and send the link address to the web front end… the web front end receives the link address returned by the server which is an address of the video file obtained after the video synthesis, receives a download request of a user for the link address, and downloads the video file according to the download request.” (emphasis added)

Examiner note: the server performs (generates) processing on the to-be-processed video, utilizing the operation information, to obtain a video file.

loading the video file to a memory; and creating a player instance to play the first draft resource on the playing interface based on the video file by the player instance.

Zhou further teaches in [0041] “after the video file subject to the video synthesis is obtained, the server will generate the corresponding link address, and send the link address to the web front end. After the link address is received, the web front end will display it at the web front end. When a user clicks the link address for downloading, the user will get the corresponding synthesized video.” (emphasis added)

Examiner note: the web front end (client side) receives a link to the server-generated video file, and the user downloads it (loads the video file to a memory) by clicking on the link to display the edited video file as a draft video file that the user may share with other users if desired.
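The draft mechanism Zhou [0031] maps onto the claim — editing operations recorded as a JSON string that the web front end hands to the server for synthesis — can be sketched as follows. Every key and value here is an illustrative assumption, not Zhou's actual schema; only the shape (a link address plus a list of add/delete/modify operations, serialized as JSON) comes from the quoted passage.

```python
import json

# Hypothetical draft per Zhou [0031]: video acquisition info (a link
# address) plus video editing info (add/delete/modify operations).
# All field names below are illustrative assumptions.
draft = {
    "video_url": "https://example.com/raw/clip-123.mp4",
    "edits": [
        {"op": "add",    "type": "sticker", "start_ms": 0,   "end_ms": 2000},
        {"op": "modify", "type": "audio",   "action": "cut", "start_ms": 500},
        {"op": "delete", "type": "text",    "target_id": "t1"},
    ],
}

# Zhou records the draft as a JSON string; the server parses it back
# before synthesizing the to-be-processed video into a final file.
draft_json = json.dumps(draft)
restored = json.loads(draft_json)
print(len(restored["edits"]), "recorded operations")  # → 3 recorded operations
```

The round trip matters for the examiner's mapping: the JSON string is the "code file" the claim recites, while the individual operations stand in for the "configuration files."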
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Wang with the teaching of Zhou because “In order to improve a user's experience of watching a video, video editing software is usually used to perform multiple kinds of edit processing on the video, such as adding an audio, an image, a special effect, and synthesis processing is performed on the video before uploading the video, so that the effect of edit processing on the video can be reproduced when playing.” Zhou [0003].

Response to Arguments

Applicant’s arguments with respect to claims 1, 13, and 14 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Argument: “Applicant respectfully asserts that the cited references Wang and Deng do not disclose or suggest the claimed invention, specifically in light of amended claims 1, 13, and 14… the cited reference Wang at least fails to disclose or suggest ‘switching the first draft resource to a second draft resource on the playing interface in response to a first preset triggering on the playing interface’ and ‘the first draft resource and the second draft resource are video clips stored in a collection of drafts of a video application, and draft resources in the collection are drafts to be posted,’ as recited in amended Claim 1 of the present application.”

Response: Wang teaches in [0035 and 0048] “It should be understood that not only may the audio file obtained during video recording be synthesized with the corresponding video file, but also other audio files and the captured video file may be synthesized and edited, etc. For example, other audio files include audio streams obtained by audio recording equipment, other audio files stored locally or audio files obtained from the network, etc.
Therefore, more flexible choices may be further provided for video presentation and editing in the later stage, so that users can implement various editing ideas… after video recording is completed, the obtained first video stream 21 and/or second video stream 22 may be exported or shared to obtain an exported video which may include the first video stream 21, the second video stream 22, a synthetic video with the first video stream 21 and the second video stream 22 or a picture-in-picture video with the first video stream 21 and the second video stream 22.”

It is clear that during video recording, video clips may be recorded (stored locally). Then, at a later stage, the video clips obtained from the captured video (the drafts) may be edited by a user, who selects the desired video file as the generated final video draft to be shared or exported.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See PTO-892.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
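The reply-period arithmetic recited above comes down to calendar-month addition from the mailing date. A minimal sketch, using this action's Mar 24, 2026 mailing date from the timeline below; the `add_months` helper is an assumption (day clamped to the end of shorter target months) and the dates it produces are simple calendar math, not a docketing calculation.

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Add calendar months, clamping the day for shorter target months."""
    idx = d.month - 1 + months
    year, month = d.year + idx // 12, idx % 12 + 1
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    days = [31, 29 if leap else 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    return date(year, month, min(d.day, days[month - 1]))

mailing = date(2026, 3, 24)             # mailing date of this final action
ssp = add_months(mailing, 3)            # shortened statutory period: THREE MONTHS
statutory_max = add_months(mailing, 6)  # absolute cap: SIX MONTHS
print(ssp, statutory_max)               # 2026-06-24 2026-09-24
```

The clamping branch only matters for month-end mailing dates (e.g. a Jan 31 action has a three-month period ending Apr 30); here both results land on the 24th.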
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AHAMED I NAZAR whose telephone number is (571) 270-3174. The examiner can normally be reached 10 am to 7 pm Mon-Fri.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Stephen Hong, can be reached at 571-272-4124. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AHAMED I NAZAR/
Examiner, Art Unit 2178
3/23/2026

/STEPHEN S HONG/
Supervisory Patent Examiner, Art Unit 2178

Prosecution Timeline

Dec 20, 2023
Application Filed
Sep 03, 2025
Non-Final Rejection — §103
Dec 10, 2025
Response Filed
Mar 24, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12564342
METHODS, SYSTEMS, AND DEVICES FOR THE DIAGNOSIS OF BEHAVIORAL DISORDERS, DEVELOPMENTAL DELAYS, AND NEUROLOGIC IMPAIRMENTS
2y 5m to grant · Granted Mar 03, 2026
Patent 12548333
DYNAMIC NETWORK QUANTIZATION FOR EFFICIENT VIDEO INFERENCE
2y 5m to grant · Granted Feb 10, 2026
Patent 12549503
INFORMATION INTERACTION METHOD AND APPARATUS, AND ELECTRONIC DEVICE
2y 5m to grant · Granted Feb 10, 2026
Patent 12539042
Multi-Modal Imaging System and Method Therefor
2y 5m to grant · Granted Feb 03, 2026
Patent 12541546
LOSSLESS SUMMARIZATION
2y 5m to grant · Granted Feb 03, 2026
Based on this examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 53%
With Interview (+35.1%): 88%
Median Time to Grant: 3y 11m
PTA Risk: Moderate
Based on 378 resolved cases by this examiner. Grant probability derived from career allow rate.
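The headline projections can be reproduced from the career figures shown on this page. Treating the +35.1% interview lift as percentage points added to the rounded base rate is an assumption about how the page arrives at 88%; the numbers themselves (202 granted of 378 resolved) are taken from the examiner-intelligence panel above.

```python
# Career allow rate: 202 granted of 378 resolved cases (figures from this page).
granted, resolved = 202, 378
base_rate_pct = round(granted / resolved * 100)  # 53.4% -> displayed as 53%

# Assumed additive lift: 53% + 35.1 percentage points = 88.1% -> displayed as 88%.
interview_lift_pp = 35.1
with_interview_pct = base_rate_pct + interview_lift_pp
print(base_rate_pct, round(with_interview_pct))  # → 53 88
```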
