Prosecution Insights
Last updated: April 19, 2026
Application No. 18/929,405

VIDEO PROCESSING METHOD, APPARATUS, DEVICE AND STORAGE MEDIUM FOR PLAYING VIDEO IN AUDIO COVER DISPLAY REGION

Status: Final Rejection (§103)
Filed: Oct 28, 2024
Examiner: KLICOS, NICHOLAS GEORGE
Art Unit: 2118
Tech Center: 2100 — Computer Architecture & Software
Assignee: BEIJING ZITIAO NETWORK TECHNOLOGY CO., LTD.
OA Round: 4 (Final)

Grant Probability: 57% (Moderate)
Expected OA Rounds: 5-6
Estimated Time to Grant: 3y 6m
Grant Probability With Interview: 87%

Examiner Intelligence

Career Allow Rate: 57% (205 granted / 361 resolved; +1.8% vs TC average)
Interview Lift: +30.2% (strong; allow rate of resolved cases with an interview vs. without)
Average Prosecution: 3y 6m; 24 applications currently pending
Career History: 385 total applications across all art units
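The headline figures above are simple ratios of the career counts. A minimal sketch of the assumed arithmetic (the dashboard's actual model is not disclosed; function and variable names are illustrative):

```typescript
// Assumed derivation of the dashboard figures, rounded to one decimal place.
// Not the vendor's actual model -- just the ratio the displayed numbers imply.
function allowRatePct(granted: number, resolved: number): number {
  return Math.round((granted / resolved) * 1000) / 10;
}

const careerRate = allowRatePct(205, 361);   // 56.8, displayed as 57%
const withInterview = careerRate + 30.2;     // lift in percentage points -> ~87%
```

Under this reading, the 87% "with interview" figure is the career rate plus the +30.2-point interview lift.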

Statute-Specific Performance

§101: 11.9% (-28.1% vs TC avg)
§103: 49.0% (+9.0% vs TC avg)
§102: 14.0% (-26.0% vs TC avg)
§112: 19.5% (-20.5% vs TC avg)

Tech Center averages are estimates. Based on career data from 361 resolved cases.

Office Action (§103)
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This Action is FINAL and is in response to the claims filed January 23, 2026. Claims 1, 2, 4-13, and 15-20 are currently pending, of which claims 1, 10, and 12 are currently amended. Claims 3 and 14 were previously cancelled.

Response to Arguments

Rejections Under 35 USC §112: Applicant has amended the claims at issue to now positively recite the controls and how they are grouped. Therefore, the previous rejections have been withdrawn.

Prior Art Rejections: Applicant's arguments regarding the previously cited art have been fully considered and are not persuasive. Specifically, Applicant has amended the claims to recite various controls related to the media file being played and argues these features are not taught by the previously cited art. See Remarks 9. Merely tacking on incredibly common control features does not make the invention novel. Examiner further notes that no structure is given to the "timer" control as now claimed. Applicant's disclosure discusses the speed control generally and the existence of a comment control, at an even higher level. The timer control is never explained. Therefore, it is subject to the broadest reasonable interpretation of the claim language, which in this case could be anything and everything related to a time or clock or timing in the media file. Nevertheless, Checkley has been cited to teach the timer controls (in addition to the other controls already taught by Willis). Willis explicitly discloses progress bars when playing back media. See Willis Fig. 3D. Checkley discloses a progress bar that can be displayed, with "a selectable indicator showing a current portion of the video being presented. The indicator can be dragged to a particular location along the scrubber/progress bar to select a particular portion of the video to navigate to and present." See Checkley paras. [0003] and [0036]-[0037].

Additionally, Applicant argues that Checkley's full screen does not hide the three specific controls. See Remarks 9-10. Examiner respectfully disagrees. As discussed above, Willis and Checkley disclose the various controls. Checkley merely switches between a full-screen mode and a non-full-screen mode that would explicitly and obviously hide the various controls of Willis and Checkley. See Willis Figs. 2B and 11, and paras. [0108], [0193], and [0197]. Checkley then further teaches the full-screen mode. Checkley explicitly teaches that controls can be exposed in the full-screen mode in response to a tap, making obvious that the controls are otherwise not exposed. Therefore, it would be obvious to hide the controls of Willis in response to the full-screen mode of Checkley. Moreover, Checkley explicitly teaches navigation controls that could switch back to the non-full screen of Willis. See Checkley para. [0037]. It is for at least these reasons, and the reasons cited below, that the claims remain rejected in this Action.

Examiner's Note

The prior art rejections below cite particular paragraphs, columns, and/or line numbers in the references for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art.

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-6, 8-17, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Willis et al. (U.S. Publication No. 2013/0031216; hereinafter, "Willis"), and further in view of Checkley et al. (U.S. Publication No. 2015/0370402; hereinafter, "Checkley") and Kearby et al. (U.S. Patent 8,010,366 B1; hereinafter, "Kearby").

As per claim 1, Willis teaches a video processing method, comprising: switching, in response to a preset trigger operation acting on a video playing page of a target video, to an audio playing page from the video playing page, wherein the audio playing page has an audio cover display region as a video playing window; playing an audio file of the target video on the audio playing page (See Willis Fig. 2B and para. [0108]: "allow the user to toggle between different formats for the media stream during playback. For example, as shown in FIG. 2B, a user can toggle between a song and the video for a song, which includes both audio and video."); and playing, based on a playing progress of the audio file, the target video in the audio cover display region on the audio playing page (See Willis para. [0120]: currently playing track information "may comprise a title, artist name, image of the artist or album cover, icon, genre, or other information.").

However, while Willis returns to an image in the audio cover display region, Willis does not explicitly teach presenting a video picture in the target video in the audio cover display region. Checkley teaches that a video picture can be presented in place of the audio cover of Willis (See Checkley para. [0064]: thumbnail of video "can be generated from a frame associated with that second of video". This would apply to the album cover region of Willis).

Furthermore, while Willis teaches the method further comprising: entering, in response to a trigger operation on the audio cover display region, a video playing [pure] mode, and playing the target video (See Willis Fig. 2B and para. [0108], quoted above), Willis does not explicitly teach a video playing pure mode and that the target video is played based on the video playing pure mode.

Willis also teaches function controls such as a speed playing manipulation control, a timer [control] and a comment control on the audio playing page when playing the target video (See Willis Figs. 2B and 11, and paras. [0108], [0193], and [0197]: interface 250 includes multiple control options, including the traditional play button, and an option to skip or rate the media. Furthermore, there is a comment/chat control button 254 associated with the media).

Furthermore, while Willis teaches a progress bar with timestamps for the media playback (See Willis Fig. 3D), Willis does not explicitly teach that a user can control the timer in said playback. Checkley teaches that the timer in Willis can be a timer control (See Checkley paras. [0003] and [0036]-[0037]: progress bar 306 can be displayed, with "a selectable indicator showing a current portion of the video being presented. The indicator can be dragged to a particular location along the scrubber/progress bar to select a particular portion of the video to navigate to and present").

Additionally, Checkley teaches a video playing pure mode, as well as playing the target video based on the video playing pure mode, wherein the video playing pure mode is configured to: hide [a speed playing manipulation control, a timer control and a comment control on the audio playing page when playing the target video], and display a return control and an audio playing manipulation progress bar control on the video playing page (See Checkley para. [0037]: full-screen mode to play videos, where in non-full-screen mode, other controls can be exposed. This includes navigation controls that could switch back to the non-full screen of Willis. Therefore, in full-screen mode, those controls are not exposed and thus hidden).

It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to combine, with a reasonable expectation of success, the video/audio pages and pictures of Willis with the video thumbnails and full-screen mode of Checkley. One would have been motivated to combine these references because both references disclose media content playback with images associated with videos, and Checkley enhances the user experience by allowing the user to easily and seamlessly view and track the progress of the videos of Willis.

Moreover, while Willis/Checkley allows for updated playback speeds and frames related to the video (See Checkley Figs. 6A-6C and para. [0057]), Willis/Checkley does not teach or suggest synchronizing a playing speed of the audio file with a playing speed of the video. Kearby teaches wherein a playing progress of the target video in the audio cover display region is synchronized with the playing progress of the audio file, and in response to an adjustment operation on a playing speed of the audio file, a playing speed of the target video in the audio cover display region is synchronously updated (See Kearby Fig. 4 and col. 7:47-60: "In accordance with the user's indication of a desired playback speed by use of slider 416, audio conditioning module 316 uses time compressor/decompressor 326 to adjust the rate of playback of the subject audiovisual content. Since, in video with sound, the video and sound portions are synchronized, audiovisual player 308 is capable of adjusting playback rates of the video portion to match the playback rate of the sound portion". Therefore, the frames/thumbnails of Willis/Checkley are being updated as the content is being played, and will update in a synchronized fashion when the playback rate is adjusted using the UI slider of Kearby). It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to combine, with a reasonable expectation of success, the video/audio playback and thumbnails/frames of Willis/Checkley with the playback speeds of Kearby.
One would have been motivated to combine these references because both references disclose media content playback controls with a separate video playback region, and Kearby's tools allow for a "significantly enhance[d]…listening experience of audiovisual content" (See Kearby col. 8:43-46).

As per claim 2, Willis/Checkley/Kearby teaches the method according to claim 1. However, while Willis updates the playing progress, as well as scrubber/progress bars (See Willis Fig. 3D and para. [0024]), Willis does not explicitly teach updating the video progress based on an adjustment to the audio file progress. Checkley further teaches updating, in response to an adjustment operation on the playing progress of the audio file, the target video played in the audio cover display region (See Checkley para. [0064]: thumbnail of video "can be generated from a frame associated with that second of video". Therefore, as the audio progresses, a new frame from that particular second in the video would be displayed). It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to combine Willis with the teachings of Checkley for at least the same reasons as discussed above in claim 1.

As per claim 3, Willis/Checkley/Kearby teaches the method according to claim 1. However, while Willis teaches entering, in response to a trigger operation on the audio cover display region, a video playing [pure] mode, and playing the target video (See Willis Fig. 2B and para. [0108], quoted in claim 1), Willis does not explicitly teach a video playing pure mode and that the target video is played based on the video playing pure mode. Checkley teaches a video playing pure mode, as well as playing the target video based on the video playing pure mode, wherein the video playing pure mode is used to hide displaying of a preset video playing functional control (See Checkley para. [0037]: full-screen mode to play videos, where in non-full-screen mode, other controls can be exposed. Therefore, in full-screen mode, those controls are not exposed and thus hidden). It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to combine Willis with the teachings of Checkley for at least the same reasons as discussed above in claim 1.

As per claim 4, Willis/Checkley/Kearby further teaches the method according to claim 1. Willis further teaches exiting, in response to a preset return operation acting on the video playing [pure] mode, the video playing pure mode, and returning to the audio playing page; playing, based on a playing progress of the target video in the video playing [pure] mode, the audio file of the target video on the audio playing page (See Willis paras. [0024] and [0108]: "the system may dynamically deliver video data at a starting playback point responsive to a current playback of the audio" and "The graphical indication 260 may also allow the user to toggle between different formats for the media stream during playback." Therefore, if the user is in the video mode, they can toggle to the audio mode and vice versa). However, while Willis teaches toggling between a video mode and an audio-only mode, Willis does not explicitly teach the video playing pure mode. Checkley teaches the video playing pure mode (See Checkley paras. [0032] and [0037]: full-screen mode for video playback). It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to combine Willis with the teachings of Checkley for at least the same reasons as discussed above in claim 1.

As per claim 5, Willis/Checkley/Kearby further teaches the method according to claim 1, further comprising: determining, in response to a preset switching operation acting on the audio playing page, a target switching audio file based on a video information stream corresponding to the target video, wherein the target switching audio file is an audio file of a target switching video corresponding to the target video; and playing the target switching audio file on the audio playing page (See Willis Fig. 2B and para. [0108], quoted in claim 1. Therefore, the user can go from video to audio and/or audio to video).

As per claim 6, Willis/Checkley/Kearby further teaches the method according to claim 1, further comprising: switching, in response to a preset return operation acting on the audio playing page, back to the video playing page from the audio playing page, and playing the target video on the video playing page based on the playing progress of the audio file (See Willis Fig. 2B and paras. [0024] and [0108]. Therefore, the user can go from video to audio and/or audio to video, where "the system may dynamically deliver video data at a starting playback point responsive to a current playback of the audio". This interaction with the toggle is a preset button, and thus, a return operation).
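The playback model described in claims 1-6 above — one clock whose position and rate drive both the audio file and the video in the cover display region, so a speed adjustment (the Kearby slider) and a mode toggle (Willis's paragraph [0024]) both resume from the same media position — can be sketched as follows. This is an editor's illustration only; all names are hypothetical and nothing is drawn from the cited references' actual implementations:

```typescript
// Illustrative-only model: one shared clock drives the audio stream, the
// cover-region video, and the page toggle, so rate changes and mode switches
// stay in sync. None of these names come from the cited references.
type Mode = "video" | "audio";

class SharedPlaybackClock {
  private positionSec = 0;
  private rate = 1.0;
  mode: Mode = "video";

  // Advance media time: `elapsedRealSec` wall-clock seconds at the current rate.
  tick(elapsedRealSec: number): void {
    this.positionSec += elapsedRealSec * this.rate;
  }

  // A single adjustment point (e.g. a speed slider) affects both streams.
  setRate(rate: number): void {
    this.rate = rate;
  }

  // Toggling pages keeps the position, so the other format resumes
  // "at a starting playback point responsive to a current playback".
  toggleMode(): Mode {
    this.mode = this.mode === "video" ? "audio" : "video";
    return this.mode;
  }

  get position(): number {
    return this.positionSec;
  }
}

const clock = new SharedPlaybackClock();
clock.tick(10);      // 10 real seconds at 1.0x -> media position 10 s
clock.setRate(2.0);  // user drags the speed control
clock.tick(5);       // 5 real seconds at 2.0x -> +10 s of media -> 20 s
clock.toggleMode();  // switch to the audio page; position is unchanged
```

Because both renderers would seek to `clock.position`, adjusting the audio speed synchronously updates the video in the cover region — the behavior the Kearby citation is read to supply.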
As per claim 8, Willis/Checkley/Kearby teaches the method according to claim 1. Willis further teaches wherein the switching, in response to a preset trigger operation acting on a video playing page of a target video, to an audio playing page from the video playing page, comprises: switching, [in response to a long press operation on an audio playing mode switching control] on the video playing page of the target video, to the audio playing page from the video playing page (See Willis paras. [0024] and [0108], quoted in claim 4. Therefore, if the user is in the video mode, they can toggle to the audio mode and vice versa). However, while Willis teaches toggling between a video mode and an audio-only mode, Willis does not explicitly teach long-press inputs. Checkley teaches in response to a long press operation on an audio playing mode switching control (See Checkley Figs. 5A, 6A, and 7A and paras. [0044] and [0068]: press-and-hold gesture that changes from the full-screen video mode. This also notes the progress time of the playback. This gesture is being applied to the toggling between the audio and video streams of Willis). It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to combine Willis with the teachings of Checkley for at least the same reasons as discussed above in claim 1. Furthermore, introducing a variety of gestures allows for more complex and intuitive control on touch-screen devices.

As per claim 9, Willis/Checkley/Kearby further teaches the method according to claim 1, wherein the target video belongs to a first video collection, and the method further comprises: determining, in response to a preset collection video switching operation acting on the audio playing page, a target switching collection video based on the first video collection; and playing an audio file of the target switching collection video on the audio playing page (See Willis Fig. 2B and paras. [0108] and [0116]: switching between media collections in what is essentially a playlist or station, such as the ability to identify the currently playing media item as well as other songs or items of media in the queue).

As per claims 10 and 11, the claims are directed to a storage medium that implements the same features as the methods of claims 1 and 2, respectively, and are therefore rejected for at least the same reasons. Furthermore, Willis/Checkley/Kearby teaches a non-transient computer-readable storage medium, comprising computer-executable instructions, wherein the computer-executable instructions, upon being executed by a computer processor, perform said method (See Willis paras. [0084] and [0255]).

As per claims 12, 13, 15-17, 19, and 20, the claims are directed to a video processing device that implements the same features as the methods of claims 1, 2, 4-6, 8, and 9, respectively, and are therefore rejected for at least the same reasons. Furthermore, Willis/Checkley/Kearby teaches a video processing device, comprising: a memory, a processor, and a computer program stored on the memory and executable by the processor, wherein the computer program, upon being executed by the processor, causes the processor to implement said method (See Willis paras. [0093]-[0094]).

Claims 7 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Willis/Checkley/Kearby as applied above, and further in view of Hinckley et al. (U.S. Publication No. 2011/0209100 A1; hereinafter, "Hinckley").

As per claim 7, Willis/Checkley/Kearby further teaches the method according to claim 1, wherein the switching, in response to a preset trigger operation acting on a video playing page of a target video, to an audio playing page from the video playing page, comprises: playing, [in response to a multi-finger pinch operation acting on the video playing page of the target video], the target video currently displayed on the video playing page, which is zoomed out along an operation direction of the multi-finger pinch operation, in the audio cover display region on the audio playing page (See Checkley paras. [0032] and [0037]: full-screen mode for video playback; Figs. 7A-7C and paras. [0068]-[0069]: gesture input to zoom out thumbnails and scroll thumbnails). However, while Willis/Checkley/Kearby teaches the video picture, Willis/Checkley/Kearby does not teach the switching being in response to a multi-finger pinch operation acting on the video playing page. Hinckley teaches a multi-finger pinch operation, where the action would cause the zooming of the video of Willis/Checkley/Kearby (See Hinckley Fig. 3 and para. [0052]: a pinch gesture can be used to condense a displayed object, such as the video playback of Willis/Checkley). It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to combine, with a reasonable expectation of success, the video/audio pages and pictures of Willis/Checkley/Kearby with the pinch gesture of Hinckley. One would have been motivated to combine these references because both references disclose interacting with media content on a touch-screen interface, and Hinckley enhances the user experience of Willis/Checkley/Kearby by making interactions more intuitive and also "allow[ing] a user to easily and quickly interact with the many functions and features of a computing device" (See Hinckley para. [0001]).
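The claim 7 behavior — the playing video shrinking toward the audio-cover region as a multi-finger pinch progresses — amounts to interpolating the video's frame between two rectangles. A minimal sketch, with all geometry and names invented for illustration (nothing here reflects the cited art's code):

```typescript
// Illustrative sketch: shrink the video frame toward the cover region as a
// pinch progresses (t = 0 full page, t = 1 fully inside the cover region).
// All coordinates and names are hypothetical.
interface Rect { x: number; y: number; w: number; h: number; }

function lerpRect(from: Rect, to: Rect, t: number): Rect {
  const k = Math.min(1, Math.max(0, t)); // clamp pinch progress to [0, 1]
  const mix = (a: number, b: number) => a + (b - a) * k;
  return { x: mix(from.x, to.x), y: mix(from.y, to.y),
           w: mix(from.w, to.w), h: mix(from.h, to.h) };
}

const fullPage: Rect = { x: 0, y: 0, w: 1080, h: 1920 };
const coverRegion: Rect = { x: 290, y: 400, w: 500, h: 500 };
const midPinch = lerpRect(fullPage, coverRegion, 0.5);
// At t = 1 the video lands exactly in the cover region and the audio page
// takes over playback at the same progress.
```

Driving `t` from the pinch gesture's scale gives the "zoomed out along an operation direction" effect the claim recites.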
As per claim 18, the claim is directed to a video processing device that implements the same features as the method of claim 7, and is therefore rejected for at least the same reasons.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Nicholas Klicos, whose telephone number is (571) 270-5889. The examiner can normally be reached Mon-Fri 9:00 AM-5:00 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Scott Baderman, can be reached at (571) 272-3644. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/NICHOLAS KLICOS/
Primary Examiner, Art Unit 2118

Prosecution Timeline

Oct 28, 2024: Application Filed
Mar 07, 2025: Non-Final Rejection — §103
Jun 13, 2025: Response Filed
Jun 26, 2025: Final Rejection — §103
Sep 02, 2025: Response after Non-Final Action
Sep 30, 2025: Request for Continued Examination
Oct 09, 2025: Response after Non-Final Action
Oct 20, 2025: Non-Final Rejection — §103
Jan 23, 2026: Response Filed
Feb 11, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12572212 — GENERATING DEVICE IDENTIFIERS AND DEVICE CONTROLS BASED ON HAND GESTURES (granted Mar 10, 2026; 2y 5m to grant)
Patent 12564430 — Computerized Process for Making a Patient-Specific Implant (granted Mar 03, 2026; 2y 5m to grant)
Patent 12563695 — ELECTRONIC DEVICE AND HEAT DISSIPATION METHOD THEREFOR (granted Feb 24, 2026; 2y 5m to grant)
Patent 12508108 — AXIAL DIRECTION AND DEPTH CHECKING GUIDE PLATE FOR IMPLANTING AND MANUFACTURE METHOD THEREOF (granted Dec 30, 2025; 2y 5m to grant)
Patent 12512697 — CONTROL PROCESS FOR LOW VOLTAGE MICROGRIDS WITH DISTRIBUTED COMMUNICATION (granted Dec 30, 2025; 2y 5m to grant)
Based on this examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 57%
With Interview: 87% (+30.2%)
Median Time to Grant: 3y 6m
PTA Risk: High

Based on 361 resolved cases by this examiner. Grant probability is derived from the career allow rate.
