Prosecution Insights
Last updated: April 19, 2026
Application No. 18/826,665

VIDEO PLAYBACK METHOD AND APPARATUS, ELECTRONIC DEVICE, AND STORAGE MEDIUM

Status: Final Rejection (§103)
Filed: Sep 06, 2024
Examiner: LIN, JASON K
Art Unit: 2425
Tech Center: 2400 — Computer Networks
Assignee: Tencent Technology (Shenzhen) Company Limited
OA Round: 2 (Final)
Grant Probability: 49% (Moderate)
OA Rounds: 3-4
To Grant: 3y 7m
With Interview: 84%

Examiner Intelligence

Career Allow Rate: 49% (221 granted / 454 resolved; -9.3% vs TC avg)
Interview Lift: +34.8% (strong; resolved cases with interview)
Avg Prosecution: 3y 7m (typical timeline); 28 currently pending
Total Applications: 482 (career history, across all art units)
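The headline figures above can be re-derived from the raw counts; a minimal sketch in Python (the variable names are mine, and the implied Tech Center baseline assumes the -9.3% delta is an additive percentage-point difference):

```python
# Re-derive the examiner's headline rates from the raw counts above.
granted = 221
resolved = 454

career_allow_rate = granted / resolved * 100
print(f"Career allow rate: {career_allow_rate:.1f}%")  # 48.7%, displayed rounded as 49%

# The card reports -9.3% vs the Tech Center average, so the implied
# TC 2400 baseline allow rate is:
tc_baseline = career_allow_rate + 9.3
print(f"Implied TC baseline: {tc_baseline:.1f}%")  # 58.0%
```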

Statute-Specific Performance

§101: 5.2% (-34.8% vs TC avg)
§103: 61.2% (+21.2% vs TC avg)
§102: 16.0% (-24.0% vs TC avg)
§112: 9.3% (-30.7% vs TC avg)
Tech Center average is an estimate • Based on career data from 454 resolved cases
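As a consistency check, each per-statute rate and its "vs TC avg" delta can be combined to back out the baseline used for comparison; a small sketch (the dict and names are mine, assuming each delta is an additive percentage-point difference):

```python
# Per-statute rejection rates and their reported deltas vs the TC average.
stats = {
    "101": (5.2, -34.8),
    "103": (61.2, +21.2),
    "102": (16.0, -24.0),
    "112": (9.3, -30.7),
}

for statute, (rate, delta) in stats.items():
    baseline = round(rate - delta, 1)
    print(f"Section {statute}: {rate}% vs implied TC baseline {baseline}%")

# All four statutes back out the same 40.0% baseline, consistent with a
# single Tech Center average estimate being used for every comparison.
```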

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This office action is responsive to application No. 18/826,665 filed on 01/16/2026. Claims 1-20 are pending and have been examined.

Response to Arguments

Applicant’s arguments with respect to claims 1-20 have been considered but are moot in view of the new ground(s) of rejection.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-4, 6-9, 11, 13-17, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Chen (CN112801004) (Please see Chen_CN112801004A_Translation), in view of Li (US 2018/0075304), and further in view of Hong et al. (US 2022/0167055).

Consider claims 1 and 20, Chen teaches a video playback method performed by a terminal device and a video playback apparatus, comprising a memory for storing instructions and at least one processor for executing the instructions (Abstract, Fig.13, Paragraph 0206-0209) to: present a video interface, the video interface being configured for displaying at least one piece of video content (Fig.1, Paragraph 0097 teaches an intelligent device capable of playing video, such as a mobile phone, tablet computer, etc. Paragraph 0100 teaches video player rendering on the display a current frame image of the video displayed on the video interface. Fig.3, Paragraph 0106 teaches video interface 10 of a video player includes a video display area 20, where the video display area 20 displays a current frame image), the video interface includes an operation management area configured for displaying at least one video clip (Paragraph 0136 teaches displaying each video clip in a preset area of the video interface. Paragraph 0137 teaches preset area of the video interface may be a page top area, a page middle area, and a page bottom area of the video interface); present at least one recognized recognition object in the operation management area of the video interface in response to an object recognition operation triggered for the at least one piece of video content, each recognition object corresponding to one video clip of the at least one video clip, each video clip of the at least one video clip being captured from the video content (Paragraph 0136 teaches displaying each video clip in a preset area of the video interface.
Paragraph 0137 teaches preset area of the video interface may be a page top area, a page middle area, and a page bottom area of the video interface. Paragraph 0138 teaches after selection of at least one face, determining a target face from faces to be selected, and performing face analysis on the target face, screening at least one video clip comprising the target face from a video corresponding to the video interface, and then displaying each video clip in a preset area of the video interface. Paragraph 0140-0142 teaches alternatively displaying each of the video clips in a preset area of the video interface in a play list mode); and play the at least one video clip in the operation management area according to a set of playback rules (Paragraph 0138 teaches it should be noted that, when displaying each video segment, a continuous frame of each video segment may be displayed, that is, each video segment is displayed in a dynamic form, or a first frame of each video segment may be displayed, that is, each video segment is displayed in a static form. Paragraph 0141 teaches alternatively displaying video clips in a playlist form where the video interface further displays a play/pause control, where users can directly click the video clip for preview play).

Chen does not explicitly teach the at least one piece of video content and the at least one video clip being displayed at the same time; a video clip having environmental background information removed and the recognition object retained.

In an analogous art, Li teaches a video clip having environmental background information removed and a recognition object retained (Abstract, Paragraph 0007, 0019 teaches separating a background and a foreground of a video, where the target object and its background in the video can be identified and the background can then be removed, taking the target out of an existing video by removing the background and creating a video clip with a transparent background).
Therefore, it would have been obvious to a person of ordinary skill in the art to modify the system of Chen to include a video clip having environmental background information removed and a recognition object retained, as taught by Li, for the advantage of providing a system and method for removing the background from a video in conventional technology (Li – Paragraph 0005), creating a video clip with a transparent background (Li – Paragraph 0019), further highlighting and bringing greater attention to the desired object.

Chen and Li do not explicitly teach the at least one piece of video content and the at least one video clip being displayed at the same time. In an analogous art, Hong teaches at least one piece of video content and the at least one video clip being displayed at the same time (Fig.3, Paragraph 0054, 0093).

Therefore, it would have been obvious to a person of ordinary skill in the art to modify the system of Chen and Li to include the at least one piece of video content and the at least one video clip being displayed at the same time, as taught by Hong, for the advantage of enabling the user to continue viewing the full video content, while being able to easily perceive thumbnail(s) pertaining to the video at a glance, allowing them to ascertain desired information, while still continuing to enjoy content.

Consider claim 13, Chen teaches a video playback method performed by a server, the method (Abstract, Paragraph 0209) comprising: receiving an object recognition request triggered for at least one piece of video content; performing an object recognition on the at least one piece of video content (Paragraph 0099 teaches a user first touch operation on a current frame image of the video interface, and marking at least one face to be selected in the current image frame. Paragraph 0100 teaches the face may be the face of an actor.
Paragraph 0101-0103 teaches responding to the first touch operation, performing face recognition on a current frame image of the video interface, and marking at least one face to be selected, wherein the at least one face to be selected can be marked in a face recognition frame mode. Paragraph 0105-0106 teaches selection of at least one face from multiple faces presented. Paragraph 0115 teaches when user watches a video, first touch operation for video interface is input, a face recognition mode is entered, where current frame image is subjected to face recognition. A second touch operation may select at least one face to be selected, to determine target face from at least one face to be selected. Paragraph 0138 teaches face analysis using a face recognition algorithm) acquiring recognition objects matching the object recognition request, the object recognition request being transmitted by a client in response to an object recognition operation triggered for the at least one piece of video content; capturing video clips with the corresponding recognition objects retained in the at least one piece of video content, each recognition object corresponding to one video clip; and transmitting the recognition objects and the corresponding video clips to the client to enable the client to present at least one recognized recognition object in an operation management area of a video interface (Paragraph 0136 teaches displaying each video clip in a preset area of the video interface. Paragraph 0137 teaches preset area of the video interface may be a page top area, a page middle area, and a page bottom area of the video interface. 
Paragraph 0138 teaches after selection of at least one face, determining a target face from faces to be selected, and performing face analysis on the target face, screening at least one video clip comprising the target face from a video corresponding to the video interface, and then displaying each video clip in a preset area of the video interface), and playing at least one video clip in the operation management area according to a preset playback rule (Paragraph 0138 teaches it should be noted that, when displaying each video segment, a continuous frame of each video segment may be displayed, that is, each video segment is displayed in a dynamic form, or a first frame of each video segment may be displayed, that is, each video segment is displayed in a static form. Paragraph 0141 teaches alternatively displaying video clips in a playlist form where the video interface further displays a play/pause control, where users can directly click the video clip for preview play), the video interface being configured for displaying the at least one piece of video content (Fig.1, Paragraph 0097 teaches an intelligent device capable of playing video, such as a mobile phone, tablet computer, etc. Paragraph 0100 teaches video player rendering on the display a current frame image of the video displayed on the video interface. Fig.3, Paragraph 0106 teaches video interface 10 of a video player includes a video display area 20, where the video display area 20 displays a current frame image).

Chen does not explicitly teach capturing video clips with environmental background information removed and corresponding recognition objects retained; and the video interface being configured for displaying the at least one piece of video content and the at least one video clip in the same time.
In an analogous art, Li teaches capturing video clips with environmental background information removed and corresponding recognition objects retained (Abstract, Paragraph 0007, 0019 teaches separating a background and a foreground of a video, where the target object and its background in the video can be identified and the background can then be removed, taking the target out of an existing video by removing the background and creating a video clip with a transparent background).

Therefore, it would have been obvious to a person of ordinary skill in the art to modify the system of Chen to include capturing video clips with environmental background information removed and corresponding recognition objects retained, as taught by Li, for the advantage of providing a system and method for removing the background from a video in conventional technology (Li – Paragraph 0005), creating a video clip with a transparent background (Li – Paragraph 0019), further highlighting and bringing greater attention to the desired object.

Chen and Li do not explicitly teach the video interface being configured for displaying the at least one piece of video content and the at least one video clip in the same time. In an analogous art, Hong teaches a video interface being configured for displaying at least one piece of video content and at least one video clip in the same time (Fig.3, Paragraph 0054, 0093).

Therefore, it would have been obvious to a person of ordinary skill in the art to modify the system of Chen and Li to include a video interface being configured for displaying at least one piece of video content and at least one video clip in the same time, as taught by Hong, for the advantage of enabling the user to continue viewing the full video content, while being able to easily perceive thumbnail(s) pertaining to the video at a glance, allowing them to ascertain desired information, while still continuing to enjoy content.
Consider claim 2, Chen, Li, and Hong teach wherein presenting the at least one recognized recognition object in the operation management area of the video interface in response to the object recognition operation triggered for the at least one piece of video content comprises: presenting, in the operation management area in response to the object recognition operation triggered for a first video content currently played on the video interface, at least one recognition object recognized from a current playback picture of the first video content (Chen - Paragraph 0136 teaches displaying each video clip in a preset area of the video interface. Paragraph 0137 teaches preset area of the video interface may be a page top area, a page middle area, and a page bottom area of the video interface. Paragraph 0138 teaches after selection of at least one face, determining a target face from faces to be selected, and performing face analysis on the target face, screening at least one video clip comprising the target face from a video corresponding to the video interface, and then displaying each video clip in a preset area of the video interface. Paragraph 0140-0142 teaches alternatively displaying each of the video clips in a preset area of the video interface in a play list mode. Paragraph 0174 teaches performing face identification on a current frame image of a video interface in response to first touch operation, and marking at least one face to be selected in the current frame image); or presenting, in the operation management area in response to the object recognition operation triggered for the video interface, at least one recognition object recognized from a second video content, each piece of the second video content being in a content library associated with the video interface and meeting an automatic recognition rule. 
Consider claim 3, Chen, Li, and Hong teach wherein the presenting, in the operation management area in response to the object recognition operation triggered for the first video content currently played on the video interface, the at least one recognition object recognized from a current playback picture of the first video content (Chen - Paragraph 0136 teaches displaying each video clip in a preset area of the video interface. Paragraph 0137 teaches preset area of the video interface may be a page top area, a page middle area, and a page bottom area of the video interface. Paragraph 0138 teaches after selection of at least one face, determining a target face from faces to be selected, and performing face analysis on the target face, screening at least one video clip comprising the target face from a video corresponding to the video interface, and then displaying each video clip in a preset area of the video interface. Paragraph 0140-0142 teaches alternatively displaying each of the video clips in a preset area of the video interface in a play list mode. Paragraph 0174 teaches performing face identification on a current frame image of a video interface in response to first touch operation, and marking at least one face to be selected in the current frame image) comprises: performing the object recognition operation in response to a preset operation performed on a target object in the current playback picture, and presenting the recognized recognition object in the operation management area; or performing the object recognition operation in response to a triggering operation on a picture recognition control at a related position of the first video content, and presenting the at least one recognition object in the current playback picture in the operation management area (Chen - Paragraph 0010-0011, 0101-0103, 0112, 0115; Paragraph 0136, 0138, 0140-0142). 
Consider claim 4, Chen, Li, and Hong teach wherein before the presenting, in the operation management area in response to the object recognition operation triggered for the video interface, the at least one recognition object recognized from a second video content, the method further comprises: presenting prompt information of a recognition result of the object recognition operation in the video interface through an incompletely expanded operation management area; and presenting, in the operation management area, the at least one recognition object recognized from the second video content comprises: presenting, in response to an expansion operation on the operation management area in an expanded operation management area, the at least one recognition object recognized from the second video content (Chen - Paragraph 0106-0111, 0115-0118).

Consider claim 6, Chen, Li, and Hong teach wherein before the presenting, in the operation management area in response to the object recognition operation triggered for the video interface, the at least one recognition object recognized from a second video content, the method further comprises: presenting a rule setting interface; and acquiring, in response to an input operation on the rule setting interface, an automatic recognition rule inputted through the rule setting interface (Chen - Paragraph 0106-0111, 0115-0118).

Consider claim 7, Chen, Li, and Hong teach wherein the method further comprises: presenting at least one playback setting option in response to a playback setting operation triggered for the operation management area; and playing, in the operation management area in response to a selection operation on a target option in the at least one playback setting option based on a playback rule corresponding to the target option, a video clip of the at least one video clip matching the playback rule (Chen – Paragraph 0141; Fig.7, Paragraph 0144-0148; Paragraph 0164-0172).
Consider claim 8, Chen, Li, and Hong teach wherein the method further comprises: performing, in response to a first specified operation triggered for at least one recognition object in the operation management area, a corresponding processing logic on a video clip corresponding to the at least one recognized recognition object, the first specified operation being at least one of a playback control operation or a content processing operation on the video clip (Chen – Paragraph 0112, 0141; Fig.7, Paragraph 0144-0148).

Consider claim 9, Chen, Li, and Hong teach wherein the performing, in response to the first specified operation triggered for the at least one recognition object in the operation management area, the corresponding processing logic on the video clip corresponding to the at least one recognized recognition object comprises: presenting, in response to management operations triggered for recognition objects in the operation management area, at least one first operation control on each of the recognition objects; and performing, in response to the first specified operation triggered for any first operation control in the at least one first operation control, the corresponding processing logic on a video clip of a recognition object corresponding to the first operation control (Chen – Paragraph 0112, 0141; Fig.7, Paragraph 0144-0148).
Consider claim 11, Chen, Li, and Hong teach wherein the method further comprises: presenting a collection interface in response to a collection viewing operation triggered based on the operation management area, the collection interface displaying at least one collected recognition object and a corresponding video clip; and performing, in response to a second specified operation triggered for any collected recognition object in the at least one collected recognition object in the collection interface, a corresponding processing logic on a video clip corresponding to the collected recognition object, the second specified operation being at least one of a playback control operation or a content processing operation on a video clip (Chen – Paragraph 0112, 0141; Fig.7, Paragraph 0144-0148).

Consider claim 14, Chen, Li, and Hong teach wherein after the acquiring and before the capturing, the method further comprises: transmitting confirmation information for recognized recognition objects to the client to enable the client to display identifier information of the recognized recognition objects in the operation management area (Chen – Paragraph 0022-0023, 0055-0056, 0130-0132).

Consider claim 15, Chen, Li, and Hong teach wherein the capturing comprises: in response to receiving a confirmation request returned by the client for one recognition object, capturing video clips with the environmental background information removed and the recognition object retained in the video content, the confirmation request being transmitted by the client in response to a confirmation operation on the recognition object (Chen – Paragraph 0209; Paragraph 0107-0109, 0111, 0117-0118, 0138; Li - Abstract, Paragraph 0007, 0019).
Consider claim 16, Chen, Li, and Hong teach wherein if the object recognition request comprises an automatic recognition rule uploaded by the client, the performing and acquiring comprise: selecting, based on the automatic recognition rule from a content library associated with the video interface, video content meeting the automatic recognition rule; and performing an object recognition on the selected video content, and acquiring a recognition object matching the automatic recognition rule (Chen - Paragraph 0106-0111, 0115-0118; Paragraph 0121-0124).

Consider claim 17, Chen, Li, and Hong teach wherein transmitting the recognition objects and the corresponding video clips to the client comprises: transmitting the recognition objects and the corresponding video clips to the client according to a specified order, the specified order being associated with the automatic recognition rule (Chen – Paragraph 0209; Paragraph 0112, 0141-0142, 0148).

Consider claim 19, Chen, Li, and Hong teach a video playback apparatus, comprising a memory for storing instructions and at least one processor for executing the instructions to perform the steps of claim 13 (Chen - Fig.13, Paragraph 0206-0209).

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Chen (CN112801004) (Please see Chen_CN112801004A_Translation), in view of Li (US 2018/0075304), in view of Hong et al. (US 2022/0167055), and further in view of Gerhard et al. (US 2007/0126741).
Consider claim 5, Chen, Li, and Hong teach wherein before presenting, in the operation management area in response to the object recognition operation triggered for the first video content currently played on the video interface, the at least one recognition object recognized from the current playback picture of the first video content, the method further comprises: pausing playback of the first video content in the video interface, and presenting identifier information of the recognized recognition objects in the current playback picture; and presenting, in response to confirmation operations on the recognition objects in the recognized at least one recognition object, the recognition objects moving to the operation management area, and displaying the recognition objects in the operation management area (Chen - Paragraph 0106-0111, 0115-0118, 0124).

Chen, Li, and Hong do not explicitly teach animation effects of the objects moving to the area. In an analogous art, Gerhard teaches animation effects of objects moving to the area (Paragraph 0037).

Therefore, it would have been obvious to a person of ordinary skill in the art to modify the system of Chen, Li, and Hong to include animation effects of objects moving to the area, as taught by Gerhard, for the advantage of providing added visual flair and indications to better draw the user’s attention to highlighted objects of importance.

Claims 10 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Chen (CN112801004) (Please see Chen_CN112801004A_Translation), in view of Li (US 2018/0075304), in view of Hong et al. (US 2022/0167055), and further in view of Cassidy et al. (US 2012/0296739).
Consider claim 10, Chen, Li, and Hong teach video clip of a recognition object; the video clip corresponding to the recognition object (Chen – Paragraph 0136-0138), but do not explicitly teach wherein if the first specified operation is a program return operation in the content processing operation, performing, in response to the first specified operation triggered for any first operation control in the at least one first operation control, the corresponding processing logic on the clip of an object corresponding to the first operation control comprises: jumping from the operation management area to the video interface based on the program return operation; and continuing to play video content to which the clip corresponding to the object corresponding to the first operation control belongs in the video interface.

In an analogous art, Cassidy teaches wherein if the specified operation is a program return operation in the content processing operation, performing, in response to a first specified operation triggered for any first operation control in at least one first operation control, corresponding processing logic on a clip of an object corresponding to the first operation control comprises: jumping from an operation management area to a video interface based on the program return operation; and continuing to play video content to which the clip corresponding to the object corresponding to the first operation control belongs in the video interface (Fig.2, Paragraph 0041).
Therefore, it would have been obvious to a person of ordinary skill in the art to modify the system of Chen, Li, and Hong to include wherein if the specified operation is a program return operation in the content processing operation, performing, in response to a first specified operation triggered for any first operation control in at least one first operation control, corresponding processing logic on a clip of an object corresponding to the first operation control comprises: jumping from an operation management area to a video interface based on the program return operation; and continuing to play video content to which the clip corresponding to the object corresponding to the first operation control belongs in the video interface, as taught by Cassidy, for the advantage of providing convenient access to desired locations within the content, enabling user(s) to jump to specific sections of content, quickly and effectively, allowing them to continue to fully consume content.

Consider claim 12, Chen, Li, and Hong teach wherein the collection interface further comprises a second operation control related to collected recognition objects (Chen – Paragraph 0112, 0141; Fig.7, Paragraph 0144-0148); and performing, in response to the second specified operation triggered for any collected recognition object in the at least one collected recognition object in the collection interface, the corresponding processing logic on the video clip corresponding to the collected recognition object comprises: the collected recognition object, the video clip corresponding to the collected recognition object (Chen – Paragraph 0136-0138; Paragraph 0112, 0141; Fig.7, Paragraph 0144-0148).
Chen, Li, and Hong do not explicitly teach the second specified operation being a program return operation; and the performing, in response to a second specified operation triggered for any object in the at least one object in the collection interface, a corresponding processing logic on a clip corresponding to the object comprise: jumping from the collection interface to a video interface in response to the program return operation triggered for the second operation control related to the object, and continuing to play video content to which the clip corresponding to the object belongs in the video interface.

In an analogous art, Cassidy teaches a second specified operation being a program return operation; and performing, in response to a second specified operation triggered for any object in at least one object in a collection interface, a corresponding processing logic on a clip corresponding to the object comprise: jumping from the collection interface to a video interface in response to the program return operation triggered for the second operation control related to the object, and continuing to play video content to which the clip corresponding to the object belongs in the video interface (Fig.2, Paragraph 0041).
Therefore, it would have been obvious to a person of ordinary skill in the art to modify the system of Chen, Li, and Hong to include a second specified operation being a program return operation; and performing, in response to a second specified operation triggered for any object in at least one object in a collection interface, a corresponding processing logic on a clip corresponding to the object comprise: jumping from the collection interface to a video interface in response to the program return operation triggered for the second operation control related to the object, and continuing to play video content to which the clip corresponding to the object belongs in the video interface, as taught by Cassidy, for the advantage of providing convenient access to desired locations within the content, enabling user(s) to jump to specific sections of content, quickly and effectively, allowing them to continue to fully consume content.

Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Chen (CN112801004) (Please see Chen_CN112801004A_Translation), in view of Li (US 2018/0075304), in view of Hong et al. (US 2022/0167055), in view of Cassidy et al. (US 2012/0296739), and further in view of Liu (CN110225369) (Please see Liu_CN110225369B_Translation).

Consider claim 18, Chen, Li, and Hong teach a specified video clip (Chen – Paragraph 0136-0138). Chen and Li do not explicitly teach where the method further includes: receiving a program return request transmitted by the client for a specified clip; recognizing a playback position associated with the specified clip based on historical request data of the specified clip; and feeding back the playback position to the client to enable the client to continue to play video content to which the specified clip belongs in the video interface based on the playback position.
In an analogous art, Cassidy teaches where the method further includes: receiving a program return request transmitted by a client for a specified clip; recognizing a playback position associated with the specified clip; and enabling the client to continue to play video content to which the specified clip belongs in the video interface based on the playback position (Fig. 2, Paragraph 0041; Paragraph 0034-0035).

Therefore, it would have been obvious to a person of ordinary skill in the art to modify the system of Chen, Li, and Hong to include where the method further includes: receiving a program return request transmitted by a client for a specified clip; recognizing a playback position associated with the specified clip; and enabling the client to continue to play video content to which the specified clip belongs in the video interface based on the playback position, as taught by Cassidy, for the advantage of providing convenient access to desired locations within the content, enabling user(s) to jump to specific sections of content, quickly and effectively, allowing them to continue to fully consume content.

Chen, Li, Hong, and Cassidy do not explicitly teach recognizing a playback position associated with the specified clip based on historical request data of the specified clip; and feeding back the playback position to the client to enable the client to continue to play video content. In an analogous art, Liu teaches recognizing a playback position associated with the specified clip based on historical request data of the specified clip; and feeding back the playback position to the client to enable the client to continue to play video content (Paragraph 0072-0077).
Therefore, it would have been obvious to a person of ordinary skill in the art to modify the system of Chen, Li, Hong, and Cassidy to include recognizing a playback position associated with the specified clip based on historical request data of the specified clip; and feeding back the playback position to the client to enable the client to continue to play video content, as taught by Liu, for the advantage of enabling the system to accurately determine the exact point of desired clips, allowing the system to play out and consume the exact point in the full content.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JASON K LIN whose telephone number is (571) 270-1446. The examiner can normally be reached on Monday-Friday 9AM-5PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Brian Pendleton, can be reached at 571-272-7527. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JASON K LIN/
Primary Examiner, Art Unit 2425

Prosecution Timeline

Sep 06, 2024
Application Filed
Sep 24, 2025
Non-Final Rejection — §103
Oct 28, 2025
Applicant Interview (Telephonic)
Oct 28, 2025
Examiner Interview Summary
Jan 16, 2026
Response Filed
Mar 17, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604047
JUST IN TIME CONTENT CONDITIONING
2y 5m to grant Granted Apr 14, 2026
Patent 12593082
JUST IN TIME CONTENT CONDITIONING
2y 5m to grant Granted Mar 31, 2026
Patent 12556760
CREDITING EXPOSURE TO MEDIA IDENTIFIED USING SOURCE FILTERING
2y 5m to grant Granted Feb 17, 2026
Patent 12548455
GROUND-BASED CONTENT CURATION PLATFORM DISTRIBUTING GEOGRAPHICALLY-RELEVANT CONTENT TO AIRCRAFT INFLIGHT ENTERTAINMENT SYSTEMS
2y 5m to grant Granted Feb 10, 2026
Patent 12537993
SMART HOME AUTOMATION USING MULTI-MODAL CONTEXTUAL INFORMATION
2y 5m to grant Granted Jan 27, 2026
Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
49%
Grant Probability
84%
With Interview (+34.8%)
3y 7m
Median Time to Grant
Moderate
PTA Risk
Based on 454 resolved cases by this examiner. Grant probability derived from career allow rate.
