Prosecution Insights
Last updated: April 19, 2026
Application No. 18/590,702

INTERACTION METHOD, SYSTEM, AND ELECTRONIC DEVICE

Status: Non-Final OA (§103)
Filed: Feb 28, 2024
Examiner: HUERTA, ALEXANDER Q
Art Unit: 2425
Tech Center: 2400 — Computer Networks
Assignee: Alibaba Singapore Holding Private Limited
OA Round: 3 (Non-Final)

Grant Probability: 68% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 6m
With Interview: 80%

Examiner Intelligence

Career Allow Rate: 68% (351 granted / 520 resolved; +9.5% vs TC avg), above average
Interview Lift: +12.8% among resolved cases with an interview (moderate lift)
Avg Prosecution: 2y 6m (typical timeline)
Career History: 536 total applications across all art units; 16 currently pending
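The headline figures in this panel are simple ratios over the examiner's resolved cases. A minimal sketch of the arithmetic, using the counts shown above; treating the with-interview figure as base rate plus the stated lift is an assumption about how the tool combines them:

```python
# Counts shown in the panel above.
granted = 351
resolved = 520

# Career allow rate: granted / resolved, displayed rounded to 68%.
allow_rate_pct = granted * 100 / resolved   # 67.5

# With-interview probability, modeled here as base rate + stated lift
# (+12.8 points); whether the tool adds points or rescales is an assumption.
interview_lift_pct = 12.8
with_interview_pct = allow_rate_pct + interview_lift_pct  # ~80.3, shown as 80%

print(round(allow_rate_pct), round(with_interview_pct))  # 68 80
```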

Statute-Specific Performance

§101: 6.0% (-34.0% vs TC avg)
§103: 54.3% (+14.3% vs TC avg)
§102: 15.5% (-24.5% vs TC avg)
§112: 11.1% (-28.9% vs TC avg)
Tech Center averages are estimates. Based on career data from 520 resolved cases.
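The per-statute deltas above are internally consistent: each rate and its delta imply the same Tech Center baseline. A quick check, assuming each delta is a plain difference from a single TC-average estimate per statute:

```python
# (allow rate after this rejection type, delta vs TC avg), from the list above.
stats = {
    "101": (6.0, -34.0),
    "103": (54.3, +14.3),
    "102": (15.5, -24.5),
    "112": (11.1, -28.9),
}

# Recover the implied TC-average estimate for each statute: rate - delta.
implied = {statute: round(rate - delta, 1) for statute, (rate, delta) in stats.items()}
print(implied)  # every statute implies the same 40.0% baseline
```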

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on December 10, 2025 has been entered.

Response to Arguments

Applicant’s arguments with respect to claims 1-3, 5-15, 17-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 6-8, 13-15, 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Kilar et al. (US Pub. 2013/0004138) in view of Fink et al. (US Pub. 2009/0300475) and in further view of Tobin et al. (US Pub. 2018/0234717), herein referenced as Kilar, Fink, and Tobin, respectively.

Regarding claim 1, Kilar discloses “A method implemented by a computing device, the method comprising: playing an audio/video…([0050], [0063], Figs. 4-6, i.e., graphical user interface 400 that may be used to play a video file); displaying an interactive control reflecting a playback progress in a playback interface of the audio/video ([0050], [0063]-[0065], [0074]-[0075], Figs. 4-6, i.e., generate a secondary window 420 presenting in response to receiving user input indicative of a user interest in adding a comment); determining, in response to an interactive operation triggered by a user … interaction information and a playback progress when the interactive operation is triggered ([0063]-[0066], [0074]-[0075], Figs. 4-6, i.e., client device enables a user to enter comments into a comment space 418 while a media player component of the user interface plays a video file, or afterwards); and sending the user's interaction data for a segment of the plurality of segments of the audio/video to a server terminal based on the interaction information and the playback progress, to allow the server to obtain interaction data triggered by different users for different segments of the audio/video.” ([0006], [0098]-[0100], Figs. 1-2, 12, i.e., transmitting the user comment data correlated to identifiers for the audio-video content and the temporal point to a computer server).

Kilar fails to explicitly disclose determining, in response to an interactive operation triggered by a user through the interactive control, interaction information and a playback progress when the interactive operation is triggered.
Fink teaches the technique of determining, in response to an interactive operation triggered by a user through the interactive control, interaction information and a playback progress when the interactive operation is triggered ([0030], [0041], Figs. 3, 5, i.e., annotation editing controls 507 allow the addition of annotations to the video in a manner similar to that of controls 302-305).

Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of determining, in response to an interactive operation triggered by a user through the interactive control, interaction information and a playback progress when the interactive operation is triggered as taught by Fink, to improve the user annotation system of Kilar for the predictable result of providing the user the convenience of quickly and directly accessing annotation editing controls.

The combination still fails to disclose the audio/video being partitioned into a plurality of segments…obtaining buffered data corresponding to the audio/video in a buffer; determining multiple buffered segments of a buffering progress bar and respective display attributes of the multiple buffered segments based at least in part on the buffered data, wherein the respective display attributes of the multiple buffered segments are related to corresponding user interaction popularities of the multiple buffered segments; displaying the multiple buffered segments on the buffering progress bar in the playback interface, wherein displaying the multiple buffered segments comprises displaying the multiple buffered segments in different colors or different color depths of a color according to the corresponding user interaction popularities of the multiple buffered segments.
Tobin teaches the technique of providing the audio/video being partitioned into a plurality of segments ([0007]-[0009], i.e., time interval segments)…obtaining buffered data corresponding to the audio/video in a buffer ([0040], [0049], [0066], Fig. 1, i.e., content can be on demand or streamed and thus is buffered); determining multiple buffered segments of a buffering progress bar and respective display attributes of the multiple buffered segments based at least in part on the buffered data, wherein the respective display attributes of the multiple buffered segments are related to corresponding user interaction popularities of the multiple buffered segments; displaying the multiple buffered segments on the buffering progress bar in the playback interface, wherein displaying the multiple buffered segments comprises displaying the multiple buffered segments in different colors or different color depths of a color according to the corresponding user interaction popularities of the multiple buffered segments ([0021]-[0022], [0060]-[0066], Figs. 5-6, i.e., “hot watch” spots are shown in different colors on the scrubber bar. The “hot watch” spots indicate areas of particular interest to viewers of the content item. The “hot watch” spots may be determined based on a variety of signals, such as editorial input, social media feedback input, voting input, usage input, and other analytical input with respect to the content item).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of providing the audio/video being partitioned into a plurality of segments…obtaining buffered data corresponding to the audio/video in a buffer; determining multiple buffered segments of a buffering progress bar and respective display attributes of the multiple buffered segments based at least in part on the buffered data, wherein the respective display attributes of the multiple buffered segments are related to corresponding user interaction popularities of the multiple buffered segments; displaying the multiple buffered segments on the buffering progress bar in the playback interface, wherein displaying the multiple buffered segments comprises displaying the multiple buffered segments in different colors or different color depths of a color according to the corresponding user interaction popularities of the multiple buffered segments as taught by Tobin, to improve the user annotation system of Kilar for the predictable result of making it easier to search for the key parts of a content item using the scrub bar ([0005]).

Regarding claim 2, Kilar discloses “displaying a progress bar and a progress indicator that moves on the progress bar for reflecting the playback progress in the playback interface of the audio/video; and displaying the interactive control to be linked with the progress indicator.” ([0063]-[0066], [0074], Figs. 4-6, i.e., a progress bar 412 may include a progress indicator 414 that automatically progresses during playing of a video file to indicate the current location temporal points (locations) of play and possible locations to leave a comment. In addition, a secondary window 420 may appear proximate to the progress bar 414 near the progress indicator 414 to indicate the place at which the comment would be placed).
Regarding claim 3, Kilar discloses “wherein displaying the interactive control to be linked to the progress indicator comprises: displaying the interactive control around the progress indicator; obtaining a moving speed and a moving direction of the progress indicator; and controlling the interactive control to move according to the moving speed and the moving direction.” ([0063]-[0066], [0074]-[0075], Figs. 4-6, i.e., secondary window 420 may appear proximate to the progress bar 414 near the progress indicator 414 to indicate the place at which the comment would be placed. An indicator 420 may be moved from one place to another along a progress bar 416, possibly locating the comment at a new location 410. The secondary window 420 may further include a time comparator 422, indicating a current location of the proposed comment relative to the entire length of the video file. Such a comparator 422 may change as the user changes (e.g., selects and drags) the location of a comment along a progress bar 416).

Regarding claim 6, Kilar discloses “sending the interaction information and the playback progress to the server terminal as the interaction data, to allow the server terminal to determine a target segment of the audio/video based on the playback progress.” ([0056], [0098]-[0100], Figs. 1-3, 12, i.e., generating linking data that includes time data defining or identifying a time point or partial portion of a video file 322 (e.g., start time, end time, or some combination thereof)).

Regarding claim 7, Kilar discloses “determining a target segment of the audio/video based on the playback progress, and sending the interaction information and a segment identifier of the target segment to the server terminal as the interaction data.” ([0006], [0098]-[0100], Figs. 1-2, 12, i.e., transmitting the user comment data correlated to identifiers for the audio-video content and the temporal point to a computer server).
Regarding claim 8, Kilar fails to explicitly disclose “displaying an animation effect corresponding to the interactive operation on the playback interface of the audio/video in response to the interactive operation triggered by the user through the interactive control.”

Fink teaches the technique of displaying an animation effect corresponding to the interactive operation on the playback interface of the audio/video in response to the interactive operation triggered by the user through the interactive control ([0029], Figs. 3, 5, i.e., various types of annotations can be used to modify standard linear video viewing, such as animated video annotations).

Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of displaying an animation effect corresponding to the interactive operation on the playback interface of the audio/video in response to the interactive operation triggered by the user through the interactive control as taught by Fink, to improve the user annotation system of Kilar for the predictable result of altering the appearance and/or behavior of an existing video by supplementing it with animated video annotations thus enhancing the viewing experience.

Regarding claim 13, Kilar discloses “One or more computer readable media storing executable instructions that, when executed by one or more processors (Figs. 1-3), cause the one or more processors to perform acts comprising: playing an audio/video… ([0050], [0063], Figs. 4-6, i.e., graphical user interface 400 that may be used to play a video file); displaying an interactive control reflecting a playback progress in a playback interface of the audio/video ([0050], [0063]-[0065], [0074]-[0075], Figs.
4-6, i.e., generate a secondary window 420 presenting in response to receiving user input indicative of a user interest in adding a comment); determining, in response to an interactive operation triggered by a user… interaction information and a playback progress when the interactive operation is triggered ([0063]-[0066], [0074]-[0075], Figs. 4-6, i.e., client device enables a user to enter comments into a comment space 418 while a media player component of the user interface plays a video file, or afterwards); and sending the user's interaction data for a segment of the plurality of segments of the audio/video to a server terminal based on the interaction information and the playback progress, to allow the server to obtain interaction data triggered by different users for different segments of the audio/video.” ([0006], [0098]-[0100], Figs. 1-2, 12, i.e., transmitting the user comment data correlated to identifiers for the audio-video content and the temporal point to a computer server).

Kilar fails to explicitly disclose determining, in response to an interactive operation triggered by a user through the interactive control, interaction information and a playback progress when the interactive operation is triggered.

Fink teaches the technique of determining, in response to an interactive operation triggered by a user through the interactive control, interaction information and a playback progress when the interactive operation is triggered ([0030], [0041], Figs. 3, 5, i.e., annotation editing controls 507 allow the addition of annotations to the video in a manner similar to that of controls 302-305).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of determining, in response to an interactive operation triggered by a user through the interactive control, interaction information and a playback progress when the interactive operation is triggered as taught by Fink, to improve the user annotation system of Kilar for the predictable result of providing the user the convenience of quickly and directly accessing annotation editing controls.

The combination still fails to disclose the audio/video being partitioned into a plurality of segments…obtaining buffered data corresponding to the audio/video in a buffer; determining multiple buffered segments of a buffering progress bar and respective display attributes of the multiple buffered segments based at least in part on the buffered data, wherein the respective display attributes of the multiple buffered segments are related to corresponding user interaction popularities of the multiple buffered segments; displaying the multiple buffered segments on the buffering progress bar in the playback interface, wherein displaying the multiple buffered segments comprises displaying the multiple buffered segments in different colors or different color depths of a color according to the corresponding user interaction popularities of the multiple buffered segments.

Tobin teaches the technique of providing the audio/video being partitioned into a plurality of segments ([0007]-[0009], i.e., time interval segments) … obtaining buffered data corresponding to the audio/video in a buffer ([0040], [0049], [0066], Fig.
1, i.e., content can be on demand or streamed and thus is buffered); determining multiple buffered segments of a buffering progress bar and respective display attributes of the multiple buffered segments based at least in part on the buffered data, wherein the respective display attributes of the multiple buffered segments are related to corresponding user interaction popularities of the multiple buffered segments; displaying the multiple buffered segments on the buffering progress bar in the playback interface, wherein displaying the multiple buffered segments comprises displaying the multiple buffered segments in different colors or different color depths of a color according to the corresponding user interaction popularities of the multiple buffered segments ([0021]-[0022], [0060]-[0066], Figs. 5-6, i.e., “hot watch” spots are shown in different colors on the scrubber bar. The “hot watch” spots indicate areas of particular interest to viewers of the content item. The “hot watch” spots may be determined based on a variety of signals, such as editorial input, social media feedback input, voting input, usage input, and other analytical input with respect to the content item).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of providing the audio/video being partitioned into a plurality of segments…obtaining buffered data corresponding to the audio/video in a buffer; determining multiple buffered segments of a buffering progress bar and respective display attributes of the multiple buffered segments based at least in part on the buffered data, wherein the respective display attributes of the multiple buffered segments are related to corresponding user interaction popularities of the multiple buffered segments; displaying the multiple buffered segments on the buffering progress bar in the playback interface, wherein displaying the multiple buffered segments comprises displaying the multiple buffered segments in different colors or different color depths of a color according to the corresponding user interaction popularities of the multiple buffered segments as taught by Tobin, to improve the user annotation system of Kilar for the predictable result of making it easier to search for the key parts of a content item using the scrub bar ([0005]).

Regarding claim 14, claim 14 is interpreted and thus rejected for the reasons set forth above in the rejection of claim 2.
Regarding claim 15, claim 15 is interpreted and thus rejected for the reasons set forth above in the rejection of claim 3.
Regarding claim 18, claim 18 is interpreted and thus rejected for the reasons set forth above in the rejection of claim 6.
Regarding claim 19, claim 19 is interpreted and thus rejected for the reasons set forth above in the rejection of claim 7.
Regarding claim 20, claim 20 is interpreted and thus rejected for the reasons set forth above in the rejection of claim 8.

Claims 5, 17 are rejected under 35 U.S.C. 103 as being unpatentable over Kilar in view of Fink, Tobin, and in further view of Kuznetsov (US Pub. 2014/0044407), herein referenced as Kuznetsov.
Regarding claim 5, the combination fails to disclose “wherein determining the multiple buffered segments of the buffering progress bar and the respective display attributes of the multiple buffered segments based on the buffered data comprises: determining a length of a buffered segment corresponding to at least one segment to be played based on a data amount of the at least one segment to be played included in the buffered data; and determining a display attribute of the buffered segment based on information included in the buffered data that represents a user interaction popularity corresponding to the at least one segment to be played.”

Kuznetsov teaches the technique of providing wherein determining the multiple buffered segments of the buffering progress bar and the respective display attributes of the multiple buffered segments based on the buffered data comprises: determining a length of a buffered segment corresponding to at least one segment to be played based on a data amount of the at least one segment to be played included in the buffered data ([0049]-[0050], i.e., video server 126 can load the video to the embedded player 134 from the start point to the end point. If the segment associated with the indicator has already been prefetched and loaded to the player's cache, then the segment is retrieved from the cache and played back immediately); and determining a display attribute of the buffered segment based on information included in the buffered data that represents a user interaction popularity corresponding to the at least one segment to be played ([0006], [0022]-[0026], [0048]-[0056], Figs. 2-4, i.e., displaying a popularity histogram alongside a time bar with timestamp indicators).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of providing wherein determining the multiple buffered segments of the buffering progress bar and the respective display attributes of the multiple buffered segments based on the buffered data comprises: determining a length of a buffered segment corresponding to at least one segment to be played based on a data amount of the at least one segment to be played included in the buffered data; and determining a display attribute of the buffered segment based on information included in the buffered data that represents a user interaction popularity corresponding to the at least one segment to be played as taught by Kuznetsov, to improve the user annotation system of Kilar for the predictable result of providing viewers a visual indication of popular portions of video for convenient and quick browsing.

Regarding claim 17, claim 17 is interpreted and thus rejected for the reasons set forth above in the rejection of claim 5.

Claims 9, 11-12 are rejected under 35 U.S.C. 103 as being unpatentable over Kuznetsov in view of Tobin.

Regarding claim 9, Kuznetsov discloses “An apparatus comprising: one or more processors; and memory storing executable instructions that ([0061]-[0062], Fig. 1), when executed by the one or more processors, cause the one or more processors to perform acts comprising: obtaining interaction data triggered and generated by different users for different segments of audio/video to obtain an interaction data set related to the audio/video…([0030]-[0032], [0037], [0049], [0053]-[0055], Figs. 2-5, i.e., obtaining user clicks and views to corresponding video segments); determining user interaction popularities corresponding to the different segments of the audio/video based on the interaction data set ([0030]-[0032], [0037], [0049], [0053]-[0055], Figs.
2-5, i.e., ranking segments based on click popularity or view popularity); and based on the user interaction popularities corresponding to the different segments of the audio/video and audio/video data of the audio/video generating streaming media data of the audio/video, to enable a device of a client terminal to display perceptible information reflecting the user interaction popularities corresponding to the different segments of the audio/video in a playback interface of the audio/video, based on the streaming media data that is downloaded.” ([0030]-[0032], [0037], [0049], [0053]-[0055], Figs. 2-5, i.e., presenting a histogram depicting the popularity of the segments as a function of the time represented by the time bar, wherein the segment is retrieved from the cache and played back immediately).

Kuznetsov fails to disclose the audio/video being partitioned into a plurality of segments… wherein the different segments of the audio/video are displayed in different colors or different color depths of a color based on the user interaction popularities corresponding to the different segments of the audio/video.

Tobin teaches the technique of providing audio/video being partitioned into a plurality of segments ([0007]-[0009], i.e., time interval segments)… wherein the different segments of the audio/video are displayed in different colors or different color depths of a color based on the user interaction popularities corresponding to the different segments of the audio/video ([0021]-[0022], [0060]-[0066], Figs. 5-6, i.e., “hot watch” spots are shown in different colors on the scrubber bar. The “hot watch” spots indicate areas of particular interest to viewers of the content item. The “hot watch” spots may be determined based on a variety of signals, such as editorial input, social media feedback input, voting input, usage input, and other analytical input with respect to the content item).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of providing the audio/video being partitioned into a plurality of segments… wherein the different segments of the audio/video are displayed in different colors or different color depths of a color based on the user interaction popularities corresponding to the different segments of the audio/video as taught by Tobin, to improve the video segmentation system of Kuznetsov for the predictable result of making it easier to search for the key parts of a content item using the scrub bar ([0005]).

Regarding claim 11, Kuznetsov discloses “partitioning the audio/video into segments to obtain segment partitioning information of the audio/video; and adding segment partitioning information of the audio/video to the audio/video data of the audio/video.” ([0036]-[0039], [0049]-[0050], Figs. 2-5, i.e., segment identifying module 112 identifies the segments of video indicated by the timestamps. When a user clicks the timestamp indicator the URL can pass from the embedded player 134 to the video server 126 to request the segment of video from the video database 128).

Regarding claim 12, Kuznetsov discloses “wherein partitioning the audio/video into the segments to obtain the segment partitioning information of the audio/video comprises one of: partitioning the audio/video into multiple segments with equal playback duration according to equal intervals, and using the duration as the segment partitioning information; or performing a scene segmentation on the audio/video to obtain segments corresponding to multiple sets of scene segmentation sequences based on audio/video content of the audio/video, and using a start frame and an end frame of each segment as the segment partitioning information.” ([0006], [0042], [0046]-[0047], Figs.
2-5, i.e., segment identifying module 112 forms 320 a set by processing video with one or more scene change algorithms and forming sets based on scene changes. A video can be processed to determine one or more scenes in the video and the timestamps clustered into sets corresponding to one or more of the scenes).

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Kuznetsov in view of Tobin and in further view of Petrov (US Pat. 10,747,948), herein referenced as Petrov.

Regarding claim 10, Kuznetsov discloses “receiving interaction information … and determining a target segment for which the interaction information is directed based on the playback progress.” ([0006], [0031]-[0037], Figs. 2-5, i.e., receiving user comments and timestamp information associated with a particular moment or scene in a video).

The combination fails to explicitly disclose receiving … a playback progress sent by the client terminal for the audio/video; and determining a target segment for which the interaction information is directed based on the playback progress.

Petrov teaches the technique of receiving … a playback progress sent by the client terminal for the audio/video; and determining a target segment for which the interaction information is directed based on the playback progress (Col. 12 line 63-Col. 14 line 25, Col. 22 lines 46-60, Figs. 1-3, 7A-B, i.e., monitor module 206 detects the specific time portion during which the video content is playing. The monitor module 206 receives the specific time portion as time data. The monitoring module 206 transmits the information as annotation data to the annotation server module 152).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of receiving … a playback progress sent by the client terminal for the audio/video; and determining a target segment for which the interaction information is directed based on the playback progress as taught by Petrov, to improve the video segmentation system of Kuznetsov for the predictable result of targeting comments at specific portions and/or specific areas of video content (Col. 1 lines 19-29).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Alexander Q Huerta whose telephone number is (571)270-3582. The examiner can normally be reached M-F 9:00 AM-5:00 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Brian Pendleton, can be reached at (571)272-7527. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ALEXANDER Q HUERTA/
Primary Examiner, Art Unit 2425
January 29, 2026
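The limitation the rejection turns on (buffered segments of a progress bar shaded by user-interaction popularity) is easier to follow with a concrete sketch. The sketch below is purely illustrative: the function names, data model, equal-interval partition, and linear color scale are assumptions, not taken from the application or any cited reference.

```python
import math

def segment_index(t: float, segment_len: float) -> int:
    """Map a playback timestamp (seconds) to its segment index
    under an equal-interval partition."""
    return int(t // segment_len)

def popularity_by_segment(events: list[float], duration: float,
                          segment_len: float) -> list[int]:
    """Count interaction events (comment timestamps, in seconds) per segment."""
    n = math.ceil(duration / segment_len)  # number of segments
    counts = [0] * n
    for t in events:
        counts[segment_index(t, segment_len)] += 1
    return counts

def color_depth(count: int, max_count: int) -> float:
    """Scale a segment's popularity to a 0..1 shade for the progress bar."""
    return count / max_count if max_count else 0.0

# Hypothetical comment timestamps for a 60-second clip, 10-second segments.
events = [3.2, 4.1, 4.9, 31.0, 32.5, 58.7]
counts = popularity_by_segment(events, duration=60.0, segment_len=10.0)
shades = [color_depth(c, max(counts)) for c in counts]
print(counts)  # [3, 0, 0, 2, 0, 1]
```

The most-commented segment renders at full color depth (1.0) and quiet segments render pale, which is the "different color depths of a color according to user interaction popularities" idea in miniature.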

Prosecution Timeline

Feb 28, 2024: Application Filed
May 22, 2025: Non-Final Rejection (§103)
Aug 20, 2025: Response Filed
Oct 03, 2025: Final Rejection (§103)
Dec 10, 2025: Request for Continued Examination
Dec 22, 2025: Response after Non-Final Action
Jan 29, 2026: Non-Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604061: CLOSED CAPTIONING SUMMARIZATION (granted Apr 14, 2026; 2y 5m to grant)
Patent 12593088: METHODS AND APPARATUS TO DETERMINE MEDIA EXPOSURE OF A PANELIST (granted Mar 31, 2026; 2y 5m to grant)
Patent 12587717: FACILITATING VIDEO GENERATION (granted Mar 24, 2026; 2y 5m to grant)
Patent 12587694: METHOD, APPARATUS, DEVICE AND STORAGE MEDIUM FOR VIDEO GENERATION (granted Mar 24, 2026; 2y 5m to grant)
Patent 12563266: USER-BASED CONTENT FILTERING (granted Feb 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 68%
With Interview: 80% (+12.8%)
Median Time to Grant: 2y 6m
PTA Risk: High
Based on 520 resolved cases by this examiner. Grant probability derived from career allow rate.
