Prosecution Insights
Last updated: April 19, 2026
Application No. 18/001,174

METHOD AND SYSTEM FOR SELECTING HIGHLIGHT SEGMENTS

Status: Final Rejection (§102)
Filed: Dec 08, 2022
Examiner: MILLER, RONDE LEE
Art Unit: 2663
Tech Center: 2600 (Communications)
Assignee: Dropbox Inc.
OA Round: 2 (Final)
Grant Probability: 73% (Favorable)
Expected OA Rounds: 3-4
Median Time to Grant: 2y 11m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 73% (16 granted / 22 resolved), +10.7% vs Tech Center average
Interview Lift: +37.5% on resolved cases with interview (strong)
Typical Timeline: 2y 11m average prosecution; 26 applications currently pending
Career History: 48 total applications across all art units
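The figures above follow from simple arithmetic on the examiner's case counts. A minimal sketch, assuming the +37.5% lift compares the 99% with-interview grant rate against a roughly 72% without-interview baseline (the page does not state the baseline explicitly):

```python
# Illustrative reconstruction of the dashboard arithmetic; the exact
# methodology and the 72% without-interview baseline are assumptions.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate: share of resolved cases that granted."""
    return granted / resolved

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Relative improvement in grant rate when an interview was held."""
    return (rate_with - rate_without) / rate_without

career = allow_rate(16, 22)        # 16 granted / 22 resolved -> ~0.727 ("73%")
lift = interview_lift(0.99, 0.72)  # 0.99 vs 0.72 baseline -> 0.375 ("+37.5%")
print(f"{career:.0%} career allow rate, +{lift:.1%} interview lift")
```

Note the lift is relative (a ratio of rates), not a 37.5-point jump, which is why 72% plus a 37.5% lift lands near the 99% with-interview figure shown above.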

Statute-Specific Performance

Allow rate by rejection statute (Tech Center averages are estimates; based on career data from 22 resolved cases):

§101: 11.2% (-28.8% vs TC avg)
§103: 46.5% (+6.5% vs TC avg)
§102: 20.8% (-19.2% vs TC avg)
§112: 19.5% (-20.5% vs TC avg)

Office Action

§102
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. The Applicant’s Remarks filed 10/28/2025 have been received and considered. The 112(b) rejections and the interpretations under 112(f) cited in the non-final office action mailed 04/30/2025 are hereby withdrawn. Claims 1 – 22 have been amended. Claims 1 – 22, all of the claims pending in this application, have been rejected.

Response to Applicant’s Remarks

In view of the Applicant’s remarks filed 10/28/2025, regarding amendments to independent claims 1 and 16, along with their respective dependent claims, the previously applied prior art rejections are withdrawn. Applicant's remarks are rendered moot in view of the new grounds of rejection set forth below.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1 – 22 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by US Publication No. 2016/0292510 A1 to Han et al. (hereinafter Han).

Claim 1

Regarding Claim 1, an independent method claim, Han teaches A computer-implemented method for selecting a highlight segment, the method comprising: [[R]]receiving [[A]]a sequence of frames[[,]] and data associated with a user ("A client device 110 is an electronic device used by a user to perform functions such as recording a sports video, consuming digital content, executing software applications, browsing websites hosted by web servers on the network 120, downloading files, and the like. For example, the client device 110 may be a smart phone, or a tablet, notebook, or desktop computer. The client device 110 includes and/or interfaces with a display device on which the user may view videos and other content. In addition, the client device 110 provides a user interface (UI), such as physical and/or on-screen buttons, with which the user may interact with the client device 110 to perform functions such as viewing, selecting, and consuming digital content such as video highlights of sports videos. In one embodiment, the client device 110 has a highlight detection module 112 (e.g., 112A for client device 110A and 112B for client device 110B) for detecting video highlights in real time in a sports video received by the client device 110 using video highlight models trained by the video highlight model training service 130.", Paragraph [0019]), where this data (interactions with the user client device) is data associated with the user, i.e., user preferences.
each frame of the sequence of frames, [[said]]each local neighborhood comprising at least one frame from the sequence of frames ("The feature training module 310 classifies the sports videos stored in the video database 132 into different classes and generates feature vectors associated with each class of the sports videos.", Paragraph [0029]), where the local neighborhoods are the videos (sequence of frames) categorized into their respective class;

[[and]] [[C]]converting each local neighborhood into a feature vector (Rejected as applied directly above);

generating, utilizing a machine learning model, a score for[[to]] each comparing each feature vector associated with a local neighborhood with the user data by determining a distance between each feature vector associated with a local neighborhood and a reference feature vector associated with the user data ("A video highlight of a sports video is a portion of the sports video and represents a semantically important event captured in the sports video, e.g., a short video clip capturing goals or goal attempts in a soccer game video clip. To detect video highlights in a video frame of the input video, the highlight detection module 430 applies the highlight detection model trained by the training module 136 to the feature vector associated with the video frame. In one embodiment, the highlight detection module 430 compares the feature vector with pair-wise frame feature vectors to determine the similarity between the feature vector associated with the video frame and the feature vector of the pair-wise frame feature vectors representing a video highlight. For example, the highlight detection module 430 computes a Euclidean distance between the feature vector associated with the video frame and the pair-wise feature vector representing a video highlight. Based on the comparison, the highlight detection module 430 computes a highlight score for the video frame.", Paragraph [0043], where the video highlight is data derived from user input; "The highlight detection module 430 repeats the similar detection process to each video frame of the input video and generates a highlight score for each video frame of the input video. A larger highlight score of a video frame indicates a higher likelihood that the video frame has a video highlight than another video frame having a smaller highlight score.", Paragraph [0044]);

comprising a plurality of frames from the sequence of frames based on evaluating each score for each ("FIG. 7A shows a video frame of a mountain biking sports video captured by a mobile phone. FIG. 7B is an exemplary graphical user interface to present video highlight scores associated with the video frames of the mountain biking sports video illustrated in FIG. 7A. FIG. 7C is an exemplary user interface 740 showing a video frame 750 of a mountain biking sports video captured by a mobile phone, its associated video highlight scores 760 and an interaction tool 770 for users to interact with the presented video highlight scores according to one embodiment.", Paragraph [0045] … "The example shown in FIG. 7B further shows a graph of highlight scores for 6 identified videos frames, i.e., 30th, 60th, 90th, 120th, 150th and 180th frame, of the input video, where the 60th frame has the highest highlight score 730 and the video segment between the 30th frame and 60th frame is likely to represent a video highlight of the input video. The video segment between the 30th frame and 60th frame is presented as a video highlight predicted by the highlight detection module 430 to the users of the client device.");

and to the user for display (Figure 7C; "The interface module presents the highlight scores for all the video frames of the sports video in a graphical user interface, e.g., the interface as illustrated in FIG. 7B. Users of the mobile device may interact with the presentation of the highlight scores, e.g., minor adjustment of the location of a video highlight based on his real time viewing of the sports video.", Paragraph [0051]), where Figure 7C shows the highlight segment being displayed on the client's device.

[Image: media_image1.png (greyscale)]

Claim 2

Regarding Claim 2, dependent on claim 1, Han teaches the invention as claimed in claim 1. Han further teaches further comprising generating and maintaining a database of video segments and selecting at least one video segment as the user data based on at least one characteristic associated with the user ("A client device 110 is an electronic device used by a user to perform functions such as recording a sports video, consuming digital content, executing software applications, browsing websites hosted by web servers on the network 120, downloading files, and the like. For example, the client device 110 may be a smart phone, or a tablet, notebook, or desktop computer. The client device 110 includes and/or interfaces with a display device on which the user may view videos and other content. In addition, the client device 110 provides a user interface (UI), such as physical and/or on-screen buttons, with which the user may interact with the client device 110 to perform functions such as viewing, selecting, and consuming digital content such as video highlights of sports videos.", Paragraph [0019]; "The video highlight model training service 130 illustrated in the embodiment of FIG.
1 includes a video database 132, a model database 134, a training module 136 and a highlight model update module 138. Other embodiments of the video highlight model training service 130 can have additional and/or different modules. The video database 132 stores a large video corpus of sports videos of various types, e.g., American football, soccer, table tennis/ping pong, tennis and basketball.", Paragraph [0020]), where this characteristic of the user, in this case, is the user's love of sports.

Claim 3

Regarding Claim 3, dependent on claim 1, Han teaches the invention as claimed in claim 1. Han further teaches wherein receiving the user data comprises of the user and converting it into the user data, [[and]] wherein converting the at least one reference video segment comprises converting the at least one reference video segment into the[[a]] reference feature vector ("Users of the client device can interact with the highlight scores of the input video presented in the graphical user interface and the update module 450 detects user interactions with the presented highlight scores of the input video. For example, a user of the client device may drag a pointer pointing to the video highlight predicted by the highlight detection module 430 to a different location on the interface based on what the user is viewing on the client device in real time. The adjustment of the location of the video highlight based on the user real time viewing of the input video is detected by the update module 450. The update module 450 retrieves the frame feature vectors associated with the adjusted video highlight from the frame buffer 402 and provides the retrieved frame feature vectors to the video highlight model training service 130. The highlight model update module 138 of the training service 130 dynamically updates the detection model trained by the training module 136 based on the frame feature vectors associated with the adjusted video highlight received from the update module 450.", Paragraph [0047]).

Claim 4

Regarding Claim 4, dependent on claim 3, Han teaches the invention as claimed in claim 3. Han further teaches wherein the user data comprises a plurality of reference feature vectors obtained by converting a plurality of reference video segments indicative of a user[['s]] preference of the user ("The highlight model update module 138 dynamically updates the feature vectors, the feature model and the video highlight detection model based on real time video highlight detection of sports videos received by the client device 110. In one embodiment, the highlight model update module 138 dynamically updates the feature vectors and the feature model based on the features vectors of the sports videos received by the client device 110. Responsive to user interacting with video highlights detected by the highlight detection module 112 of the client device 110, the highlight model update module 138 dynamically updates the highlight detection model based on the user interaction with the video highlights of the sports videos. The highlight model update module 138 is further described with reference to the description of the highlight detection module 112 of FIG. 4.", Paragraph [0022]).

Claim 5

Regarding Claim 5, dependent on claim 4, Han teaches the invention as claimed in claim 4.
Han further teaches wherein the plurality of reference video segments are indicative of different user preferences of the user and wherein the plurality of reference video segments are grouped into sets, each [[said]] set indicative of a particular user preference, and wherein each set is converted into a distinct user data subset comprising a subset of the plurality of reference feature vectors associated with the plurality of reference video segments forming part of it (Rejected as applied to claim 1), where the sports segments chosen by the user are classified into categories and feature vectors are generated for each category.

Claim 6

Regarding Claim 6, dependent on claim 5, Han teaches the invention as claimed in claim 5. Han further teaches wherein [[the]] each feature vector[[s]] is[[are]] assigned a score based on each distinct user data subset, and wherein the method further comprises, for each feature vector, assigning a score based on a comparison to each distinct user data subset[[s]] ("A video highlight of a sports video is a portion of the sports video and represents a semantically important event captured in the sports video, e.g., a short video clip capturing goals or goal attempts in a soccer game video clip. To detect video highlights in a video frame of the input video, the highlight detection module 430 applies the highlight detection model trained by the training module 136 to the feature vector associated with the video frame. In one embodiment, the highlight detection module 430 compares the feature vector with pair-wise frame feature vectors to determine the similarity between the feature vector associated with the video frame and the feature vector of the pair-wise frame feature vectors representing a video highlight. For example, the highlight detection module 430 computes a Euclidean distance between the feature vector associated with the video frame and the pair-wise feature vector representing a video highlight.
Based on the comparison, the highlight detection module 430 computes a highlight score for the video frame. The highlight score for the video frame represents a response of a neuron at the last layer of the fully connected layers of the convolutional neural network, which is used to train the feature model and the highlight detection model by the training module 136.", Paragraph [0043]).

Claim 7

Regarding Claim 7, dependent on claim 5, Han teaches the invention as claimed in claim 5. Han further teaches further comprising assigning a weight to each distinct user data subset, said weight associated with the use ("The interface module presents the highlight scores for all the video frames of the sports video in a graphical user interface, e.g., the interface as illustrated in FIG. 7B. Users of the mobile device may interact with the presentation of the highlight scores, e.g., minor adjustment of the location of a video highlight based on his real time viewing of the sports video. The highlight detection module 112 provides the real time highlight detection data (not shown in FIG. 5) to the training module 136, which dynamically updates the feature model and the highlight detection model based on the real time highlight detection data.", Paragraph [0051]).

Claim 8

Regarding Claim 8, dependent on claim 1, Han teaches the invention as claimed in claim 1. Han further teaches further comprising, prior to selecting the local neighborhood for each frame, (Figure 7C).

Claim 9

Regarding Claim 9, dependent on claim 8, Han teaches the invention as claimed in claim 8. Han further teaches wherein each local neighborhood is comprised within a single segment (Rejected as applied to claim 8).

Claim 10

Regarding Claim 10, dependent on claim 1, Han teaches the invention as claimed in claim 1.
Han further teaches wherein generating the score[[s]] for each associated with a local neighborhood with each reference feature vector of a plurality of [[the]] reference feature vectors associated with the user data (Rejected as applied to claim 1); and assigning scores to each local neighborhood[[s]] based on a difference with respect to closest matching of each feature vector and the plurality of reference feature vectors (Rejected as applied to claim 1).

Claim 11

Regarding Claim 11, dependent on claim 5, Han teaches the invention as claimed in claim 5. Han further teaches wherein generating the score[[s]] for each distinct user data subset is closest to each feature vector and assigning it a value based on a comparison between the subset of the plurality of reference feature vectors and each [[said]] feature vector (Paragraphs [0033 – 0038], [0043]).

Claim 12

Regarding Claim 12, dependent on claim 11, Han teaches the invention as claimed in claim 11. Han further teaches further comprising accounting for a distinct user data subset when generating a score[[s]] for each (Rejected as applied to claims 6 and 7), where the weights equate to the interest/preference of the user, emphasizing which clips/highlights are viewed more often compared to others stored, and then the input from scored highlights displayed to the user is used to update feature vectors of other frames.

Claim 13

Regarding Claim 13, dependent on claim 1, Han teaches the invention as claimed in claim 1. Han further teaches further comprising at least one local neighborhood (Figure 7C).

Claim 14

Regarding Claim 14, dependent on claim 13, Han teaches the invention as claimed in claim 13. Han further teaches wherein constructing the highlight segment comprises for each feature vector[[s]] corresponding to each[[the]] frame[[s]] and their neighboring frames and identifying a plurality of neighboring frames with an average bes (Figure 7C).
Claim 15

Regarding Claim 15, dependent on claim 13, Han teaches the invention as claimed in claim 13. Han further teaches further comprising corresponding to a plurality of distinct neighboring frames with an average highes (Figure 7C; Paragraph [0046]).

Claim 16, an independent system claim, is rejected for the same reasons as applied to claim 1. Claims 17 – 22 are rejected for the same reasons as applied to the above claims.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

US Publication No. 2017/0330040 A1 to Chakraborty et al. (hereinafter Chakraborty)
US Patent No. 9578279 B1 to Mysore Vijaya Kumar et al. (hereinafter Mysore Vijaya Kumar)
US Publication No. 2017/0024614 A1 to Sanil et al. (hereinafter Sanil)
US Publication No. 2016/0014482 A1 to Chen et al. (hereinafter Chen)

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Ronde Miller, whose telephone number is (703) 756-5686. The examiner can normally be reached Monday-Friday 8:00-4:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Gregory Morse, can be reached on (571) 272-3838. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/RONDE LEE MILLER/
Examiner, Art Unit 2663

/GREGORY A MORSE/
Supervisory Patent Examiner, Art Unit 2698
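The scoring scheme the §102 rejection leans on (Han, Paragraphs [0043]-[0044]) reduces to: compare each frame's feature vector to a reference feature vector for a known highlight by Euclidean distance, and give closer frames higher highlight scores. A minimal sketch of that technique; the function names, toy vectors, and the particular distance-to-score mapping are illustrative assumptions, not Han's actual implementation:

```python
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def highlight_scores(frame_vectors, reference_vector):
    """Score each frame: a smaller distance to the reference vector
    (one representing a known highlight) yields a higher score."""
    return [1.0 / (1.0 + euclidean(v, reference_vector)) for v in frame_vectors]

# Toy data: three per-frame feature vectors and one reference vector.
frames = [[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]]
reference = [0.2, 0.8]

scores = highlight_scores(frames, reference)
best = max(range(len(scores)), key=scores.__getitem__)
print(best, [round(s, 3) for s in scores])
```

In Han's scheme, the per-frame scores are then presented to the user (as in FIG. 7B), and the run of frames around the highest score is offered as the predicted highlight segment.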

Prosecution Timeline

Dec 08, 2022
Application Filed
Apr 24, 2025
Non-Final Rejection — §102
Oct 15, 2025
Interview Requested
Oct 22, 2025
Examiner Interview Summary
Oct 28, 2025
Response Filed
Feb 10, 2026
Final Rejection — §102
Mar 27, 2026
Interview Requested
Apr 08, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12573215: LEARNING APPARATUS, LEARNING METHOD, OBJECT DETECTION APPARATUS, OBJECT DETECTION METHOD, LEARNING SUPPORT SYSTEM AND LEARNING SUPPORT METHOD (granted Mar 10, 2026; 2y 5m to grant)
Patent 12548114: METHOD FOR CODE-LEVEL SUPER RESOLUTION AND METHOD FOR TRAINING SUPER RESOLUTION MODEL THEREFOR (granted Feb 10, 2026; 2y 5m to grant)
Patent 12524833: X-RAY DIAGNOSIS APPARATUS, MEDICAL IMAGE PROCESSING APPARATUS, AND STORAGE MEDIUM (granted Jan 13, 2026; 2y 5m to grant)
Patent 12502905: SECURE DOCUMENT AUTHENTICATION (granted Dec 23, 2025; 2y 5m to grant)
Patent 12505581: ONLINE TRAINING COMPUTER VISION TASK MODELS IN COMPRESSION DOMAIN (granted Dec 23, 2025; 2y 5m to grant)
Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 73% (99% with interview, +37.5%)
Median Time to Grant: 2y 11m
PTA Risk: Moderate
Based on 22 resolved cases by this examiner. Grant probability derived from career allow rate.
