Prosecution Insights
Last updated: April 19, 2026
Application No. 17/373,238

OPTIMIZING CONTINUOUS MEDIA COLLECTION

Non-Final OA §103
Filed: Jul 12, 2021
Examiner: DANG, HUNG Q
Art Unit: 2484
Tech Center: 2400 — Computer Networks
Assignee: WHP Workflow Solutions Inc.
OA Round: 9 (Non-Final)
Grant Probability: 68% (Favorable)
OA Rounds: 9-10
To Grant: 3y 1m
With Interview: 87%

Examiner Intelligence

Career allow rate: 68%, above average (1257 granted / 1841 resolved; +10.3% vs TC avg)
Interview lift: strong, +18.3% (allowance rate among resolved cases with an examiner interview vs. without)
Typical timeline: 3y 1m average prosecution; 95 applications currently pending
Career history: 1936 total applications across all art units
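The interview lift reported here is a simple comparative statistic, and a short sketch makes the computation concrete: the allowance rate among resolved cases that had at least one examiner interview minus the rate among those that did not. The sketch below assumes hypothetical case records and field names (interviewed, granted); it is not this examiner's actual data.

```python
# Minimal sketch of an interview-lift calculation over resolved cases.
# The records below are hypothetical, not real prosecution data.

def allowance_rate(cases):
    """Fraction of resolved cases that ended in a grant."""
    return sum(c["granted"] for c in cases) / len(cases) if cases else 0.0

def interview_lift(cases):
    """Difference, in percentage points, between allowance rates
    with and without an examiner interview."""
    with_iv = [c for c in cases if c["interviewed"]]
    without_iv = [c for c in cases if not c["interviewed"]]
    return 100.0 * (allowance_rate(with_iv) - allowance_rate(without_iv))

resolved = [
    {"interviewed": True,  "granted": True},
    {"interviewed": True,  "granted": True},
    {"interviewed": True,  "granted": False},
    {"interviewed": False, "granted": True},
    {"interviewed": False, "granted": False},
    {"interviewed": False, "granted": False},
]

print(f"interview lift: {interview_lift(resolved):+.1f} points")  # +33.3 for this toy set
```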

Statute-Specific Performance

§101: 4.2% (-35.8% vs TC avg)
§103: 54.1% (+14.1% vs TC avg)
§102: 23.6% (-16.4% vs TC avg)
§112: 11.6% (-28.4% vs TC avg)
Comparisons are against an estimated Tech Center average • Based on career data from 1841 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 06/02/2025 has been entered.

Response to Arguments

Applicant's arguments filed 06/02/2025 have been fully considered but they are not persuasive.

On page 12, Applicant argues that:

"[…] In the above cited paragraphs, Spence, at best, merely discloses determining or identifying the one or more times of interest based on the analysis of the sensor data, recorded video image data, audio data, or received manual inputs. Spence, however, does not disclose or suggest that the analysis includes a comparison of a predetermined threshold and a ranking value calculated as a sum of weighted values assigned to corresponding event types. Thus, Spence fails to disclose or suggest 'in response to a ranking value calculated for the at least one event being greater than a predetermined threshold, selecting the portion of the first media data to be prioritized, wherein the ranking value is a sum of weighted values assigned to corresponding event types,' as recited in amended claim 1. (Emphases added)." (original emphasis)

In response, Examiner respectfully submits that Spence, at least in [0145]-[0146], teaches:

[0145] As discussed above, a plurality of clusters are generated based on a determination, for each of the identified extrema, if the time of an extremum is within a predetermined time of the time of another of the extrema. Each cluster has a start time and an end time, which together define a duration for the cluster. The start time for the cluster corresponds to the earliest of the times of the extrema in the cluster, while the end time corresponds to the latest of the times of the extrema in the cluster. Each cluster preferably further comprises a cluster score, wherein the cluster score is preferably based on the score of the individual identified extrema in the cluster. For example, the cluster score can be a mean, optionally a weighted mean, of the scores of the individual scores. The use of a weighted mean allows the different individual datasets to have a different impact on the cluster score.

[0146] In embodiments, two or more clusters can be combined to create a single cluster, e.g. if the two or more clusters are within a predetermined time of each other, such as 2 seconds. For example, a first cluster can be combined with a second cluster if the end time of the first cluster is within a predetermined time of the start time of the second cluster. The resultant single cluster will preferably have a set of properties based on the properties of the clusters that were combined to create it. For example, if a first cluster is combined with a second cluster, the first cluster being earlier in time than the second cluster, then the resultant cluster will have the start time of the first cluster and the end time of the second cluster. The score of the resultant cluster will preferably be based on the score of the clusters that are combined, e.g. as a mean of the clusters of the combined clusters. (emphasis added)

As shown in the emphasized text, the score taught by Spence and described in the Office Action is the score of a cluster, calculated as a weighted mean of individual scores. The individual scores are scores of either individual extrema or of another cluster. Since any extremum or cluster is an event, a score of an individual extremum or a cluster is a score of an event.

Next, because (1) a weighted mean is defined as the summation of the product of weights and quantities, divided by the summation of weights, and (2) the cluster score is a weighted mean of individual scores, we can write a formula for the cluster score as

$\frac{\sum_{i=1}^{n} w_i \, \mathrm{Score}_i}{\sum_{i=1}^{n} w_i}$,

wherein $n$ is the number of individual events, $\mathrm{Score}_i$ is an individual score of event $i$, and $w_i$ is a weight assigned to the individual score $i$. Let $W = \sum_{i=1}^{n} w_i$; then the score of the cluster is

$\frac{\sum_{i=1}^{n} w_i \, \mathrm{Score}_i}{W} = \frac{w_1 \mathrm{Score}_1 + w_2 \mathrm{Score}_2 + \dots + w_n \mathrm{Score}_n}{W} = \frac{w_1}{W}\mathrm{Score}_1 + \frac{w_2}{W}\mathrm{Score}_2 + \dots + \frac{w_n}{W}\mathrm{Score}_n.$ (I)

Now, for the sake of argument, consider two cases:

Case 1: each event in the formula has a different type; thus each of $\frac{w_1}{W}, \frac{w_2}{W}, \dots, \frac{w_n}{W}$ in formula (I) is an individual weight itself. As such, the score of the cluster expressed in formula (I) is a sum of weighted values assigned to corresponding event types.

Case 2: among the events in (I), there are events of the same type. We can rearrange the events according to type, e.g. event type 1: event 1 → event i (wherein i is the number of events of event type 1), event type 2: event (i+1) → event (i+1+j) (wherein j is the number of events of type 2), event type 3: event (i+2+j) → event (i+2+j+h) (wherein h is the number of events of type 3), etc. The score assigned to each event type is then given as follows:

- score assigned to event type 1 = $\frac{w_1}{W}\mathrm{Score}_1 + \frac{w_2}{W}\mathrm{Score}_2 + \dots + \frac{w_i}{W}\mathrm{Score}_i$ (II)
- score assigned to event type 2 = $\frac{w_{i+1}}{W}\mathrm{Score}_{i+1} + \frac{w_{i+2}}{W}\mathrm{Score}_{i+2} + \dots + \frac{w_{i+1+j}}{W}\mathrm{Score}_{i+1+j}$ (III)
- score assigned to event type 3 = $\frac{w_{i+2+j}}{W}\mathrm{Score}_{i+2+j} + \frac{w_{i+3+j}}{W}\mathrm{Score}_{i+3+j} + \dots + \frac{w_{i+2+j+h}}{W}\mathrm{Score}_{i+2+j+h}$ (IV)
- …
- the score assigned to the last event type is obtained in a similar manner.

It is easy to see that each expression in (II), (III), (IV), etc. is a weighted value assigned to a corresponding event type. As such, their sum, which is the score of the cluster (corresponding to the recited "ranking value"), is a sum of weighted values assigned to corresponding event types.

Further, at least in [0147], Spence states:

[0147] At least some or all of the clusters, either an original cluster or resulting from a combination of clusters, are used to create highlights identifying time periods of interest, e.g. to the user, in the video image data. As will be appreciated, each cluster is preferably used to create an individual highlight, which is then preferably stored, as discussed above, in a metadata portion of the digital media file. In embodiments, each of the clusters are ranked (or sorted) based on their cluster scores, and only some of the clusters are used in the creation of highlights. For example, only a predetermined number of clusters may be used to create highlights, e.g. due to the need to reserve memory such that the one or more metadata portions can be located at the start of the media file. The predetermined number of clusters can be a fixed number, e.g. only 10 automatic highlights are created, or can be a variable number, e.g. based on the number of manual highlights, such that only a maximum number of highlights are created and added to a media file. Additionally, or alternatively, only those clusters with a cluster score above a predetermined value are preferably used in the creation of highlights. (emphasis added)

Clearly, Spence teaches that the score of the cluster, which corresponds to the ranking value recited in the claim, is compared to a threshold to select an important portion of the media data for creating the highlight. As such, Spence clearly teaches the limitation of "in response to a ranking value calculated for the at least one event being greater than a predetermined threshold, selecting the portion of the first media data to be prioritized, wherein the ranking value is a sum of weighted values assigned to corresponding event types."

On pages 13-14, Applicant argues that:

The Examiner further alleges that Han at ¶¶ [0064] and [0078] discloses "applying a first retention policy to the second media data and applying a second retention policy to the first media data that is different from the first retention policy," as recited in previously presented claim 1. Office Action, pp. 2-5. Applicant respectfully disagrees. In the above cited paragraphs, Han discloses that "[t]he circular buffer may also use an intelligent FIFO buffer, in which unmarked video data is deleted before marked video data (as the marked video data is more likely to present important information)" and "[v]ideo data that is marked as important may retained at full size while other video data is transcoded to a smaller size. This retains potentially important video data while allowing less important data to be reduced in size and/or deleted." (Emphases added). Based on the above cited paragraphs, the Examiner alleges that Han discloses that the unmarked video data is either (i) transcoded to a smaller size and deleted or (ii) deleted before marked data, which is seemingly analogized to a second retention policy. Thus, the Office appears to implicitly analogize Han's unmarked (other) video data to the claimed first media data. Based on the above cited paragraphs, the Office further alleges that Han discloses that the marked video data is either (iii) retained at full size or (iv) deleted after unmarked video data, which is seemingly analogized to a first retention policy. Thus, the Examiner appears to implicitly analogize Han's marked video data to the claimed second media data. Applicant respectfully submits that the analogies provided by the Office are not applicable to amended claim 1, and therefore do not support the rejection. Han at best, merely discloses to divide video data into two types of video data, the marked video data and the unmarked (other) video data. However, Han's marked video data is not a duplicated sub-portion of the unmarked video data and Han's unmarked video is not the entirety of the video data (including the marked video data). Thus, Han does not disclose or suggest "applying a first retention policy to the second media data duplicated from the first media data" and "applying a second retention policy to the entirety of the first media data, which includes the second media data," as required in amended claim 1. While Spence discloses at ¶ [0461] that "'[h]ighlights' are clips of video image data derived from individual tags," this does not cure the deficiencies of Han. (original emphasis)

In response, Examiner respectfully disagrees and submits that, at least in [0069]-[0070], Han teaches that the tagged video data, which corresponds to the recited second media data, is extracted from the original video data, which corresponds to the first media data, and stored as a new and shorter recording. As such, Han teaches "prioritizing second media data by applying a first retention policy to the second media data duplicated from first media data and applying a second retention policy to the entirety of the first media data, which includes the second media data, that is different from the first retention policy."

On page 15, Applicant argues that:

"Without acceding to the Office's characterization of Mahlmeister and Agarwal, Applicant submits that Mahlmeister and Agarwal do not overcome the above-described deficiencies of Spence and Han. In view of the above, none of the cited references constitute or suggest the claimed subject matter of 'in response to a ranking value calculated for the at least one event being greater than a predetermined threshold, selecting the portion of the first media data to be prioritized, wherein the ranking value is a sum of weighted values assigned to corresponding event types' and 'applying a first retention policy to the second media data duplicated from the first media data and applying a second retention policy to the entirety of the first media data, which includes the second media data, that is different from the first retention policy.' (emphases added). Applicant respectfully submits that the cited references, either alone or in combination, fail to disclose or suggest at least the above subject matter as recited in amended claim 1. For at least these reasons, no prima facie case of obviousness has been established, and the amended independent claim 1 should be allowable over the cited references." (original emphasis)

In response, Examiner respectfully submits that these arguments are moot in view of the discussion of Spence and Han above.

In conclusion, Applicant's arguments are not persuasive.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 7-9, 11-13, 16-18, and 21-26 are rejected under 35 U.S.C. 103 as being unpatentable over Spence et al. (US 2020/0066305 A1 – hereinafter Spence), Mahlmeister et al. (US 2020/0147496 A1 – hereinafter Mahlmeister), and Han et al. (US 2017/0230605 A1 – hereinafter Han).

Regarding claim 1, Spence discloses a method comprising: receiving, from a media collection device, media information that includes a first media data, trigger data, and sensor data ([0030]; [0034]; [0086]-[0088] – receiving first media data from an image sensor, receiving trigger data from user, e.g.
a user manual input command, and sensor data indicative moments of interest); detecting, based on a data pattern detected within the sensor data, at least one event associated with the first media data, the data pattern comprising at least one of one or more movements or one or more orientations of the media collection device ([0124]; [0142]; [0456]-[0461]; Figs. 14-17), and the at least one event comprising an action to be associated with an operator of the media collection device ([0179]; [0249]; [0252]); determining that a portion of the first media data less than the entirety of the first media data corresponds to a predetermined media data pattern ([0120] – analyzing the recorded audio data to identify times when a particular sound is heard or a particular word or phrase is spoken); determining, based upon determining that the at least one event warrants prioritization and based upon the portion of the first media data corresponding to the predetermined media data pattern, that the portion of the first media data is to be prioritized ([0094]; [0120]; [0122]-[0125] – determining, based on the sensor data, a portion of interest of the first media is to be prioritized, e.g. identified as a portion of interest to be used in a highlighted video file and based upon the portion of media being a particular spoken word or phrase to detect the times of media to be prioritized); in response to a ranking value calculated for the at least one event being greater than a predetermined threshold, selecting the portion of the first media data to be prioritized, wherein the ranking value is a sum of weighted values assigned to corresponding event types ([0145]-[0147] – see “Response to Arguments” above); identifying, based on one or more of the trigger data, a beginning time and ending time for a second media data to be prioritized that includes the portion of the first media data, the beginning time corresponds to a first time of an activation of a trigger mechanism of the media collection device via a switch on the media collection device and the ending time corresponding to a second time of a deactivation of the trigger mechanism of the media collection device via the switch on the media collection device ([0123]; [0452]; [0461] - a trigger mechanism via a switch on the camera during recording to place a tag for creating the manual highlight data - the beginning time corresponding to a first time of an activation of the trigger mechanism of the media collection device via the switch on the media collection device, which is 5 seconds preceding the time of the tag, which is the time when the user actuates a button on the camera to place the tag, and the ending time corresponding to a second time of a deactivation of the trigger mechanism of the media collection device via the switch on the media collection device, which is 5 seconds after the time of the tag, which is the time when the user actuates a button on the camera to place the tag); and generating the second media data from the first media data based on the beginning time and the ending time, the second media data being a duplicate of the portion of but less than the entirety of the first media data ([0499]-[0503] – generating a highlighted portion, which is the second media data from the first media data, the second media data being duplicated from the original first media data as further described at least in [0094]-[0095]). 
However, Spence does not disclose comparing the first media data to a predetermined media data pattern; based on comparing the first media data to the predetermined media data pattern, determining that a similarity between the portion of the first media data less than the entirety of the first media data and the predetermined media data pattern satisfies a threshold degree of similarity; and the determining is based upon the similarity between the portion of the first media data and the predetermined media data pattern satisfying the threshold degree of similarity; and prioritizing the generated second media data by applying a first retention policy to the second media data duplicated from the first media data and applying a second retention policy to the entirety of the first media data, which includes the second media data, that is different from the first retention policy.

Mahlmeister discloses determining that a portion of the first media data corresponds to a predetermined media data pattern as determining that a similarity between the portion of the first media data and the predetermined media data pattern satisfying a threshold degree of similarity, comprising comparing the first media data to the predetermined media data pattern ([0110] – comparing detected sound to a predetermined media data using audio profiles); based on comparing the first media data to the predetermined media data pattern, determining that a similarity between a portion of the first media data and the predetermined media data pattern satisfies a threshold degree of similarity ([0110] – based on the comparing, determining the detected sound matches a stored profile based on a threshold of similarity).

One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to incorporate the teachings of Mahlmeister into determining that a portion of the first media data corresponding to a predetermined media data pattern in the method taught by Spence to facilitate implementation of the determining, because a spectral profile of sound can be conveniently computed mathematically using existing tools, e.g. an FFT.

Spence and Mahlmeister do not disclose prioritizing the generated second media data by applying a first retention policy to the second media data duplicated from the first media data and applying a second retention policy to the entirety of the first media data, which includes the second media data, that is different from the first retention policy.

Han discloses prioritizing generated second media data by applying a first retention policy to the second media data duplicated from first media data and applying a second retention policy to the entirety of the first media data, which includes the second media data, that is different from the first retention policy ([0064]; [0070]; [0074]-[0078] – a second retention policy includes instructions to delete first media data as follows: (i) transcoded into smaller size and deleted, or (ii) deleted before marked data; the first retention policy includes instructions to delete second media data as follows: (iii) retained (at full size) or (iv) deleted after unmarked video data – also see "Response to Arguments" above).
One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to incorporate the teachings of Han into the method taught by Spence and Mahlmeister to retain the highlighted or important portions of media data for long-term storage according to a desired policy while deleting the unimportant media data to free up the storage for new captured media data to keep continuous operation of the method. Regarding claim 2, Spence also discloses the media collection device comprises a camera mounted on an operator of the media collection device ([0086]; [0338]; Fig. 37 – a camera worn by a user of a media collection device), and a media processing device is configured to receive the media information from the media collection device and determine that the portion of the first media data is to be prioritized when the media collection device does not initiate recording via the activation of the trigger mechanism ([0094]; [0120]; [0122]-[0125] – receiving sensor data and determining, based on the sensor data, a portion of interest of the first media is to be prioritized, e.g. identified as a portion of interest to be used in a highlighted video file and based upon the portion of media being a particular spoken word or phrase to detect the times of media to be prioritized). Regarding claim 3, Spence also discloses the trigger data comprises an indication that the trigger mechanism has been activated by an operator of the media collection device ([0117]-[0119]; [0123] – a user manual input, e.g. a user activating a button, touching or gesture inputs). Regarding claim 4, Spence also discloses the sensor data comprises data obtained from at least one of a gyroscope, accelerometer, or magnetometer captured by the media collection device ([0086] – at least an accelerometer, preferably a 3-axis accelerometer; a gyroscope; or a magnetometer). Regarding claim 7, Spence also discloses associating at least a portion of the trigger data or the sensor data to the second media data ([0473] – associating a series of traces showing the variation in certain sensor data, e.g. speed, acceleration, elevation, heart rate, etc., for each highlight). Claim 8 is rejected for the same reason as discussed in claim 1 above in view of Spence also disclosing a computing device comprising: a processor ([0032]; [0405]; [0407]; [0410] - one or more processors that execute software comprising computer readable instructions stored on a non-transitory computer readable medium); and a memory including instructions that, when executed with the processor, cause the computing device to perform the recited operations ([0032]; [0411]; [0413] – a memory storing computer readable instructions to be executed by the one or more processors) and the trigger mechanism of the media collection device is by a user of the media collection device ([0123]; [0452]; [0461] - a trigger mechanism via a switch on the camera during recording to place a tag for creating the manual highlight data). Claim 9 is rejected for the same reason as discussed in claim 2 above. Regarding claim 11, Spence also discloses at least one of the beginning time or the ending time is determined based on a time at which the at least one event is determined to have occurred ([0457]; [0461]; Figs. 14-17 – determining beginning time and ending time based on the time at which the peaks are identified and marked as `highlights`). 
Regarding claim 12, Spence also discloses the at least one of the beginning time or the ending time is determined to be a predetermined amount of time before the time at which the at least one event is determined to have occurred ([0461] - a highlight may comprise the preceding 5 seconds of video image data and the following 5 seconds of video image relative to the time associated with the tag, e.g. the identified and marked highlight). Regarding claim 13, Spence also discloses the predetermined amount of time is determined based on a type of the at least one event ([0114] – based on at least the manner in which the tag was created). Claim 16 is rejected for the same reason as discussed in claim 7 above. Regarding claim 17, Spence also discloses generating the second media data comprises duplicating data included in the first media data between the beginning time and the ending time ([0502] – duplicating the highlight portions for creating the highlight video file). Claim 18 is rejected for the same reason as discussed in claim 1 above in view of Spence also disclosing a non-transitory computer-readable media collectively storing computer- executable instructions that upon execution cause one or more computing devices to collectively perform the recited acts ([0032]; [0411]; [0413] – a memory storing computer readable instructions to be executed by the one or more processors). Regarding claim 21, see the teachings of Spence, Mahlmeister, and Han as discussed in claim 1 above. Han also discloses the first retention policy includes instructions to store the generated second media data for long-term storage in a first data store and the second retention policy includes instructions to store the received first media data for short-term storage in a second data store ([0064]; [0070]; [0074]-[0078] – a second retention policy includes instructions to store first media data on FIFO for short term while the first retention policy includes instructions to store generated media data of significant event for long term). The motivation for incorporating the teachings of Han into the method has been discussed in claim 1 above. Claim 22 is rejected for the same reason as discussed in claim 21 above. Regarding claim 23, Spence also discloses the media collection device comprises a camera mounted on an operator of the media collection device ([0086]; [0338]; Fig. 37 – a camera worn by a user of a media collection device), and a media processing device is configured to receive the media information from the media collection device and determine that the portion of the first media data is to be prioritized when the media collection device does not initiate recording via the activation of the trigger mechanism ([0094]; [0120]; [0122]-[0125] – receiving sensor data and determining, based on the sensor data, a portion of interest of the first media is to be prioritized, e.g. 
identified as a portion of interest to be used in a highlighted video file and based upon the portion of media being a particular spoken word or phrase to detect the times of media to be prioritized); and Han also discloses the first retention policy includes instructions to store the generated second media data for long-term storage in a first data store and the second retention policy includes instructions to store the received first media data for short-term storage in a second data store ([0064]; [0070]; [0074]-[0078] – a second retention policy includes instructions to store first media data on FIFO for short term while the first retention policy includes instructions to store generated media data of significant event for long term). The motivation for incorporating the teachings of Han into the method has been discussed in claim 1 above.

Regarding claim 24, Spence also discloses the data pattern comprises at least one of one or more movements or one or more orientations of the media collection device ([0141]; [0143] – the sensors include barometer, accelerometer, gyroscope, electronic compass which are used to measure movements and/or orientations of the camera).

Regarding claim 25, Spence also discloses at least one event comprises an action to be associated with an operator of the media collection device ([0085]-[0086]; [0121]; [0141]; [0143]; [0338]; [0488] – the cameras are carried by the user and housing sensors used to measure movements or other physical parameters of the user).

Regarding claim 26, Spence also discloses categorizing and indexing the at least one event ([0457]; Figs. 16-17 – categorizing the events, clustering the events, and indexing, by sorting and ranking, the events).

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Spence, Mahlmeister, and Han as applied to claims 1-4, 7-9, 11-13, 16-18, and 21-26 above, and further in view of Agarwal et al. (US 2018/0306609 A1 – hereinafter Agarwal).

Regarding claim 10, see the teachings of Spence, Mahlmeister, and Han as discussed in claim 8 above. However, Spence, Mahlmeister, and Han do not disclose detecting the at least one event associated with the second media data comprises providing the sensor data to a machine learning model trained to correlate events with data patterns detected within the sensor data. Agarwal discloses detecting at least one event associated with media data comprises providing sensor data to a machine learning model trained to correlate events with data patterns detected within the sensor data ([0052]). One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to incorporate the teachings of Agarwal into the computing device taught by Spence, Mahlmeister, and Han to identify the event accurately.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HUNG Q DANG whose telephone number is (571)270-1116. The examiner can normally be reached IFT. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Thai Q Tran, can be reached on 571-272-7382.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /HUNG Q DANG/Primary Examiner, Art Unit 2484
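To make the two disputed limitations easier to follow, the sketch below shows one way the contested logic could look in practice: a ranking value computed as a weighted mean of per-event scores (equivalently, per formulas (I)-(IV) above, a sum of weighted values grouped by event type), a threshold test that selects which portion of the recording to prioritize, and two different retention policies for the duplicated highlight clip versus the full recording. The event types, weights, scores, and policy parameters are hypothetical illustrations; this is not code from Spence, Han, or the application.

```python
from dataclasses import dataclass

# Hypothetical per-event-type weights (not taken from the cited references).
TYPE_WEIGHTS = {"impact": 3.0, "speech_keyword": 2.0, "rapid_motion": 1.0}

@dataclass
class Event:
    event_type: str   # e.g. "impact", "speech_keyword", "rapid_motion"
    score: float      # per-event score, e.g. from sensor/audio analysis
    time_s: float     # timestamp within the recording

def ranking_value(events: list[Event]) -> float:
    """Weighted mean of event scores; algebraically the same thing as a
    sum of weighted values grouped by event type (formulas (I)-(IV))."""
    total_weight = sum(TYPE_WEIGHTS[e.event_type] for e in events)
    return sum(TYPE_WEIGHTS[e.event_type] * e.score for e in events) / total_weight

def select_highlights(clusters: list[list[Event]], threshold: float) -> list[list[Event]]:
    """Keep only clusters whose ranking value exceeds the threshold."""
    return [c for c in clusters if ranking_value(c) > threshold]

# Two illustrative retention policies: the duplicated highlight clip is kept
# long-term at full quality, while the full continuous recording is kept
# short-term (e.g. on a FIFO/circular buffer) and eventually expires.
RETENTION = {
    "highlight_clip": {"store": "long_term", "days": 365, "transcode": False},
    "full_recording": {"store": "short_term", "days": 30, "transcode": True},
}

if __name__ == "__main__":
    cluster = [
        Event("impact", 0.9, 12.0),
        Event("speech_keyword", 0.7, 13.5),
        Event("rapid_motion", 0.4, 14.0),
    ]
    print(f"ranking value = {ranking_value(cluster):.2f}")  # (3*0.9 + 2*0.7 + 1*0.4)/6 = 0.75
    print(len(select_highlights([cluster], threshold=0.6)), "cluster(s) selected")
```

Whether a weighted mean of this kind meets the claim language "a sum of weighted values assigned to corresponding event types," and whether dual storage of a duplicated clip and the full buffer meets the two-retention-policy limitation, are exactly the points in dispute above.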

Prosecution Timeline

Jul 12, 2021: Application Filed
Dec 30, 2021: Non-Final Rejection — §103
Mar 30, 2022: Response Filed
Jun 02, 2022: Final Rejection — §103
Aug 22, 2022: Request for Continued Examination
Aug 29, 2022: Response after Non-Final Action
Jan 03, 2023: Non-Final Rejection — §103
Apr 03, 2023: Response Filed
May 22, 2023: Final Rejection — §103
Jul 17, 2023: Request for Continued Examination
Jul 19, 2023: Response after Non-Final Action
Nov 04, 2023: Non-Final Rejection — §103
Jan 09, 2024: Response Filed
Mar 12, 2024: Final Rejection — §103
Jun 13, 2024: Request for Continued Examination
Jun 25, 2024: Response after Non-Final Action
Nov 27, 2024: Non-Final Rejection — §103
Feb 20, 2025: Response Filed
Mar 08, 2025: Final Rejection — §103
Jun 02, 2025: Request for Continued Examination
Jun 06, 2025: Response after Non-Final Action
Nov 15, 2025: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594460: MANAGING BLOBS FOR TRACKING OF SPORTS PROJECTILES
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12588818: DETECTION OF A MOVABLE OBJECT WHEN 3D SCANNING A RIGID OBJECT
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12592258: METHOD AND APPARATUS FOR INTERACTIVE VIDEO EDITING PLATFORM TO CREATE OVERLAY VIDEOS TO ENHANCE ENTERTAINMENT VIDEO GAMES WITH EDUCATIONAL CONTENT
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12587693: ARTIFICIALLY INTELLIGENT AD-BREAK PREDICTION
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12574649: ENCODING AND DECODING METHOD, ELECTRONIC DEVICE, COMMUNICATION SYSTEM, AND STORAGE MEDIUM
Granted Mar 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 9-10
Grant Probability: 68%
With Interview: 87% (+18.3%)
Median Time to Grant: 3y 1m
PTA Risk: High
Based on 1841 resolved cases by this examiner. Grant probability derived from career allow rate.
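Read together with the footnote, the projection figures appear to be straightforward arithmetic on the career statistics: the base grant probability is the career allow rate, and the with-interview figure adds the observed interview lift. A minimal sketch, assuming that is indeed how the dashboard combines them:

```python
# Assumed derivation of the headline projections from career statistics.
granted, resolved = 1257, 1841
interview_lift_points = 18.3

base = 100 * granted / resolved                 # ~68.3%, displayed as 68%
with_interview = base + interview_lift_points   # ~86.6%, displayed as 87%

print(f"base grant probability: {base:.1f}%")
print(f"with interview: {with_interview:.1f}%")
```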
