Prosecution Insights
Last updated: April 19, 2026
Application No. 18/807,085

AUTOMATED CLASSIFICATION AND INDEXING OF EVENTS USING MACHINE LEARNING

Final Rejection — §103, §DP
Filed: Aug 16, 2024
Examiner: TEKLE, DANIEL T
Art Unit: 2481
Tech Center: 2400 — Computer Networks
Assignee: WHP Workflow Solutions, Inc.
OA Round: 2 (Final)
Grant Probability: 63% (Moderate)
OA Rounds: 3-4
To Grant: 3y 4m
With Interview: 56%

Examiner Intelligence

Career Allow Rate: 63% (grants 63% of resolved cases: 462 granted / 732 resolved; +5.1% vs TC avg)
Interview Lift: -6.9% (minimal, roughly -7%, for resolved cases with interview)
Typical Timeline: 3y 4m avg prosecution; 46 applications currently pending
Career History: 778 total applications across all art units
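These headline figures hang together arithmetically. A minimal sanity check in Python (assumptions: the allow rate is simply granted over resolved, the interview lift is applied additively to that baseline, and the dashboard rounds to whole percents):

```python
# Sanity check of the examiner stats above.
# Assumptions: allow rate = granted / resolved; the interview lift is
# added to that baseline; the dashboard rounds to whole percents.
granted, resolved = 462, 732

allow_rate = granted / resolved               # 0.631 -> shown as "63%"
interview_lift = -0.069                       # the -6.9% lift shown above
with_interview = allow_rate + interview_lift  # 0.562 -> shown as "56%"

print(f"Career allow rate: {allow_rate:.1%}")      # 63.1%
print(f"With interview:    {with_interview:.1%}")  # 56.2%
```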

Statute-Specific Performance

§101: 8.7% (-31.3% vs TC avg)
§103: 46.9% (+6.9% vs TC avg)
§102: 33.5% (-6.5% vs TC avg)
§112: 4.1% (-35.9% vs TC avg)
Tech Center averages are estimates • Based on career data from 732 resolved cases
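The "vs TC avg" deltas also pin down the hidden baseline: each Tech Center average can be recovered as the examiner's rate minus the reported delta. A quick reconstruction (assuming that sign convention; values in percent):

```python
# Reconstruct the implied Tech Center baselines from the chart above,
# assuming delta = examiner rate - TC average (values in percent).
examiner = {"§101": 8.7, "§103": 46.9, "§102": 33.5, "§112": 4.1}
delta    = {"§101": -31.3, "§103": +6.9, "§102": -6.5, "§112": -35.9}

for statute, rate in examiner.items():
    tc_avg = rate - delta[statute]
    print(f"{statute}: examiner {rate}% vs TC avg ~{tc_avg:.1f}%")
```

Notably, all four deltas resolve to the same ~40% baseline, consistent with the chart comparing every statute against a single Tech Center average estimate rather than per-statute baselines.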

Office Action

§103, §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 21-40 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-10 of U.S. Patent No. 12,094,493 B2. Although the claims at issue are not identical, they are not patentably distinct from each other, as shown in the claim mapping below (Instant Application vs. U.S. Patent No. 12,094,493 B2).

Instant claims 21, 25, 27, 28, 33, 34:
21. A method comprising: receiving, by a media processing platform, a first dataset comprising media content recorded by a first recording device; receiving, by the media processing platform, a second dataset comprising information obtained from a second recording device; synchronizing data of the received first and the second datasets based on corresponding timestamp information associated with the first and the second datasets; determining whether an event occurred based on the synchronized data; generating an index corresponding to the event, the index comprising time information associated with the event; and appending the generated index to the media content via an event data track.

25. The method of claim 21, wherein the second recording device comprises an external device or a sensor installed within the first recording device.

27. The method of claim 26, further comprising receiving contextual data generated by a third-party service provider based on additional sensor data associated with the media content.

28. The method of claim 27, wherein the contextual data comprises vehicle data received from an onboard computer of a vehicle.

33. The method of claim 21, wherein an indication of the generated index is stored in a database table that is mapped to the media content.

34. The method of claim 33, further comprising storing the indexed media content in the database table.

U.S. Patent No. 12,094,493 B2, claim 1:

1. A method comprising: receiving, from at least one recording device assigned to a person, a media content; receiving sensor data determined to correspond to the media content; receiving contextual data that are associated with the media content from a third-party service provider, the contextual data being generated by the third-party service provider based on additional sensor data that is received by an onboard computer of a vehicle assigned to the person and provided by the onboard computer to the third-party service provider; identifying one or more data patterns of the sensor data and the contextual data, wherein the data patterns are based on one or more of a movement of the recording device, an audio cue of one or more of the sensor data and the contextual data, or a particular object depicted in the sensor data and the contextual data; correlating the data patterns to at least one event included in the media content; generating an index corresponding to an identified event of the at least one event, the index comprising at least a begin time for the identified event; appending the index to the media content, wherein the index is appended to the media content via an event track that is separate from an audio track and video track of the media content; and storing the indexed media content.

Instant claim 22 vs. patent claim 2:

22. The method of claim 21, wherein the media content is received from the first recording device in substantial real-time as streaming data.
2. The method of claim 1, wherein the media content is received from the at least one recording device in substantial real-time as streaming data.

Instant claim 23 vs. patent claim 3:

23. The method of claim 21, wherein the media content is received from the first recording device as an upload after the first recording device has finished recording.
3. The method of claim 1, wherein the media content is received from the at least one recording device as an upload after the recording device has finished recording.

Instant claim 24 vs. patent claim 4:

24. The method of claim 21, wherein the media content comprises at least one of video data or audio data.
4. The method of claim 1, wherein the media content comprises at least one of video data or audio data.
Instant claim 26 vs. patent claim 5:

26. The method of claim 21, wherein the information obtained from the second recording device comprises sensor data obtained from at least one of a gyroscope, an accelerometer, or a compass.
5. The method of claim 1, wherein the sensor data comprises data obtained from at least one of a gyroscope, accelerometer, or compass.

Instant claim 29 vs. patent claim 9:

29. The method of claim 28, wherein determining whether the event occurred comprises using at least one trained machine learning model configured to: identify one or more data patterns of the sensor data and the contextual data; correlate the identified one or more data patterns to at least a predefined event included in the media content; generate a corresponding likelihood value; and determine whether the event occurred based on the likelihood value.
9. The computing device of claim 8, wherein the context is determined using a first trained machine learning model and the identified event is identified using a second trained machine learning model.

Claims 35-39 list all similar elements of claims 20-25. Therefore, the supporting rationale of the rejection to claims 20-25 applies equally as well to claims 35-39. Claim 40 lists all similar elements of claim 20. Therefore, the supporting rationale of the rejection to claim 20 applies equally as well to claim 40.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 21-40 are rejected under 35 U.S.C. 103 as being unpatentable over Acharya et al. (US 2021/0124741) in view of Balkus et al. (US 2004/0268224).

Regarding claim 21, Acharya teaches: a method comprising: receiving, by a media processing platform, a first dataset comprising media content recorded by a first recording device (Acharya, ¶¶ 0024, 0026, 0030); receiving, by the media processing platform, a second dataset comprising information obtained from a second recording device (Acharya, ¶¶ 0024, 0026, 0030); synchronizing data of the received first and the second datasets based on corresponding timestamp information associated with the first and the second datasets (Acharya, ¶¶ 0024, 0026, 0030); and determining whether an event occurred based on the synchronized data (Acharya, ¶¶ 0024, 0026, 0030).

However, Acharya fails to explicitly teach, but Balkus teaches: generating an index corresponding to the event, the index comprising time information associated with the event; and appending the generated index to the media content via an event data track (Balkus, ¶¶ 0035-0039, 0041, 0053, Figs. 3 and 7B).
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Balkus with the system of Acharya in order to generate an index corresponding to the event, the index comprising time information associated with the event, and to append the generated index to the media content via an event data track; doing so "enables specification of the temporal and spatial relationships among the media and playback of the presentation with the specified temporal and spatial relationships" (Balkus, Abstract).

Regarding claim 22, Acharya and Balkus teach the method of claim 21; furthermore, Acharya teaches wherein the media content is received from the first recording device in substantial real-time as streaming data (Acharya, ¶ 0026).

Regarding claim 23, Acharya and Balkus teach the method of claim 21; furthermore, Acharya teaches wherein the media content is received from the first recording device as an upload after the first recording device has finished recording (Acharya, ¶¶ 0024, 0026, 0030).

Regarding claim 24, Acharya and Balkus teach the method of claim 21; furthermore, Acharya teaches wherein the media content comprises at least one of video data or audio data (Acharya, ¶¶ 0024, 0026, 0030).

Regarding claim 25, Acharya and Balkus teach the method of claim 21; furthermore, Acharya teaches wherein the second recording device comprises an external device or a sensor installed within the first recording device (Acharya, ¶¶ 0024, 0026, 0030).

Regarding claim 26, Acharya and Balkus teach the method of claim 21; furthermore, Acharya teaches wherein the information obtained from the second recording device comprises sensor data obtained from at least one of a gyroscope, an accelerometer, or a compass (Acharya, Fig. 3B, elements 310 and 312).

Regarding claim 27, Acharya and Balkus teach the method of claim 26; furthermore, Acharya teaches further comprising receiving contextual data generated by a third-party service provider based on additional sensor data associated with the media content (Acharya, ¶¶ 0036, 0054-0055).

Regarding claim 28, Acharya and Balkus teach the method of claim 27; furthermore, Acharya teaches wherein the contextual data comprises vehicle data received from an onboard computer of a vehicle (Acharya, ¶ 0036).

Regarding claim 29, Acharya and Balkus teach the method of claim 28; furthermore, Acharya teaches wherein determining whether the event occurred comprises using at least one trained machine learning model configured to: identify one or more data patterns of the sensor data and the contextual data; correlate the identified one or more data patterns to at least a predefined event included in the media content; generate a corresponding likelihood value; and determine whether the event occurred based on the likelihood value (Acharya, ¶¶ 0029, 0049, 0056).

Regarding claim 30, Acharya and Balkus teach the method of claim 29; furthermore, Acharya teaches wherein the at least one trained machine learning model is further configured to compare the likelihood value to a predetermined threshold likelihood value (Acharya, ¶¶ 0029, 0049, 0056).

Regarding claim 31, Acharya and Balkus teach the method of claim 29; furthermore, Acharya teaches wherein a trained machine learning model of the at least one trained machine learning model is configured to determine a context associated with the media content (Acharya, ¶¶ 0029, 0049, 0056).
Regarding claim 32, Acharya and Balkus teach the method of claim 29; furthermore, Acharya teaches wherein a trained machine learning model of the one or more trained machine learning models is configured to identify the predefined event based on information associated with the media content (Acharya, ¶¶ 0029, 0049, 0056).

Regarding claim 33, Acharya and Balkus teach the method of claim 21; furthermore, Acharya teaches wherein an indication of the generated index is stored in a database table that is mapped to the media content (Acharya, ¶¶ 0087, 0090).

Regarding claim 34, Acharya and Balkus teach the method of claim 33; furthermore, Acharya teaches further comprising storing the indexed media content in the database table (Acharya, ¶¶ 0087, 0090).

Claims 35-39 list all similar elements of claims 20-25. Therefore, the supporting rationale of the rejection to claims 20-25 applies equally as well to claims 35-39. Claim 40 lists all similar elements of claim 20. Therefore, the supporting rationale of the rejection to claim 20 applies equally as well to claim 40.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DANIEL T. TEKLE, whose telephone number is (571) 270-1117. The examiner can normally be reached Monday-Friday, 8:00-4:30 ET.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, William Vaughn, can be reached at 571-272-3922. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DANIEL T TEKLE/
Primary Examiner, Art Unit 2481
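To make the claim language concrete, below is a minimal illustrative sketch of the pipeline recited in instant claim 21, with the likelihood test of claims 29-30 folded in. All names, the scoring stub, and the 0.8 threshold are hypothetical illustrations, not code from the application or the cited references:

```python
# Illustrative sketch only: synchronize two timestamped datasets,
# detect events via a likelihood threshold, and append index entries
# to a separate event data track (per instant claims 21, 29, 30).
from dataclasses import dataclass, field

@dataclass
class MediaContent:
    frames: dict[int, bytes]  # timestamp -> recorded frame (first dataset)
    event_track: list[dict] = field(default_factory=list)  # separate event data track

def synchronize(media: dict[int, bytes],
                sensor: dict[int, float]) -> dict[int, tuple[bytes, float]]:
    """Align the two datasets on their shared timestamps (claim 21)."""
    return {t: (media[t], sensor[t]) for t in media.keys() & sensor.keys()}

def event_likelihood(sensor_value: float) -> float:
    """Stand-in for the trained ML model of claim 29; returns a likelihood."""
    return min(1.0, abs(sensor_value) / 10.0)  # hypothetical scoring rule

def index_events(content: MediaContent,
                 sensor: dict[int, float],
                 threshold: float = 0.8) -> None:
    """Detect events in the synchronized data and append time-stamped
    index entries to the media content via the event data track."""
    for t, (_, value) in sorted(synchronize(content.frames, sensor).items()):
        likelihood = event_likelihood(value)
        if likelihood >= threshold:  # claim 30: compare to a threshold value
            content.event_track.append({"begin_time": t, "likelihood": likelihood})

# Example: a spike in the sensor stream at t=2 becomes an indexed event.
clip = MediaContent(frames={0: b"f0", 1: b"f1", 2: b"f2"})
index_events(clip, sensor={0: 0.1, 1: 0.2, 2: 9.5})
print(clip.event_track)  # [{'begin_time': 2, 'likelihood': 0.95}]
```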

Prosecution Timeline

Aug 16, 2024
Application Filed
Jul 26, 2025
Non-Final Rejection — §103, §DP
Oct 28, 2025
Response Filed
Dec 19, 2025
Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602804
Method for Processing Three-dimensional Scanning, Three-dimensional Scanning Device, and Computer-readable Storage Medium
2y 5m to grant • Granted Apr 14, 2026
Patent 12603969
PARKING VIDEO RECORDING DEVICE, A TELEMATICS SERVER AND A METHOD FOR RECORDING A PARKING VIDEO
2y 5m to grant • Granted Apr 14, 2026
Patent 12587615
MULTI-STREAM PEAK BANDWIDTH DISPERSAL
2y 5m to grant • Granted Mar 24, 2026
Patent 12573430
INTERACTIVE VIDEO ACCESSIBILITY COMPLIANCE SYSTEMS AND METHODS
2y 5m to grant • Granted Mar 10, 2026
Patent 12548219
SYSTEM AND METHOD FOR HIGH-RESOLUTION 3D IMAGES USING LASER ABLATION AND MICROSCOPY
2y 5m to grant • Granted Feb 10, 2026
Study what changed to get these applications past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 63%
With Interview: 56% (-6.9%)
Median Time to Grant: 3y 4m
PTA Risk: Moderate
Based on 732 resolved cases by this examiner. Grant probability derived from career allow rate.
