Prosecution Insights
Last updated: April 18, 2026
Application No. 18/933,164

SYSTEM FOR AUTOMATICALLY TRIGGERING A RECORDING

Final Rejection — §103, §DP
Filed: Oct 31, 2024
Examiner: ALCON, FERNANDO
Art Unit: 2425
Tech Center: 2400 — Computer Networks
Assignee: Digital Ally Inc.
OA Round: 2 (Final)

Grant Probability: 73% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 5m
With Interview: 82%

Examiner Intelligence

Career Allow Rate: 73% — above average (529 granted / 725 resolved; +15.0% vs TC avg)
Interview Lift: +8.9% — moderate lift, for resolved cases with interview
Typical Timeline: 2y 5m avg prosecution
Career History: 745 total applications across all art units (20 currently pending)

Statute-Specific Performance

§101: 5.5% (-34.5% vs TC avg)
§103: 58.0% (+18.0% vs TC avg)
§102: 15.0% (-25.0% vs TC avg)
§112: 9.6% (-30.4% vs TC avg)
Tech Center averages are estimates • Based on career data from 725 resolved cases
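The per-statute figures above are internally consistent: subtracting each reported delta from the examiner's rate recovers the Tech Center baseline. A minimal sketch of that check, using only the values shown on this card:

```python
# Examiner allow rate per statute and delta vs. the Tech Center average,
# as listed above (all values in percent).
examiner_rate = {"101": 5.5, "103": 58.0, "102": 15.0, "112": 9.6}
delta_vs_tc = {"101": -34.5, "103": 18.0, "102": -25.0, "112": -30.4}

# Implied Tech Center average = examiner rate minus the reported delta.
implied_tc_avg = {s: round(examiner_rate[s] - delta_vs_tc[s], 1)
                  for s in examiner_rate}
print(implied_tc_avg)  # every statute implies the same 40.0% baseline
```

All four rows imply a single 40.0% Tech Center baseline, which matches the footnote's description of the baseline as one estimate rather than per-statute data.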

Office Action

§103, §DP
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1, 9, and 15 are rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent No. 10,911,725. Although the claims at issue are not identical, they are not patentably distinct from each other because the features of the present claims are anticipated in the features of the corresponding claims in the parent patent. Claims 1, 9, and 15 are also rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent No. 10,911,725 in view of Siegler, II et al. (US 2011/0169637).

Present Claims

1.
A system for automatically recording an event, the system comprising: one or more sensors; a video camera configured to collect video data; at least one processor; and one or more non-transitory computer-readable media storing computer-executable instructions that, when executed by the at least one processor, perform a method of automatically recording the event, the method comprising: receiving from a sensor of the one or more sensors an indication that one or more indicia was detected; determining a location of the sensor; obtaining identification data based on the one or more indicia; activating the video camera at or proximate the location of the sensor to record the video data; and storing at least the location of the sensor and the video data from the video camera.

U.S. Patent No. 10,911,725

1. A system for automatically recording an event, comprising: a sensor configured to collect a set of sensor data, wherein the set of sensor data is indicative of at least one of pressure, force, stress, strain, time, temperature, speed, acceleration, location, magnetism, voltage, and current; a video camera configured to collect a set of video data; a recording device manager, comprising: a data store; a processor; and one or more non-transitory computer-readable media storing computer-executable instructions, that, when executed by the processor, perform the steps of: receiving the set of sensor data from the sensor; detecting a triggering event from the set of sensor data; determining, by the processor, that the triggering event is a low-level triggering event; transmitting wirelessly, in response to the detection of the triggering event, a signal from the recording device manager to the video camera, wherein the signal instructs the video camera to begin collecting the set of video data to track the triggering event; performing at least one of object recognition and facial recognition on the video data to determine a threat level of the triggering event; storing, in a set of metadata data, the set of sensor data collected at the sensor; determining, by the processor, that the triggering event is a threat; storing information indicative of the at least one object recognition and facial recognition to the set of metadata data based on the determination that the triggering event is a threat; embedding the set of metadata data in the video data at the recording device manager; and storing, in the data store, the set of video data including the embedded set of metadata data.

U.S. Patent No. 10,911,725 claim 1 does not explicitly disclose determining a location of the sensor; activating the video camera at or proximate the location of the sensor; and storing the location of the sensor as metadata and the video data from the video camera. Siegler discloses that it was known to determine a sensor location and to activate a video camera at or proximate a location of the sensor (See [0038], implementing an alert response including activating a video camera to record activities within or proximate to the detection location; see [0035], sensor notification includes location or position of the sensor that generated the notification and/or location of the triggering event). Prior to the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to modify the known system of the ’725 patent claims with the known methods of Siegler, predictably resulting in determining a location of the sensor; activating the video camera at or proximate the location of the sensor; and storing the location of the sensor as metadata and the video data from the video camera, by applying the court-recognized rationale of applying a known technique to a known device (method, or product) ready for improvement to yield predictable results. The modification would have the benefit of recording relevant video and location information in a vicinity of a triggered sensor, as suggested by Siegler.
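For orientation, the steps recited in present claim 1 (receive an indication that indicia were detected, determine the sensor's location, obtain identification data, activate a camera at or near that location, and store the location with the video) can be sketched as a small simulation. This is illustrative only; every name and sample value here (RecordingSystem, on_indicia_detected, "dock-3", etc.) is hypothetical and not from the application or the cited art:

```python
from dataclasses import dataclass, field

@dataclass
class RecordingSystem:
    """Hypothetical sketch of the claimed trigger-record-store flow."""
    sensor_locations: dict                  # sensor id -> location (assumed lookup)
    id_database: dict                       # indicia -> identification data (assumed)
    storage: list = field(default_factory=list)

    def on_indicia_detected(self, sensor_id, indicia):
        location = self.sensor_locations[sensor_id]   # determine sensor location
        ident = self.id_database.get(indicia)         # obtain identification data
        video = self.activate_camera(location)        # record at/near that location
        record = {"location": location,
                  "identification": ident,
                  "video": video}
        self.storage.append(record)                   # store location + video
        return record

    def activate_camera(self, location):
        # Stand-in for starting a camera at or proximate the sensor's location.
        return f"video@{location}"

# Hypothetical usage with made-up sensor and indicia data.
demo = RecordingSystem(sensor_locations={"s1": "dock-3"},
                       id_database={"QR123": "driver-42"})
record = demo.on_indicia_detected("s1", "QR123")
```

The sketch makes the dispute concrete: the examiner maps everything except the location-determination and location-keyed storage steps to Enright, and relies on Siegler for those.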
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 3-10, and 15-19 are rejected under 35 U.S.C. 103 as being unpatentable over Enright et al. (US 6,583,813) in view of Siegler, II et al. (US 2011/0169637) and in further view of Sarhan (WO 2010/144566 A1).
Regarding claim 1, Enright discloses a system for automatically recording an event, the system comprising: one or more sensors (See hard device sensor Col 40 lines 20-65); a video camera configured to collect video data (See Col 11 lines 20-60, cameras); at least one processor (See Col 12 lines 3-36, computer/server); and one or more non-transitory computer-readable media storing computer-executable instructions that, when executed by the at least one processor, perform a method of automatically recording the event (See Col 12 lines 1-40 and Col 13 line 35-Col 14 line 65), the method comprising: receiving from a sensor of the one or more sensors an indication that one or more indicia was detected (See Col 54 lines 20-50, sensors detecting presence of a user causing a triggering event; the input causes a sequence to begin including capturing images from cameras); obtaining identification data based on the one or more indicia (See Col 24 lines 45-67, Col 25 line 13-Col 26 line 67, Col 32 lines 17-35, facial recognition of captured images); activating the video camera to record video data (See Col 24 lines 45-67, capture of images from various cameras on a continuing basis); and storing at least the video data (See Col 6 lines 15-30, making images searchable based on time periods; see Col 14, storing image data with file records including transaction data such as time associated with activity; see Col 12 lines 15-50, sensing conditions; see Col 40 line 39-Col 41 line 36; see Col 20 lines 40-55, time and date of transaction recorded).

Enright does not explicitly disclose determining a location of the sensor; activating the video camera at or proximate the location of the sensor; and storing the location of the sensor as metadata and the video data from the video camera.
Siegler discloses that it was known to determine a sensor location and to activate a video camera at or proximate a location of the sensor (See [0038], implementing an alert response including activating a video camera to record activities within or proximate to the detection location; see [0035], sensor notification includes location or position of the sensor that generated the notification and/or location of the triggering event). Prior to the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to modify the known system of Enright with the known methods of Siegler, predictably resulting in determining a location of the sensor; activating the video camera at or proximate the location of the sensor; and storing the location of the sensor as metadata and the video data from the video camera, by applying the court-recognized rationale of applying a known technique to a known device (method, or product) ready for improvement to yield predictable results. The modification would have the benefit of recording relevant video and location information in a vicinity of a triggered sensor, as suggested by Siegler.

Enright does not disclose determining a threat level associated with the identification data and the video data. Sarhan discloses that it was known to determine a threat level associated with identification of a subject and video data (See [0018], face matching a person on a watch list; see also [0022]-[0023], [0033]). Prior to the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to further modify the combination with the known methods of Sarhan, predictably resulting in determining a threat level associated with the identification data and the video data, by applying the court-recognized rationale of applying a known technique to a known device (method, or product) ready for improvement to yield predictable results.
The modification would have the benefit of adjusting parameters based on a threat level, as suggested by Sarhan.

Regarding claim 9, Enright discloses one or more non-transitory computer-readable media storing computer-executable instructions that, when executed by at least one processor, perform a method of automatically recording an event (See Col 12 lines 1-40 and Col 13 line 35-Col 14 line 65), the method comprising: detecting an indicia by a sensor (See Col 54 lines 20-50, sensors detecting presence of a user; Col 11 lines 20-56; Col 40 lines 25-55); obtaining identification data based on the one or more indicia (See Col 24 lines 45-67, Col 25 line 13-Col 26 line 67, Col 32 lines 17-35, facial recognition of captured images), wherein the identification data is indicative of an identity of a user (See Col 24 lines 45-67, Col 25 line 13-Col 26 line 67, Col 32 lines 17-35, facial recognition of captured images); transmitting, by the sensor and to a recording device manager (See Col 12 lines 3-36, image recorder receives image data from cameras), an indication that the one or more indicia was detected by the sensor (See Col 54 lines 20-50, sensors detecting presence of a user; see also Col 24 lines 46-67, Col 25 lines 1-67); and activating at least one video camera to record video data (See Col 24 lines 45-67, capture of images from various cameras on a continuing basis).

Enright does not explicitly disclose determining a location of the sensor; activating the video camera at or proximate the location of the sensor; and storing the location of the sensor as metadata and the video data from the video camera. Siegler discloses that it was known to determine a sensor location and to activate a video camera at or proximate a location of the sensor (See [0038], implementing an alert response including activating a video camera to record activities within or proximate to the detection location; see [0035], sensor notification includes location or position of the sensor that generated the notification and/or location of the triggering event). Prior to the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to modify the known system of Enright with the known methods of Siegler, predictably resulting in determining a location of the sensor; activating the video camera at or proximate the location of the sensor; and storing the location of the sensor as metadata and the video data from the video camera, by applying the court-recognized rationale of applying a known technique to a known device (method, or product) ready for improvement to yield predictable results. The modification would have the benefit of recording relevant video and location information in a vicinity of a triggered sensor, as suggested by Siegler.

Enright does not disclose determining a threat level associated with the identification data and the video data. Sarhan discloses that it was known to determine a threat level associated with identification of a subject and video data (See [0018], face matching a person on a watch list; see also [0022]-[0023], [0033]). Prior to the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to further modify the combination with the known methods of Sarhan, predictably resulting in determining and storing a threat level associated with the identification data and the video data, by applying the court-recognized rationale of applying a known technique to a known device (method, or product) ready for improvement to yield predictable results. The modification would have the benefit of adjusting parameters based on a threat level, as suggested by Sarhan.
Regarding claim 15, Enright discloses a method of automatically recording an event, the method comprising: detecting the event by a first sensor and recording first sensor data (See hard device sensor Col 40 lines 20-65; Col 54 lines 20-50, sensors detecting presence of a user; Col 11 lines 20-56; Col 40 lines 25-55; Col 25 lines 1-67, analyzing and storing the image data); transmitting, by the first sensor and to a recording device manager (See Col 12 lines 3-36, image recorder receives image data from cameras), an indication that the event was detected by the first sensor (See Col 54 lines 20-50, sensors detecting presence of a user; see also Col 24 lines 46-67, Col 25 lines 1-67); and activating at least one second sensor to record second sensor data (See Col 24 lines 45-67, capture of images from various cameras on a continuing basis).

Enright does not explicitly disclose determining a location of the sensor; activating the video camera at or proximate the location of the sensor; and storing the location of the sensor as metadata and the video data from the video camera. Siegler discloses that it was known to determine a sensor location and to activate a video camera at or proximate a location of the sensor (See [0038], implementing an alert response including activating a video camera to record activities within or proximate to the detection location; see [0035], sensor notification includes location or position of the sensor that generated the notification and/or location of the triggering event). Prior to the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to modify the known system of Enright with the known methods of Siegler, predictably resulting in determining a location of the sensor; activating the video camera at or proximate the location of the sensor; and storing the location of the sensor as metadata and the video data from the video camera, by applying the court-recognized rationale of applying a known technique to a known device (method, or product) ready for improvement to yield predictable results. The modification would have the benefit of recording relevant video and location information in a vicinity of a triggered sensor, as suggested by Siegler.

Enright does not disclose analyzing the first sensor data and the second sensor data to determine a threat level. Sarhan discloses that it was known to determine a threat level associated with identification of a subject and video data (See [0018], face matching a person on a watch list; see also [0022]-[0023], [0033]). Prior to the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to further modify the combination with the known methods of Sarhan, predictably resulting in determining and storing a threat level associated with the first sensor data and the second sensor data, by applying the court-recognized rationale of applying a known technique to a known device (method, or product) ready for improvement to yield predictable results. The modification would have the benefit of adjusting parameters based on a threat level, as suggested by Sarhan.
Regarding claim 3, Enright, Siegler, and Sarhan disclose the system of claim 1, wherein the sensor is a first sensor, further comprising a second sensor, wherein the method further comprises activating the second sensor or storing second sensor data based on the indication that the one or more indicia was detected (See Siegler [0041], using multiple sensors to triangulate a location).

Regarding claim 4, Enright, Siegler, and Sarhan disclose the system of claim 1, wherein the location of the sensor is determined by indicia data associated with the one or more indicia and stored in a database (See Siegler [0035], location information in sensor notification; see Enright Col 6 lines 15-30, making images searchable based on time periods; see Enright Col 14, storing image data with file records including transaction data such as time associated with activity).

Regarding claim 5, Enright, Siegler, and Sarhan further disclose the system of claim 1, wherein the method further comprises: notifying a third party that the one or more indicia was detected; and transmitting an image of a package associated with the one or more indicia to the third party (See Enright Col 21 lines 20-41, acquiring image data and sending a message to police or other authorities; see also Col 28 lines 35-50; see Col 44 lines 50-65, emailing messages and/or images).

Regarding claim 6, Enright, Siegler, and Sarhan further disclose the system of claim 1, wherein the method further comprises: determining a time that the one or more indicia was scanned; and storing the time and the location as metadata with the video data (See Siegler [0035], location information in sensor notification; see Enright Col 6 lines 15-30, making images searchable based on time periods; see Enright Col 14, storing image data with file records including transaction data such as time associated with activity).
Regarding claim 7, Enright, Siegler, and Sarhan further disclose the system of claim 1, wherein the method further comprises activating at least one electromechanical actuator based on the one or more indicia and the location of the sensor (See Enright Col 20 lines 5-25, automatic locking system).

Regarding claim 8, Enright, Siegler, and Sarhan further disclose the system of claim 7, wherein the at least one electromechanical actuator actuates a lock (See Enright Col 20 lines 5-25, automatic locking system).

Regarding claim 10, Enright, Siegler, and Sarhan further disclose the media of claim 9, wherein the method further comprises actuating at least one electromechanical actuator at or proximate the sensor (See Enright Col 20 lines 5-25, automatic locking system, and Fig. 1, the system located at an ATM); analyzing the video data to determine a person identity (See Sarhan [0018], face matching); and determining the threat level based on comparing the person identity and the identification data (See Sarhan, determining a facial match to a person on a watch list or to a specific person, [0018] [0022]).

Regarding claim 16, Enright, Siegler, and Sarhan disclose the method of claim 15, wherein the event is one of detection of an indicia, detection of a short-range communication tag, recognition of an object in video data, or detection of actuation of an electromechanical actuator (See Enright Col 24 lines 46-67, detecting facial features of criminals, missing persons).

Regarding claim 17, Enright, Siegler, and Sarhan disclose the method of claim 15, further comprising: notifying a third party of the detecting of the event; and transmitting an event indication indicative of the event to the third party (See Enright Col 21 lines 20-41, acquiring image data and sending a message to police or other authorities; see also Col 28 lines 35-50; see Col 44 lines 50-65, emailing messages and/or images).
Regarding claim 18, Enright, Siegler, and Sarhan disclose the method of claim 17, further comprising activating a video camera to record video data proximate the location of the first sensor (See Siegler [0038], implementing an alert response including activating a video camera to record activities within or proximate to the detection location; see Siegler [0035], sensor notification includes location or position of the sensor that generated the notification and/or location of the triggering event).

Regarding claim 19, Enright, Siegler, and Sarhan disclose the method of claim 18, further comprising storing the event indication and the location as metadata with the video data (See Siegler [0035], location information in sensor notification; see Enright Col 6 lines 15-30, making images searchable based on time periods; see Enright Col 14, storing image data with file records including transaction data such as time associated with activity).

Claims 2 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Enright et al. (US 6,583,813) in view of Siegler, II et al. (US 2011/0169637), in view of Sarhan (WO 2010/144566 A1), and in view of Dannan et al. (US 2016/0173827 A1).

Regarding claims 2 and 20, Enright and Siegler disclose the system of claim 1, but do not explicitly disclose wherein the location of the sensor is determined by a GPS receiver or a short-range communication tag associated with the sensor. Dannan discloses that it was known to determine a location of a sensor using GPS (See [0038], GPS locators on IP cameras; see [0041], GPS location device on IP camera enables self geo-location). Prior to the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to modify the combination with the known methods of Dannan, predictably resulting in the location of the sensor being determined by a GPS receiver or a short-range communication tag associated with the sensor, by applying the court-recognized rationale of applying a known technique to a known device (method, or product) ready for improvement to yield predictable results. The modification would have the benefit of deriving location data using well-known geo-location techniques.

Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Enright et al. (US 6,583,813) in view of Siegler, II et al. (US 2011/0169637), in view of Sarhan (WO 2010/144566 A1), and in view of Lee et al. (US 2015/0176733).

Regarding claim 11, Enright, Siegler, and Sarhan disclose the media of claim 10, but do not explicitly disclose wherein the actuating is initiated by the recording device manager based on the detecting of the indicia. Lee discloses that it was known to transmit wirelessly (See [0055], first and second cameras connected in wireless or wired manner), in response to detection of a triggering event, a signal from a recording device manager to a video camera, wherein the signal instructs the second sensor to begin collecting the second set of data (See [0055]-[0060] and [0069]). Prior to the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to modify the combination with the known methods of Lee, predictably resulting in the actuating being initiated by the recording device manager based on the detecting of the indicia, by applying the court-recognized rationale of applying a known technique to a known device (method, or product) ready for improvement to yield predictable results. The modification would have the benefit of ensuring all relevant triggering activity is captured in the viewing areas.
Claims 12-14 are rejected under 35 U.S.C. 103 as being unpatentable over Enright et al. (US 6,583,813) in view of Siegler, II et al. (US 2011/0169637), in view of Sarhan (WO 2010/144566 A1), in view of Lee et al. (US 2015/0176733), and in view of Harel (US 2010/0246669 A1).

Regarding claim 12, Enright, Siegler, Sarhan, and Lee disclose the media of claim 11, but do not explicitly disclose wherein the sensor comprises a short-range communication device and the transmitting by the sensor is by short-range communication. Harel discloses a short-range communication device and transmitting by the sensor by short-range communication (See [0048], short range communications such as Bluetooth and Wi-Fi; triggering upload based). Prior to the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to modify the combination with the known methods of Harel, predictably resulting in a short-range communication device and the transmitting by the sensor being by short-range communication, by applying the court-recognized rationale of applying a known technique to a known device (method, or product) ready for improvement to yield predictable results. The modification would have the benefit of ensuring all relevant triggering activity is captured in the viewing areas.

Regarding claim 13, Enright, Siegler, Sarhan, Lee, and Harel disclose the media of claim 12, further comprising actuating the at least one electromechanical actuator by transmission over the short-range communication (See Enright Col 20 lines 5-25, automatic locking system).
Regarding claim 14, Enright, Siegler, Sarhan, Lee, and Harel disclose the media of claim 13, further comprising determining a first time that the indicia was detected and a second time that the at least one electromechanical actuator was actuated; and storing the first time, the second time, and the location as metadata with the video data (See Enright Col 14, storing image data with file records including transaction data such as time associated with activity).

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to FERNANDO ALCON, whose telephone number is (571) 270-5668. The examiner can normally be reached Monday-Friday, 9:00am-7:00pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Brian Pendleton, can be reached at (571) 272-7527.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/FERNANDO ALCON/
Primary Examiner, Art Unit 2425

Prosecution Timeline

Oct 31, 2024
Application Filed
Oct 03, 2025
Non-Final Rejection — §103, §DP
Feb 25, 2026
Interview Requested
Mar 04, 2026
Examiner Interview Summary
Mar 04, 2026
Examiner Interview (Telephonic)
Mar 06, 2026
Response Filed
Apr 06, 2026
Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597166 — INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND RECORDING MEDIUM (granted Apr 07, 2026; 2y 5m to grant)
Patent 12588580 — RESIDUE MONITORING AND RESIDUE-BASED CONTROL (granted Mar 31, 2026; 2y 5m to grant)
Patent 12581154 — METHOD, DEVICE, STORAGE MEDIUM AND PROGRAM PRODUCT FOR VIDEO INFORMATION DISPLAY (granted Mar 17, 2026; 2y 5m to grant)
Patent 12574601 — PROGRAM RECEIVING DISPLAY DEVICE AND PROGRAM RECEIVING DISPLAY CONTROL METHOD (granted Mar 10, 2026; 2y 5m to grant)
Patent 12574594 — SYSTEMS AND METHODS FOR CONTROLLING MEDIA CONTENT BASED ON USER PRESENCE (granted Mar 10, 2026; 2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 73%
With Interview: 82% (+8.9%)
Median Time to Grant: 2y 5m
PTA Risk: Moderate
Based on 725 resolved cases by this examiner. Grant probability derived from career allow rate.
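As the note says, the grant probability is the career allow rate; the other headline numbers follow from simple arithmetic on the career data above. A quick check (assuming, as the card implies, that the interview lift is simply added to the base rate):

```python
# Career data from the examiner card: 529 granted of 725 resolved,
# with 20 applications currently pending.
granted, resolved, pending = 529, 725, 20

allow_rate_pct = 100 * granted / resolved   # career allow rate in percent
assert resolved + pending == 745            # matches Total Applications

print(round(allow_rate_pct))                # base Grant Probability
print(round(allow_rate_pct + 8.9))          # With Interview, additive lift
```

529/725 rounds to the 73% shown, and adding the +8.9% interview lift rounds to the 82% "With Interview" figure, so the projections are internally consistent with the career data.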
