Prosecution Insights
Last updated: April 19, 2026
Application No. 18/947,426

EVENT DETECTION MODULE CONTROL METHOD IN PARKING RECORDING MODE FOR REDUCING POWER CONSUMPTION, EVENT DETECTION MODULE CONTROL SYSTEM, AND COMPUTER-READABLE RECORDING MEDIUM

Non-Final OA (§102, §103)

Filed: Nov 14, 2024
Examiner: DHILLON, PUNEET S
Art Unit: 2488
Tech Center: 2400 — Computer Networks
Assignee: Thinkware Corporation
OA Round: 1 (Non-Final)

Grant Probability: 83% (Favorable)
Expected OA Rounds: 1-2
Median Time to Grant: 2y 6m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 83% (above average; 232 granted / 281 resolved; +24.6% vs TC avg)
Interview Lift: +18.4% across resolved cases with interview
Typical Timeline: 2y 6m average prosecution; 41 applications currently pending
Total Applications: 322 across all art units

Statute-Specific Performance

§101: 5.4% (-34.6% vs TC avg)
§103: 49.1% (+9.1% vs TC avg)
§102: 17.5% (-22.5% vs TC avg)
§112: 24.9% (-15.1% vs TC avg)
Tech Center averages are estimates. Based on career data from 281 resolved cases.
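The headline figures above are simple derivations from the raw counts shown on this page. As a sanity check, here is a minimal sketch; the function names are illustrative, and the ~58% Tech Center average is an assumption back-solved from the displayed +24.6% delta:

```python
# Sanity-check derivation of the examiner statistics from the raw counts
# shown on this page (232 granted / 281 resolved). The 58.0% Tech Center
# average is an assumption inferred from the displayed +24.6% delta.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def delta_vs_tc(examiner_rate: float, tc_avg: float) -> float:
    """Signed difference between the examiner's rate and the TC average."""
    return examiner_rate - tc_avg

rate = allow_rate(232, 281)               # ~82.6%, displayed as 83%
print(round(rate))                        # 83
print(round(delta_vs_tc(rate, 58.0), 1))  # 24.6
```

The displayed 83% is the rounded ratio of granted to resolved cases; the per-statute percentages in the table above are presumably computed the same way over the subset of cases rejected under each statute.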

Office Action

Grounds: §102, §103
DETAILED ACTION Notice of Pre-AIA or AIA Status The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA . Election/Restrictions Applicant’s election without traverse of claims 1-13 in the reply filed on 02/23/2026 is acknowledged. Claim Interpretation The following is a quotation of 35 U.S.C. 112(f): (f) ELEMENT IN CLAIM FOR A COMBINATION.—An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof. The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph: An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof. The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 
112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as "configured to" or "so that"; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function. Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function. Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. 
Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) are: imaging unit (also found in claims 12-13), event detection unit (also found in claims 5, 8-10, 12-13), parking environment analysis unit (also found in claims 3-4, 11), image processing unit, power supply unit, controller (also found in claims 5-10, 12-13) in claim 1. Additional claim limitations: detection period adjusting unit (claim 5), output adjusting unit (claims 8-10). Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. 
Claim Rejections - 35 USC § 102 The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention. Claims 1-4, 11 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Smith (US 2018/0204334 A1). As per claim 1, Smith discloses an event detection module control system in a parking recording mode for reducing power consumption in a vehicle (Smith: Abstract.), the event detection module control system comprising: a camera (101) including an imaging unit (301) configured to capture a video, an event detection unit configured to detect an event (detecting objects surrounding a vehicle), a parking environment analysis unit configured to analyze a parking environment (Smith: Paras. [0065]- [0067] disclose detecting objects surrounding a vehicle and analyzing a parking environment), and a camera connector configured to interface with a main body (Smith: Figs. 5-6a & Paras. 
[0009]-[0010], [0032], [0035] disclose each camera pod including two cameras with a lens assembly 301 configured to capture a video and a field programmable gate array (FPGA) that performs coordinated differential detection to identify an approximate position for an object surrounding the vehicle, wherein each camera includes I/O connectors 304 soldered to baseboard 300, which may carry power and signals from CRU 103 [main body].); and the main body (CRU 103) including an image processing unit configured to receive and process the video captured by the imaging unit, a power supply unit configured to supply power for an operation of the event detection module control system, a controller configured to control an operation of the power supply unit, and a main body connector (604) configured to interface with the camera (Smith: Paras. [0027], [0041]-[0042], [0044] disclose a Central Recording Unit 103 that supplies power to pods 101, send instructions to pods 101, receive streaming video and perform CDD. Further, a regulator 607 regulates the battery voltage for feeding power to the camera pods via connectors 604. Furthermore, the FPGA receives video and/or image data and performs the coordinated differential detection.), wherein, in the parking recording mode, the controller analyzes the video captured by the imaging unit through the parking environment analysis unit and controls power of the imaging unit and the event detection unit to be turned on or off according to a parking environment of the vehicle (Smith: Paras. [0049], [0052]-[0053], [0065] disclose in the park mode of operation, images may be sampled and the CDD may be used to determine when to record full video of scenery of interest or trigger full video from the pair of cameras. However, if the object does not reach a threshold, the video may be discarded and the cameras returned to park mode and may shut down completely if the current battery voltage crosses the low-voltage threshold.). 
As per claim 2, Smith discloses the event detection module control system of claim 1, wherein the camera includes a first camera configured to capture a front video of the vehicle, and a second camera configured to capture a rear video of the vehicle, and the first camera and the second camera are independently controlled according to the parking environment of the vehicle (Smith: Fig. 2a & Paras. [0028], [0039], [0041], [0065] disclose controlling each camera's mode of operation and an operation to trigger full video from the pair of cameras in park mode based on detected motion within the environment, wherein the system utilizes differential pairs, one for the front pod and one for the rear.). As per claim 3, Smith discloses the event detection module control system of claim 2, wherein the parking environment analysis unit is controlled to analyze whether there is an object adjacent to a front or rear of the vehicle (Smith: Paras. [0030], [0041], [0062] disclose the FPGA may perform coordinated differential detection for each of the primary and secondary sectors of convergence to identify objects and that secondary sectors of convergence created by non-adjacent cameras correspond to the front and rear areas of the vehicle as shown in FIG. 2b. Therefore, arranging pods with one for the front pod and one for the rear to facilitate this specific coverage.). As per claim 4, Smith discloses the event detection module control system of claim 3, wherein the parking environment analysis unit is controlled to further analyze a distance between the vehicle and the object adjacent to the front or rear of the vehicle (Smith: Paras. [0030], [0042] disclose determining the position of object 225, including its distance from motor vehicle 100 including one for the front pod and one for the rear to surveil those adjacent areas.). 
As per claim 11, Smith discloses the event detection module control system of claim 3, wherein the parking environment analysis unit is controlled to further analyze whether the object adjacent to the front or rear of the vehicle is a fixed structure (Smith Paras. [0066]-[0067] disclose the SBC 613 identifies stationary objects in proximity to a pair of cameras, such as a post or a parked vehicle. A post is considered a fixed structure, and objects in proximity to the cameras (which surveil the area around the motor vehicle, [0009]) correspond to objects adjacent to the front or rear of the vehicle.). Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 5-8 are rejected under 35 U.S.C. 103 as being unpatentable over Smith in view of Hill et al., hereinafter referred to as Hill (WO 2025/088312 A1). As per claim 5, Smith discloses the event detection module control system of claim 4, wherein the main body further includes a detection period adjusting unit configured to adjust a detection period of the event detection unit (Smith: Paras. [0049], [0052] disclose adjusting the frame rate at which the system checks for detected events.), and the controller controls the detection period adjusting unit to adjust (Smith: Paras. [0049], [0052] disclose adjusting the frame rate at which the system checks for detected events.). 
However, Smith does not explicitly disclose “… adjust at least one of a signal frequency modulation time (chirp time), an idle time, and a number of signal frequencies (number of chirps) of the event detection unit according to the distance between the vehicle and the object adjacent to the front or rear of the vehicle.”. Further, Hill is in the same field of endeavor and teaches adjust at least one of a signal frequency modulation time (chirp time), an idle time, and a number of signal frequencies (number of chirps) of the event detection unit according to the distance between the vehicle and the object adjacent to the front or rear of the vehicle (Hill: Pg. 7, ll. 21-37, Pg. 13. ll. 19-33, Pg. 17. ll 20-24, Pg. 18, ll. 21-29 disclose using radar (LFMCW) to detect object distance and movement, and adjusting the radar operation resolution (rates such as 8Hz versus 16Hz) [signal frequencies] and sampling based on threat criteria such as object approach. For example, an object moves towards the system (i.e., the distance decreases), the system switches from Low Power Security Mode (operating at 8Hz) to Security Mode (operating at 16Hz). This transition increases the number of signal frequencies (rate) and reduces the idle time, moving from periodic sampling to continuous or higher resolution sampling.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, and having the teachings of Smith and Hill before him or her, to modify the object event detection system of Smith to include the adjusting signal frequencies according to object-to-vehicle distance feature as described in Hill. The motivation for doing so would have been to improve conservation of network bandwidth by providing a configuration that reduces data transmission by lowering power consumption of the overall system. 
As per claim 6, Smith-Hill disclose the event detection module control system of claim 5, wherein the controller controls to set the signal frequency modulation time (chirp time) to be short, the idle time to be long, or the number of signal frequencies (number of chirps) to decrease as the distance between the vehicle and the object adjacent to the front or rear of the vehicle increases (Hill: Pg. 7, ll. 21-37, Pg. 13. ll. 19-33, Pg. 17. ll 20-24, Pg. 18, ll. 21-29 disclose as an object moves away from the system (i.e., the distance increases), the system switches from Security Mode (operating at 16Hz) to Low Power Security Mode (operating at 8Hz). This transition lowers the number of signal frequencies (rate) and increases the idle time, moving from continuous or higher resolution sampling to periodic sampling.). As per claim 7, Smith-Hill disclose the event detection module control system of claim 5, wherein the controller controls to set the signal frequency modulation time (chirp time) to be long, the idle time to be short, or the number of signal frequencies to increase as the distance between the vehicle and the object adjacent to the front or rear of the vehicle decreases (Hill: Pg. 7, ll. 21-37, Pg. 13. ll. 19-33, Pg. 17. ll 20-24, Pg. 18, ll. 21-29 disclose as an object moves towards the system (i.e., the distance decreases), the system switches from Low Power Security Mode (operating at 8Hz) to Security Mode (operating at 16Hz). This transition increases the number of signal frequencies (rate) and reduces the idle time, moving from periodic sampling to continuous or higher resolution sampling.).
As per claim 8, Smith-Hill disclose the event detection module control system of claim 4, wherein the main body further includes an output adjusting unit configured to adjust an output of the event detection unit, and the controller controls the output adjusting unit to increase or decrease a power intensity of the event detection unit according to the distance between the vehicle and the object adjacent to the front or rear of the vehicle (Smith: Paras. [0049], [0052], [0066]-[0067] disclose controlling the camera pods in a park mode to operate periodically to save power, such as sampling images at 1 frame per second or waking at preset time intervals (e.g., 5-minute intervals) and Hill: Pg. 7, ll. 21-37 disclose the RADAR can detect the presence of an object within a perimeter security region and 'wake up' the device into a higher power state giving more functionality, such as video recording and the like [controls the output adjusting unit to increase or decrease a power intensity of the event detection unit according to the distance between the vehicle and the object adjacent to the front or rear of the vehicle].). Claims 9-10 are rejected under 35 U.S.C. 103 as being unpatentable over Smith in view of Hill in further view of Tabata (US 2020/0369220 A1). As per claim 9, Smith discloses the event detection module control system of claim 8, wherein the controller controls the output adjusting unit to decrease the power intensity of the event detection unit (Smith: Paras. [0049], [0052], [0066]-[0067] disclose controlling the camera pods in a park mode to operate periodically to save power, such as sampling images at 1 frame per second or waking at preset time intervals (e.g., 5-minute intervals) and Hill: Pg. 7, ll. 
21-37 disclose the RADAR can detect the presence of an object within a perimeter security region and 'wake up' the device into a higher power state giving more functionality, such as video recording and the like [controls the output adjusting unit to increase or decrease a power intensity of the event detection unit according to the distance between the vehicle and the object adjacent to the front or rear of the vehicle].). However, Smith-Hill do not explicitly disclose “… to decrease the power intensity of the event detection unit as the distance between the vehicle and the object adjacent to the front or rear of the vehicle decreases.”. Further, Tabata is in the same field of endeavor and teaches to decrease the power intensity of the event detection unit as the distance between the vehicle and the object adjacent to the front or rear of the vehicle decreases (Tabata: Paras. [0037], [0071] disclose when the recording function control unit 123 does not detect a peripheral object within a predetermined distance, all the cameras of the camera unit 210 capture an image and upon detection of a surrounding object, only a camera enabled to capture video in a direction along which another vehicle may possibly approach is actuated. As a result, parking can be monitored with reduced power consumption and for a longer length of time. Therefore, when the object is close, the specific camera is disabled or operates at lower utility, and when the object is not detected/is far, all cameras are enabled for image capture, effectively increasing the power/output intensity as the distance increases.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, and having the teachings of Smith-Hill and Tabata before him or her, to modify the object event detection system of Smith-Hill to include the adjusting power intensity based on distance feature as described in Tabata. 
The motivation for doing so would have been to improve power consumption management of object event detection systems by providing a configuration that enhances efficiency pertinent to image capture and detection. As per claim 10, Smith-Hill-Tabata discloses the event detection module control system of claim 8, wherein the controller controls the output adjusting unit to increase the power intensity of the event detection unit as the distance between the vehicle and the object adjacent to the front or rear of the vehicle increases (Tabata: Paras. [0037], [0071] disclose when the recording function control unit 123 does not detect a peripheral object within a predetermined distance, all the cameras of the camera unit 210 capture an image and upon detection of a surrounding object, only a camera enabled to capture video in a direction along which another vehicle may possibly approach is actuated. As a result, parking can be monitored with reduced power consumption and for a longer length of time. Therefore, when the object is close, the specific camera is disabled or operates at lower utility, and when the object is not detected/is far, all cameras are enabled for image capture, effectively increasing the power/output intensity as the distance increases.). Claims 12-13 are rejected under 35 U.S.C. 103 as being unpatentable over Smith in view of Kojima et al., hereinafter referred to as Kojima (JP-2015088794-A). As per claim 12, Smith discloses the event detection module control system of claim 3, wherein the controller controls power supply of the imaging unit and the event detection unit of the first camera to be turned off (Smith: Paras. [0049], [0052]-[0053], [0065] disclose in the park mode of operation, images may be sampled and the CDD may be used to determine when to record full video of scenery of interest or trigger full video from the pair of cameras. 
However, if the object does not reach a threshold, the video may be discarded and the cameras returned to park mode and may shut down completely if the current battery voltage crosses the low-voltage threshold.). However, Smith does not explicitly disclose “… the first camera to be turned off when there is a fixed structure adjacent to the front of the vehicle … the second camera to be turned off when there is a fixed structure adjacent to the rear of the vehicle.”. Further, Kojima is in the same field of endeavor and teaches the first camera to be turned off when there is a fixed structure adjacent to the front of the vehicle and the second camera to be turned off when there is a fixed structure adjacent to the rear of the vehicle. (Kojima: Paras. [0069]-[0070], [0075], [0078] disclose where an object that has not moved is often a fixed object and that the rear of the garage is a wall. In such instances, the detection interval is increased and power consumption can be further reduced (some of the sonars may be stopped). For example, only the camera 32 having the imaging range in the direction determined by the sonar 20 as having changed may be operated. In other words, controlling the power supply/operation when a camera facing a direction with no change (i.e., facing a fixed structure/wall) is not operated (turned off), while the camera facing a direction with activity is operated.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, and having the teachings of Smith and Kojima before him or her, to modify the object event detection system of Smith to include the fixed structure turning off cameras feature as described in Kojima. The motivation for doing so would have been to improve operational efficiency and reduced cost of object detection systems by providing a configuration that reduces power consumption. 
As per claim 13, Smith-Kojima disclose the event detection module control system of claim 12, wherein the controller controls the power supply of the imaging unit and the event detection unit of the first camera to be periodically turned on at preset time intervals when there is a fixed structure adjacent to the front of the vehicle and controls the power supply of the imaging unit and the event detection unit of the second camera to be periodically turned on at preset time intervals when there is a fixed structure adjacent to the rear of the vehicle (Smith: Paras. [0049], [0052], [0066]-[0067] disclose controlling the camera pods in a park mode to operate periodically to save power, such as sampling images at 1 frame per second or waking at preset time intervals (e.g., 5-minute intervals) and Kojima: Paras. [0062], [0070], [0072], [0078] disclose the sonar 20 periodically detects the wall (vehicle peripheral situation), and operates the camera only when the vehicle peripheral situation changes.). Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure and can be viewed in the list of references. Any inquiry concerning this communication or earlier communications from the examiner should be directed to PEET DHILLON whose telephone number is (571)270-5647. The examiner can normally be reached M-F: 5am-1:30pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sath V. Perungavoor can be reached at 571-272-7455. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. 
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /PEET DHILLON/Primary Examiner Art Unit: 2488 Date: 03-04-2026
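For orientation, the §103 mapping of claims 5-7 onto Hill turns on a simple distance-driven mode switch: a closer object triggers a higher sampling rate and shorter idle time. A minimal sketch of that policy follows; the mode structure and the 3 m threshold are hypothetical, while the 8 Hz / 16 Hz rates come from the passages of Hill cited in the rejection:

```python
# Sketch of the distance-driven detection policy the rejection maps onto
# Hill's two radar modes. The RadarConfig fields and the 3 m threshold are
# hypothetical; 8 Hz (Low Power Security Mode) and 16 Hz (Security Mode)
# are the rates cited from Hill.

from dataclasses import dataclass

@dataclass
class RadarConfig:
    rate_hz: int     # number of signal frequencies sampled per second
    idle_ms: float   # idle time between chirp frames

def select_mode(distance_m: float, threshold_m: float = 3.0) -> RadarConfig:
    """Pick the radar mode from the distance to the nearest detected object."""
    if distance_m <= threshold_m:
        # Object close: Security Mode — higher rate, shorter idle time
        return RadarConfig(rate_hz=16, idle_ms=1000 / 16)
    # Object far (or absent): Low Power Security Mode — save power
    return RadarConfig(rate_hz=8, idle_ms=1000 / 8)

print(select_mode(1.5).rate_hz)   # 16
print(select_mode(10.0).rate_hz)  # 8
```

This is the crux of the claim 6/7 reasoning: as distance increases the rate drops and the idle time grows, and vice versa, which is why the examiner reads Hill's mode switching onto the claimed chirp-parameter adjustment.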

Prosecution Timeline

Nov 14, 2024: Application Filed
Mar 04, 2026: Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598346: A DISPLAY DEVICE AND OPERATION METHOD THEREOF (Granted Apr 07, 2026; 2y 5m to grant)
Patent 12567263: IMAGING SYSTEM (Granted Mar 03, 2026; 2y 5m to grant)
Patent 12548338: OBJECT SAMPLING METHOD AND IMAGE ANALYSIS APPARATUS (Granted Feb 10, 2026; 2y 5m to grant)
Patent 12536812: CAMERA PERCEPTION TECHNIQUES TO DETECT LIGHT SIGNALS OF AN OBJECT FOR DRIVING OPERATION (Granted Jan 27, 2026; 2y 5m to grant)
Patent 12537911: VIDEO PROCESSING APPARATUS (Granted Jan 27, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 83% (99% with interview; +18.4% lift)
Median Time to Grant: 2y 6m
PTA Risk: Low
Based on 281 resolved cases by this examiner. Grant probability derived from career allow rate.
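The "with interview" projection appears to be the base probability plus the interview lift, capped below 100%. A sketch under that assumption; the 99% cap is inferred to reproduce the displayed figure, not stated by the page:

```python
# Projection sketch: base grant probability plus interview lift, capped.
# The 99.0 cap is an assumption chosen to match the displayed figure.

def with_interview(base_pct: float, lift_pct: float, cap_pct: float = 99.0) -> float:
    """Interview-adjusted grant probability, capped at cap_pct."""
    return min(base_pct + lift_pct, cap_pct)

print(with_interview(83.0, 18.4))  # 99.0 (83.0 + 18.4 = 101.4, capped)
print(with_interview(50.0, 18.5))  # 68.5 (below the cap, so unadjusted)
```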
