Prosecution Insights
Last updated: April 19, 2026
Application No. 17/951,139

ON-VEHICLE RECORDING CONTROL APPARATUS AND RECORDING CONTROL METHOD

Status: Final Rejection (§103)
Filed: Sep 23, 2022
Examiner: SALEH, ZAID MUHAMMAD
Art Unit: 2668
Tech Center: 2600 — Communications
Assignee: JVCKenwood Corporation
OA Round: 4 (Final)

Grant Probability: 65% (Favorable)
Expected OA Rounds: 5-6
Time to Grant: 3y 1m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 65% (above average; +3.1% vs TC avg; 28 granted / 43 resolved)
Interview Lift: +48.4% (strong), based on resolved cases with interview
Avg Prosecution: 3y 1m typical timeline; 30 applications currently pending
Career History: 73 total applications across all art units

Statute-Specific Performance

§101: 5.7% (-34.3% vs TC avg)
§103: 58.5% (+18.5% vs TC avg)
§102: 28.0% (-12.0% vs TC avg)
§112: 4.4% (-35.6% vs TC avg)
Tech Center averages are estimates • Based on career data from 43 resolved cases

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

Claims 1 and 6 are amended. Claims 1–6 remain pending.

Response to Arguments

Applicant's arguments filed November 12, 2025 with respect to claims 1–6 have been considered but are moot because the new grounds of rejection do not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1 and 6 are rejected under 35 U.S.C. 103 as being unpatentable over Hayashi et al., Japanese Patent Application Publication No. JP-2019212110-A (hereinafter Hayashi), in view of Mimar, US Patent Application Publication No. US-20140139655-A1 (hereinafter Mimar), and further in view of Agrawal, International Patent Application Publication No. WO-2021155294-A1 (hereinafter Agrawal), and Davis, US Patent No. US-11887386-B1 (hereinafter Davis).

Regarding claim 1, Hayashi discloses an on-vehicle recording control apparatus comprising: a video data acquisition unit that acquires first video data and second video data, the first video data being captured by a first imaging unit that captures an image of surroundings of a vehicle, the second video data being captured by a second imaging unit that captures an image of inside of the vehicle (in [0015] Hayashi discloses the first camera that captures the surroundings of the vehicle, “first camera 210 that captures the surroundings of the vehicle facing the front of the vehicle”, and in [0022] it is disclosed that the first camera may be composed of a plurality of cameras (second video data) that capture the interior of the vehicle, “first camera 210 may be composed of a plurality of cameras... arbitrary combination for photographing..., interior of the vehicle, and the like may be used”); an orientation detection unit that detects, from the second video data, an orientation of one of a face and a line of sight of a driver of the vehicle, and determines whether a first condition that the driver faces a direction other than a traveling direction of the vehicle is met (in [0049] Hayashi discloses that the face detection unit 129 (orientation detection unit) detects the line of sight of the human face (orientation of one of a face and a line of sight of a driver); furthermore, in [0048] Hayashi discloses, “detection unit 129 determines whether the person's face detected from the second imaging data is simply a person at that position or whether he is watching the display surface 261 of the display unit 260”, which implies detecting whether the driver faces a direction other than the traveling direction).

Hayashi does not disclose the following limitation as further recited in the claim. Mimar discloses: if the orientation detection unit determines that the first condition is met, the recording control unit adds an event recording start flag to the first video data of the normal recording data for the period in which the first condition is met (Mimar in [0053] discloses, “Facial processing is used to monitor and detect driver distractions and drowsiness. The face gaze direction of driver is analyzed as a function of speed and cornering to monitor driver distraction and level of eyes closed and head angle is analyzed to monitor drowsiness, and when distraction or drowsiness is detected for a given speed, warning is provided to the driver immediately for accident avoidance. Such occurrences of warning are also stored along with audio-video for optional driver analytics”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to integrate the technique of Mimar into the system of Hayashi because it would allow the system to record the driver's inattentive behavior that led to the event.

Hayashi and Mimar in combination do not disclose the following limitation as further recited in the claim. Agrawal discloses: an event detection unit that detects occurrence of an event if an acceleration that is applied to the vehicle is equal to or larger than a threshold (Agrawal in [0004] discloses, “include hard accelerations, may increase the costs associated with operating a vehicle”, wherein determining whether the acceleration is hard implies determining whether the acceleration applied was equal to or larger than a threshold); and if an event is detected while the first condition is met, the recording control unit stores, as event recording data, the first video data and the second video data since a period of time from the event recording start flag until a lapse of a predetermined period after at least the event detection time point (Agrawal in [0019] discloses, “FIG. 6 illustrates an example of a driver looking away from a road for an extended period of time after coming to a complete stop at a red light”; furthermore, Agrawal in [0057] discloses, “Detection of certain driving events may include detecting a moment at which a violation was committed and may further include typical contextual time before and after that moment ... The stop sign violation event, however, may include a twelve second period before the identified time, as well as five second afterwards. A typical video data record of the event might encompass these 17 seconds”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to integrate the technique of Agrawal into the system of Hayashi in view of Mimar because it would improve post-event analysis and better capture events likely caused by distraction.
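As a quick illustration of the "contextual time" window quoted from Agrawal [0057], an event clip built from pre- and post-event context can be sketched as follows. The function name and default durations are hypothetical, chosen only to mirror the quoted 12-second/5-second example; they are not taken from Agrawal or from the application.

```python
# Hypothetical sketch: build an event clip window from contextual time
# before and after the detected moment (cf. the example quoted from Agrawal [0057]).
def event_clip_window(event_time_s: float,
                      pre_context_s: float = 12.0,
                      post_context_s: float = 5.0) -> tuple[float, float]:
    """Return the (start, end) times of the video to retain around an event."""
    return (event_time_s - pre_context_s, event_time_s + post_context_s)

start, end = event_clip_window(event_time_s=100.0)
print(f"retain video from t={start:.0f}s to t={end:.0f}s ({end - start:.0f}s total)")  # 17s total
```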
Hayashi, Mimar, and Agrawal in combination do not disclose the following limitation as further recited in the claim. Davis discloses: a recording control unit that records the first video data and the second video data as overwritable normal recording data (Davis in Column 1, Lines 50-53 discloses, “the disclosed systems utilize an in-cabin media capture device to capture and store media recordings in an over-write loop portion of memory”; furthermore, Davis in Column 3, Lines 26-29 discloses, “the in-cabin media capture device can capture and record digital video portraying events within and outside a transportation vehicle”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to integrate the technique of Davis into the system of Hayashi in view of Mimar and Agrawal because overwriting would allow the system to record the most recent driving state by automatically replacing the old data with new data when storage capacity is reached.
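For readers following the claim chart, the control flow the examiner assembles from Hayashi, Mimar, Agrawal, and Davis can be sketched roughly as below. This is a minimal, hypothetical illustration only: the class, thresholds, buffer sizes, and durations are assumptions chosen for readability and do not come from the application or from the cited references.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Frame:
    t: float        # capture time in seconds
    outside: bytes  # first video data (surroundings of the vehicle)
    inside: bytes   # second video data (inside of the vehicle)

class RecordingController:
    """Hypothetical sketch of the recording-control flow recited in claim 1."""

    def __init__(self, accel_threshold: float = 2.5, post_event_s: float = 30.0,
                 buffer_s: float = 120.0, fps: float = 30.0):
        self.accel_threshold = accel_threshold           # event threshold (illustrative units)
        self.post_event_s = post_event_s                 # predetermined period after the event
        self.normal = deque(maxlen=int(buffer_s * fps))  # overwritable normal recording data
        self.flag_time = None                            # event recording start flag, if set
        self.event_clips = []                            # stored event recording data

    def on_frame(self, frame: Frame, facing_away: bool, accel: float) -> None:
        # Record both video streams as overwritable normal recording data (ring buffer).
        self.normal.append(frame)

        # First condition: the driver faces a direction other than the traveling direction.
        if facing_away and self.flag_time is None:
            self.flag_time = frame.t        # add the event recording start flag
        elif not facing_away:
            self.flag_time = None           # condition no longer met; clear the flag

        # Event detection: acceleration equal to or larger than the threshold,
        # while the first condition is met.
        if accel >= self.accel_threshold and self.flag_time is not None:
            # Save from the start flag until a predetermined period after the event
            # detection time point.  Only frames already buffered are copied here;
            # a real recorder would keep appending frames until the window closes.
            window = (self.flag_time, frame.t + self.post_event_s)
            clip = [f for f in self.normal if window[0] <= f.t <= window[1]]
            self.event_clips.append((window, clip))
            self.flag_time = None

# Minimal usage with synthetic data: the driver looks away from t=5s to t=12s,
# and a hard acceleration is detected at t=10s.
rc = RecordingController(accel_threshold=2.5, post_event_s=5.0, buffer_s=10.0, fps=1.0)
for t in range(20):
    rc.on_frame(Frame(t=float(t), outside=b"", inside=b""),
                facing_away=(5 <= t <= 12),
                accel=3.0 if t == 10 else 0.0)
print(rc.event_clips[0][0])  # saved window: (5.0, 15.0)
```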
Summary of Citations (Mimar)

Paragraph [0053]: “Facial processing is used to monitor and detect driver distractions and drowsiness. The face gaze direction of driver is analyzed as a function of speed and cornering to monitor driver distraction and level of eyes closed and head angle is analyzed to monitor drowsiness, and when distraction or drowsiness is detected for a given speed, warning is provided to the driver immediately for accident avoidance. Such occurrences of warning are also stored along with audio-video for optional driver analytics”.

Summary of Citations (Davis)

Column 1, Lines 50-53: “the disclosed systems utilize an in-cabin media capture device to capture and store media recordings in an over-write loop portion of memory”.
Column 3, Lines 26-29: “the in-cabin media capture device can capture and record digital video portraying events within and outside a transportation vehicle”.

Summary of Citations (Hayashi)

Paragraph [0009]: “a first camera photographing a periphery of a moving body, an event detecting step of detecting an event for the moving body, When detected, a recording step of storing at least first photographing data including the time of occurrence of the event as event record data, and a second photographing data photographed by a second camera photographing in a direction facing the display surface of the display unit”.
Paragraph [0015]: “The recording / reproducing device 10 is mounted with the first camera 210 that captures the surroundings of the vehicle facing the front of the vehicle, but may be mounted facing the rear or side of the vehicle”.
Paragraph [0022]: “In FIG. 1, the first camera 210 is shown as a single camera, but the first camera 210 may be composed of a plurality of cameras. For example, a plurality of cameras of an arbitrary combination for photographing the front, rear, side, interior of the vehicle, and the like may be used”.
Paragraph [0030]: “The sensor 270 is, for example, an acceleration sensor, and detects the acceleration applied to the recording / reproducing device 10 or the vehicle. The sensor 270 is, for example, a three-axis acceleration sensor, and detects acceleration applied to the vehicle in the front-rear direction as the x-axis direction”.
Paragraph [0037]: “In response to the event detection unit 127 determining that an event has occurred, the recording control unit 123 saves the first photographing data for a predetermined period including the time when the event occurred as event record data for which overwriting is prohibited”.
Paragraph [0038]: “The method of storing the event record data by the recording control unit 123 is arbitrary. For example, the overwriting prohibition flag is added to the header or the payload of the section in which the overwriting is prohibited in the first photographing data, and is stored in the recording unit 240”.
Paragraph [0044]: “The event detection unit 127 detects, as the acceleration corresponding to the event, the acceleration corresponding to the acceleration when the acceleration output from the sensor 250 collides with the vehicle and another object such as another vehicle. The detection of the acceleration corresponding to the event may be weighted in each of the x-axis direction, the y-axis direction, and the z-axis direction. Further, the detection of the acceleration corresponding to the event may be an acceleration whose acceleration rises steeply”.
Paragraph [0048]: “Further, the face detection unit 129 detects the degree of eye opening in the human face detected from the second image data. In other words, the face detection unit 129 determines whether the person's face detected from the second imaging data is simply a person at that position or whether he is watching the display surface 261 of the display unit 260”.
Paragraph [0049]: “In addition, the face detection unit 129 detects the line of sight of the human face detected from the second image data. In other words, the face detection unit 129 recognizes an eye portion from the face of the person detected from the second photographing data, and detects the person detected from the second photographing data based on the positional relationship between the recognized eyes and the iris. It is determined whether or not the line of sight is directed to the display surface 261. When the second camera 220 is a camera that captures an image in the infrared region, the line of sight is detected based on the reflection of the pupil and the cornea”.
Paragraph [0055]: “For example, when an event is detected at time T1 from time t-1 to time t, the first imaging data in a period from a predetermined time before the time T1 that is the event occurrence time to after a predetermined time after the time T1 has elapsed. Is stored as event record data. The predetermined time is, for example, from 30 seconds before the time T1 which is the event occurrence time to 30 seconds after the time T1, but is not limited to this”.

Summary of Citations (Agrawal)

Paragraph [0004]: “Unsafe driving behavior may also lead to accidents, which may cause physical harm, and which may, in turn, lead to an increase in insurance rates for operating a vehicle. Inefficient driving, which may include hard accelerations, may increase the costs associated with operating a vehicle”.
Paragraph [0019]: “FIG. 6 illustrates an example of a driver looking away from a road for an extended period of time after coming to a complete stop at a red light”.
Paragraph [0037]: “may assess the driver's behavior in several contexts and perhaps using several metrics. FIG. 2 illustrates a system of driver monitoring, which may include a system for determining and/or providing alerts to an operator of a vehicle, in accordance with aspects of the present disclosure ... safe lane changes and lane position 268 , hard accelerations including turns 270 , responding to traffic officers, responding to road conditions 272 , and responding to emergency vehicles”.
Paragraph [0057]: “Detection of certain driving events may include detecting a moment at which a violation was committed and may further include typical contextual time before and after that moment ... The stop sign violation event, however, may include a twelve second period before the identified time, as well as five second afterwards. A typical video data record of the event might encompass these 17 seconds”.
Paragraph [0081]: “an ADAS may operate with reduced thresholds in such environments, such that, relative to other contexts, a shorter period of looking away from the direction of travel may be sufficient to trigger an audio alert”.

Regarding claim 6, method claim 6 corresponds to apparatus claim 1. Therefore, the rejection of claim 1 is applicable to claim 6.

Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Hayashi in view of Mimar, Agrawal, and Davis, and further in view of Matsura, Japanese Patent Application Publication No. JP-2019091272-A (hereinafter Matsura). The ground of rejection based on Matsura from the previous non-final Office Action of 11/07/2024 applies here.

Claims 3–5 are rejected under 35 U.S.C. 103 as being unpatentable over Hayashi in view of Mimar, Agrawal, and Davis, and further in view of Muller, US Patent Application Publication No. US2020232807A1 (hereinafter Muller). The ground of rejection based on Muller from the previous non-final Office Action of 11/07/2024 applies here.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Contact

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZAID MUHAMMAD SALEH, whose telephone number is (703) 756-1684. The examiner can normally be reached M-F, 8 am - 5 pm ET.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Vu Le, can be reached at (571) 272-7332. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ZAID MUHAMMAD SALEH/
Examiner, Art Unit 2668
01/27/2026

/VU LE/
Supervisory Patent Examiner, Art Unit 2668

Prosecution Timeline

Sep 23, 2022: Application Filed
Oct 31, 2024: Non-Final Rejection — §103
Jan 10, 2025: Response Filed
Mar 19, 2025: Final Rejection — §103
Jun 12, 2025: Response after Non-Final Action
Jul 11, 2025: Request for Continued Examination
Jul 14, 2025: Response after Non-Final Action
Aug 08, 2025: Non-Final Rejection — §103
Nov 12, 2025: Response Filed
Jan 30, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602944: AUTHENTICATION OF DENDRITIC STRUCTURES (granted Apr 14, 2026; 2y 5m to grant)
Patent 12586501: DISPLAY DEVICE, DISPLAY METHOD, AND STORAGE MEDIUM (granted Mar 24, 2026; 2y 5m to grant)
Patent 12586396: INFORMATION PROCESSING APPARATUS AND SYSTEM (granted Mar 24, 2026; 2y 5m to grant)
Patent 12562535: METHOD FOR DETECTING UNDESIRED CONNECTION ON PRINTED CIRCUIT BOARD (granted Feb 24, 2026; 2y 5m to grant)
Patent 12555344: METHOD AND APPARATUS FOR IMPROVING VIDEO TARGET DETECTION PERFORMANCE IN SURVEILLANCE EDGE COMPUTING (granted Feb 17, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 65%
With Interview: 99% (+48.4%)
Median Time to Grant: 3y 1m
PTA Risk: High
Based on 43 resolved cases by this examiner. Grant probability derived from career allow rate.
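The headline grant probability can be reproduced from the career counts shown in the Examiner Intelligence section; the arithmetic below is a plain sanity check, not the product's actual model (the with-interview figure and OA-round projection are not derived here).

```python
# Career allow rate from the counts reported above: 28 granted of 43 resolved cases.
granted, resolved = 28, 43
allow_rate = granted / resolved
print(f"{allow_rate:.1%}")  # ~65.1%, matching the 65% grant probability shown
```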
