Prosecution Insights
Last updated: April 19, 2026
Application No. 18/909,477

VIDEO RECORDING METHOD ACCORDING TO AN AUTONOMOUS DRIVING CONTROL AUTHORITY AND A SYSTEM FOR THE SAME

Non-Final Office Action: §101, §102, §103
Filed: Oct 08, 2024
Examiner: PEKO, BRITTANY RENEE
Art Unit: 3665
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Kia Corporation
OA Round: 1 (Non-Final)

Grant Probability: 83% (Favorable)
Predicted OA Rounds: 1-2
Time to Grant: 2y 9m
Grant Probability With Interview: 97%

Examiner Intelligence

Career Allowance Rate: 83% (130 granted / 157 resolved), +30.8% vs Tech Center average (above average)
Interview Lift: +14.2% (moderate), for resolved cases with an interview vs. without
Typical Timeline: 2y 9m average prosecution; 7 applications currently pending
Career History: 164 total applications across all art units
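As a sanity check, the headline figures above can be reproduced with simple arithmetic. This is an illustrative sketch only: the lift formula (with-interview grant rate minus career baseline) is an assumption about how the tool computes it, not something the dashboard documents.

```python
# Reproducing the examiner metrics above from the raw counts shown.
# Assumption (not documented by the tool): "interview lift" is the
# with-interview grant rate minus the career baseline rate.

granted, resolved = 130, 157              # career totals from the dashboard
allow_rate = granted / resolved * 100     # career allowance rate

with_interview = 97.0                     # reported grant probability with interview
lift = with_interview - allow_rate        # implied interview lift

print(f"Career allowance rate: {allow_rate:.0f}%")  # 83%
print(f"Interview lift: +{lift:.1f}%")              # +14.2%
```

The rounded results match the dashboard's 83% and +14.2%, which suggests the displayed lift is indeed computed against the 130/157 baseline.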

Statute-Specific Performance

§101: 11.0% (-29.0% vs TC avg)
§102: 21.3% (-18.7% vs TC avg)
§103: 54.1% (+14.1% vs TC avg)
§112: 9.5% (-30.5% vs TC avg)
Deltas are relative to the Tech Center average estimate. Based on career data from 157 resolved cases.
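One detail worth noticing: subtracting each delta from the examiner's rate recovers the same Tech Center baseline in every row, so the "TC average estimate" appears to be a single ~40% figure rather than a per-statute one. A quick sketch (values copied from the figures above; the interpretation of the deltas is an assumption):

```python
# Sketch only: recovering the implied Tech Center baseline from the
# per-statute deltas above. Each examiner rate minus its delta yields the
# same TC average estimate (~40%), consistent with a single baseline.

examiner_rate = {"101": 11.0, "102": 21.3, "103": 54.1, "112": 9.5}
delta_vs_tc   = {"101": -29.0, "102": -18.7, "103": 14.1, "112": -30.5}

for statute, rate in examiner_rate.items():
    tc_avg = rate - delta_vs_tc[statute]
    print(f"§{statute}: implied TC average ≈ {tc_avg:.1f}%")  # 40.0% each
```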

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Drawings

The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they do not include the following reference sign(s) mentioned in the description: S107, which is referred to in [0077] of the disclosure, is absent from FIG. 4A. Rather, FIG. 4A refers to reference number S330, which is not described in the disclosure. Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

The drawings are further objected to because of FIGS. 4A and 4B. Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Specification

The disclosure is objected to because of the following informalities: line 3 of [0072] contains a spelling error, “S102in”, which should be corrected to “S102 in”. Appropriate correction is required.

Claim Objections

Claim 11 is objected to because of the following informalities: the claim recites a spelling error; “surrounds (emphasis added) of the vehicle” should be amended to read “surroundings of the vehicle.” Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1, 2, 4, 5, 9, and 10 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The determination of whether a claim recites patent-ineligible subject matter is a two-step inquiry. STEP 1: the claim does not fall within one of the four statutory categories of invention (process, machine, manufacture, or composition of matter), see MPEP 2106.03, or STEP 2: the claim recites a judicial exception, e.g.
an abstract idea, without reciting additional elements that amount to significantly more than the judicial exception, as determined using the following analysis (see MPEP 2106.04). STEP 2A (PRONG 1): Does the claim recite an abstract idea, law of nature, or natural phenomenon? See MPEP 2106.04(II)(A)(1). STEP 2A (PRONG 2): Does the claim recite additional elements that integrate the judicial exception into a practical application? See MPEP 2106.04(II)(A)(2). STEP 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? See MPEP 2106.05.

101 Analysis – Step 1

Claim 1 is directed to a method (i.e., a process). Therefore, claim 1 is within at least one of the four statutory categories.

101 Analysis – Step 2A, Prong I

Regarding Prong I of the Step 2A analysis, the claims are to be analyzed to determine whether they recite subject matter that falls within one of the following groups of abstract ideas: a) mathematical concepts, b) certain methods of organizing human activity, and/or c) mental processes. See MPEP 2106(A)(II)(1) and MPEP 2106.04(a)-(c).

Independent claim 1 includes limitations that recite an abstract idea (emphasized below [with the category of abstract idea in brackets]) and will be used as a representative claim for the remainder of the 101 rejection. Claim 1 recites: A method of recording videos for a vehicle, the method comprising: determining, by a controller, that a preset function of an autonomous driving control is performed for avoiding a preset dangerous event [mental process/step]; and storing, by the controller, i) a video of surroundings of the vehicle and ii) data related to the preset function in association with the video in response to determining that the preset function is performed.

The examiner submits that the foregoing bolded limitation(s) constitute a “mental process” because, under its broadest reasonable interpretation, the claim covers performance of the limitation in the human mind. For example, “determining…” in the context of this claim encompasses a person (driver) viewing an action performed by a vehicle and forming a simple judgment, such as a vehicle performing an emergency braking action to prevent a collision between the vehicle and a nearby object. The claim does not go so far as to claim that the vehicle is actively being controlled by the controller to perform the preset function; therefore, this limitation constitutes a mental process.

101 Analysis – Step 2A, Prong II

Regarding Prong II of the Step 2A analysis, the claims are to be analyzed to determine whether the claim, as a whole, integrates the abstract idea into a practical application. See MPEP 2106.04(II)(A)(2) and MPEP 2106.04(d)(2). It must be determined whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception. The courts have indicated that additional elements merely using a computer to implement an abstract idea, adding insignificant extra-solution activity, or generally linking use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a “practical application.”

In the present case, the additional limitations beyond the above-noted abstract idea are as follows (where the underlined portions are the “additional limitations” [with a description of the additional limitations in brackets], while the bolded portions continue to represent the “abstract idea”): A method of recording videos for a vehicle, the method comprising: determining, by a controller [applying the abstract idea using a generic computing module], that a preset function of an autonomous driving control is performed for avoiding a preset dangerous event [mental process/step]; and storing, by the controller [applying the abstract idea using a generic computing module], i) a video of surroundings of the vehicle and
ii) data related to the preset function in association with the video in response to determining that the preset function is performed [insignificant post-solution activity (data gathering)].

For the following reason(s), the examiner submits that the above-identified additional limitations do not integrate the above-noted abstract idea into a practical application. Regarding the additional limitation of “storing… i) a video of surroundings of the vehicle…”, the examiner submits that this limitation is insignificant extra-solution activity that merely uses a computer (controller) to perform the process. In particular, the storing step is recited at a high level of generality (i.e., as a general means of gathering vehicle and road condition data) and amounts to mere data gathering, which is a form of insignificant extra-solution activity. The limitation “by a controller” is recited at a high level of generality (i.e., as a generic processor performing a generic computer function) such that it amounts to no more than mere instructions to apply the exception using a generic computer component. Thus, taken alone, the additional elements do not integrate the abstract idea into a practical application.

Further, looking at the additional limitation(s) as an ordered combination or as a whole, the limitation(s) add nothing that is not already present when looking at the elements taken individually. For instance, there is no indication that the additional elements, when considered as a whole, reflect an improvement in the functioning of a computer or an improvement to another technology or technical field, apply or use the above-noted judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition, implement/use the above-noted judicial exception with a particular machine or manufacture that is integral to the claim, effect a transformation or reduction of a particular article to a different state or thing, or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is not more than a drafting effort designed to monopolize the exception. See MPEP § 2106.05. Accordingly, the additional limitations do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.

101 Analysis – Step 2B

Regarding Step 2B of the Revised Guidance, representative independent claim 1 does not include additional elements (considered both individually and as an ordered combination) that are sufficient to amount to significantly more than the judicial exception, for the same reasons as those discussed above with respect to determining that the claim does not integrate the abstract idea into a practical application. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using a controller to perform the determining… and storing… steps amounts to nothing more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. And, as discussed above, in regard to the additional limitations of storing…, the examiner submits that these limitations are insignificant extra-solution activities. In addition, these additional limitations (and the combination thereof) amount to no more than what is well-understood, routine, and conventional activity. Hence, the claim is not patent eligible.

Dependent claims 2, 4, 5, 9, and 10 do not recite any further limitations that cause the claim(s) to be patent eligible. Rather, the limitations of the dependent claims are directed toward additional aspects of the judicial exception and/or well-understood, routine, and conventional additional elements that do not integrate the judicial exception into a practical application. Therefore, dependent claims 2, 4, 5, 9, and 10 are not patent eligible under the same rationale as provided for in the rejection of claim 1. Therefore, claim(s) 1, 2, 4, 5, 9, and 10 is/are ineligible under 35 USC §101.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 1 and 4 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Kato et al. “Kato” (JP2014154135 A).
Regarding claim 1, Kato teaches A method of recording videos for a vehicle (see at least the abstract), the method comprising: determining, by a controller, that a preset function of an autonomous driving control is performed for avoiding a preset dangerous event (see at least [0001]-[0002], where a near-miss alarm signal is output from a collision prevention auxiliary device of a vehicle); and storing, by the controller, i) a video of surroundings of the vehicle and ii) data related to the preset function in association with the video in response to determining that the preset function is performed (see at least [0002]-[0003], where a drive recorder receives the alarm signal from the collision prevention auxiliary device; in response to the alarm signal, a video file along with information such as the time and acceleration is created and stored).

Regarding claim 4, Kato teaches The method of claim 1, wherein the preset function includes giving a driver a warning of the preset dangerous event (see at least [0002]-[0003], where an alarm is output from the collision prevention auxiliary device to notify the driver of the near-miss event).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C.
103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 2, 11, 12, and 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kato in view of Nithiyanantham et al. “Nithiyanantham” (US 2020/0307616 A1).

Regarding claim 2, Kato does not expressly disclose: The method of claim 1, further comprising setting, by the controller, at least one area to be scanned outside the vehicle, wherein: determining that the preset dangerous event occurs comprises determining that the preset dangerous event occurs in the at least one area; and storing the video comprises storing a video of the at least one area in response to determining that the preset dangerous event has occurred. That is, Kato does not specifically disclose setting an area to be scanned outside the vehicle. Rather, Kato generally mentions that a collision prevention assist system uses radar and cameras to constantly monitor for the possibility of a collision while driving and, if a collision is predicted, sounds an alarm or takes control of the vehicle [0001], and creates a recording file of video data and other sensor data at the time of the warning [0002]-[0003]. However, setting at least one area to be scanned outside of the vehicle is a well-understood, routine, and conventional aspect of vehicle controls, especially in collision avoidance/warning systems.

Nevertheless, Nithiyanantham teaches that it is known to provide: The method of claim 1, further comprising setting, by the controller, at least one area to be scanned outside the vehicle, wherein: determining that the preset dangerous event occurs comprises determining that the preset dangerous event occurs in the at least one area; and storing the video comprises storing a video of the at least one area in response to determining that the preset dangerous event has occurred. See at least [0067], where a driver assistance system (DAS) is provided in a vehicle and configured to monitor the surrounding areas of the vehicle for nearby objects by a radar system, LIDAR system, etc. If an object is detected by the radar system, LIDAR system, etc., the DAS may determine that the vehicle is on a collision path with the detected object, issue an alert, and record video from the cameras. In this scenario, the area set to be scanned is the area surrounding the vehicle, wherein at least one radar, LIDAR, or similar sensor is mounted to the vehicle and configured to scan for nearby objects.

It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have modified Kato to incorporate the teachings of Nithiyanantham and provide the method of claim 1, further comprising setting, by the controller, at least one area to be scanned outside the vehicle, wherein: determining that the preset dangerous event occurs comprises determining that the preset dangerous event occurs in the at least one area; and storing the video comprises storing a video of the at least one area in response to determining that the preset dangerous event has occurred. In doing so, this improves the overall safety of the system by, for example, providing an audio alert that includes a recommended action to take to avoid the collision [0068].
Regarding claim 11, Kato in view of Nithiyanantham teaches A system for recording videos for a vehicle (see at least the abstract of Kato), the system comprising: two or more sensing sensor modules configured to set at least one area to be scanned outside the vehicle (see at least Nithiyanantham [0067], where a driver assistance system (DAS) is provided in a vehicle and configured to monitor the surrounding areas of the vehicle for nearby objects by a radar system, LIDAR system, etc.; if an object is detected by the radar system, LIDAR system, etc., the DAS may determine that the vehicle is on a collision path with the detected object, issue an alert, and record video from the cameras; in this scenario, the area set to be scanned is the area surrounding the vehicle, wherein at least one radar system or LIDAR system is mounted to the vehicle and configured to scan for nearby objects). Although Nithiyanantham does not explicitly disclose that there are two or more radar or LIDAR sensors as part of the radar or LIDAR system, the concept of providing two or more of these sensors as part of an autonomous vehicle sensing system is a well-understood, routine, and conventional aspect of vehicle controls, and this concept would have been obvious to a person having ordinary skill in the art. For example, see at least Luders et al. (US 2022/0128681 A1), the abstract and [0028], where an autonomous vehicle may include both a LIDAR and radar sensor configured to scan the environment of the vehicle; a camera configured to obtain a video of surrounds of the vehicle (see at least Kato [0003]); a controller configured to determine that a preset function of an autonomous driving control is performed for avoiding a preset dangerous event (see at least Kato [0001]-[0002], where a near-miss alarm signal is output from a collision prevention auxiliary device of a vehicle); and a storage unit configured to store the video obtained by the camera and data related to the preset function in association with the video under control of the controller (see at least Kato [0002]-[0003], where a drive recorder receives the alarm signal from the collision prevention auxiliary device; in response to the alarm signal, a video file along with information such as the time and acceleration is created and stored).

Regarding claim 12, Kato in view of Nithiyanantham teaches The system of claim 11, wherein the controller is configured to: determine that the preset dangerous event occurs in at least one area set to be scanned outside the vehicle; and store, in the storage unit, a video of the at least one area in response to determining that the preset dangerous event has occurred. See at least Nithiyanantham [0067], where a driver assistance system (DAS) is provided in a vehicle and configured to monitor the surrounding areas of the vehicle for nearby objects by a radar system, LIDAR system, etc. If an object is detected by the radar system, LIDAR system, etc., the DAS may determine that the vehicle is on a collision path with the detected object, issue an alert, and record video from the cameras. In this scenario, the area set to be scanned is the area surrounding the vehicle, wherein at least one radar, LIDAR, or similar sensor is mounted to the vehicle and configured to scan for nearby objects.
Regarding claim 14, Kato in view of Nithiyanantham teaches The system of claim 11, wherein the preset function includes giving a driver a warning of the preset dangerous event (see at least Kato [0002]-[0003], where an alarm is output from the collision prevention auxiliary device to notify the driver of the near-miss event).

Claim(s) 3 and 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kato in view of Eriksson et al. “Eriksson” (US 12,054,180 B2).

Regarding claim 3, Kato does not explicitly disclose: The method of claim 1, wherein the controller is configured to control the vehicle in a full-driving-automation mode where the controller fully exercises control of the vehicle. However, Eriksson teaches that it is known to provide: The method of claim 1, wherein the controller is configured to control the vehicle in a full-driving-automation mode where the controller fully exercises control of the vehicle (see at least Col. 5, lines 65-67 and Col. 6, lines 1-4, where the vehicle provided by Eriksson comprises an automated drive mode AD configured to control a driving of the vehicle autonomously). It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have modified Kato to incorporate the teachings of Eriksson and provide the method of claim 1, wherein the controller is configured to control the vehicle in a full-driving-automation mode where the controller fully exercises control of the vehicle. In doing so, this provides an improvement of enhancing convenience to a driver of the vehicle by enabling the driver to select their preferred drive mode of the vehicle, which varies the level of support provided to the driver from the automated control system (Col. 5, lines 41-42).
Regarding claim 13, Kato in view of Nithiyanantham and Eriksson teaches The system of claim 11, wherein the controller is further configured to control the vehicle in a full-driving-automation mode where the controller fully exercises control of the vehicle (see at least Col. 5, lines 65-67 and Col. 6, lines 1-4, where the vehicle provided by Eriksson comprises an automated drive mode AD configured to control a driving of the vehicle autonomously).

Claim(s) 5 and 15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kato in view of Matsuoka (US 12,472,870 B2).

Regarding claim 5, Kato teaches The method of claim 4, wherein the data includes sensing results associated with the preset dangerous event (Kato mentions that the drive recorder stores video footage along with information such as the time and acceleration sensor data associated with the preset dangerous event in an external memory; see at least [0003]). Kato teaches all of the elements of the current invention as stated above except wherein the data includes details of the warning. Nevertheless, storing data including details of the warning is a well-understood, routine, and conventional practice in the art of vehicle controls, especially collision avoidance/warning systems and event data recorders. For example, Matsuoka teaches that it is known to provide: The method of claim 4, wherein the data includes details of the warning (see at least the Abstract, where an alerting system of a vehicle is provided which provides a notification to a driver of the vehicle and records details of the warning, such as a reactive behavior to the notification). It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have modified Kato to incorporate the teachings of Matsuoka and provide The method of claim 4, wherein the data includes details of the warning. In doing so, this provides an improvement of notifying the driver of a preset dangerous event in a form easily acceptable to a driver of the vehicle (Col. 1, lines 62-65).

Regarding claim 15, Kato in view of Nithiyanantham and Matsuoka teaches The system of claim 14, wherein the data includes details of the warning (see at least the Abstract of Matsuoka, where an alerting system of a vehicle is provided which provides a notification to a driver of the vehicle and records details of the warning, such as a reactive behavior to the notification) and sensing results associated with the preset dangerous event (Kato mentions that the drive recorder stores video footage along with information such as the time and acceleration sensor data associated with the preset dangerous event in an external memory; see at least [0003]).

Claim(s) 6-10 and 16-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kato in view of Joe et al. “Joe” (KR20210062530 A).

Regarding claim 6, Kato does not expressly disclose The method of claim 1, further comprising: performing, by the controller, an intervening control to avoid the preset dangerous event while the vehicle is controlled in a driver driving mode; and storing, by the controller, i) a video of surroundings of the vehicle at the intervening control and ii) status information of the vehicle and data related to the intervening control in association with the video. Kato does, however, disclose that a collision prevention assist system may take control of the vehicle if a preset dangerous event (e.g., a near-miss event) is predicted, and further teaches recording video footage of about 15 seconds before and after the event, along with other sensor data, in response to the preset dangerous event.
Nevertheless, Joe teaches that it is known to provide: The method of claim 1, further comprising: performing, by the controller, an intervening control to avoid the preset dangerous event while the vehicle is controlled in a driver driving mode (see at least [0026], where the preset dangerous event may be a driver’s health risk state event; in response to detecting the preset dangerous event, the main controller takes action to transfer driving control to the autonomous vehicle); and storing, by the controller, i) a video of surroundings of the vehicle at the intervening control and ii) status information of the vehicle and data related to the intervening control in association with the video (see at least [0026], where the driver’s health risk state event data is stored in the memory, and also see at least [0062]-[0063], where the main controller stores collected data in a volatile and/or nonvolatile memory; the collected data includes data from vehicle sensors such as cameras (both exterior and interior to the vehicle), lidars, etc. used for autonomous driving event logging ([0003] & [0010])). Since the vehicle sensor data is continuously collected via the volatile memory, this teaches storing the video of the surroundings of the vehicle at the intervening control, status information of the vehicle, and data related to the intervening control in association with the video. Further, Kato discloses recording video data during a detected event, and Joe discloses storing autonomous driving sensor data; therefore, Kato may be further modified by Joe in order to record video data during an event, which is captured during the intervening control.

It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have modified Kato to incorporate the teachings of Joe and provide: The method of claim 1, further comprising: performing, by the controller, an intervening control to avoid the preset dangerous event while the vehicle is controlled in a driver driving mode; and storing, by the controller, i) a video of surroundings of the vehicle at the intervening control and ii) status information of the vehicle and data related to the intervening control in association with the video. In doing so, this provides an improvement of effectively storing and utilizing various events occurring in autonomous vehicles, reducing the risk of data loss, and extending the lifespan of the storage device, thereby saving maintenance costs and further contributing to the popularization of autonomous vehicles [0043].

Regarding claim 7, Kato in view of Joe teaches The method of claim 6, wherein the data related to the intervening control includes information on having given the driver a warning of the preset dangerous event (see at least Joe [0062]-[0063], where the main controller stores collected data in a volatile and/or nonvolatile memory; the collected data includes data from vehicle sensors such as cameras (both exterior and interior to the vehicle), lidars, etc. used for autonomous driving event logging ([0003] & [0010])). Since the vehicle sensor data is continuously collected via the volatile memory, this teaches storing data related to the intervening control.
Regarding claim 8, Kato in view of Joe teaches The method of claim 6, wherein the status information includes one or more of i) information obtained by at least one light detection and ranging (LIDAR) module (see at least Joe [0003] & [0010], where the autonomous driving sensing device may be a LIDAR sensor), ii) information obtained by at least one radar, or iii) information obtained by at least one ultrasonic sensor.

Regarding claim 9, Kato in view of Joe teaches The method of claim 1, further comprising storing, by the controller, details of the autonomous driving control and sensor data while the vehicle is controlled in an autonomous driving mode (see at least Joe [0024], where the controller may store data on the obstacle ahead and data indicating that the autonomous vehicle is in a state in which it is difficult to drive in the non-volatile memory) and a video of surroundings of the vehicle captured during the autonomous driving mode (see at least Joe [0062]-[0063], where the main controller stores collected data in a volatile and/or nonvolatile memory; the collected data includes data from vehicle sensors such as cameras (both exterior and interior to the vehicle), lidars, etc. used for autonomous driving event logging ([0003] & [0010])). Since the vehicle sensor data is continuously collected via the volatile memory, this teaches storing a video of surroundings of the vehicle captured during the autonomous driving mode. Further, Kato discloses recording video data during a detected event; therefore, Kato may be further modified by Joe in order to record video data during an event, which is captured during the autonomous driving mode.
Regarding claim 10, Kato in view of Joe teaches the method of claim 1, wherein the preset function includes warning a driver of the preset dangerous event while the vehicle is controlled in an autonomous driving mode; see at least Joe [0024], where the controller may provide transition guidance (e.g., a warning) so that the driver can take control of the driving when it is determined that an obstacle detected around the vehicle makes it difficult for the autonomous vehicle to drive.

Regarding claim 16, Kato in view of Nithiyanantham and Joe teaches the system of claim 11, wherein the controller is further configured to: perform an intervening control to avoid the preset dangerous event while the vehicle is controlled in a driver driving mode (see at least Joe [0026], where the preset dangerous event may be a driver's health risk state event; in response to detecting the preset dangerous event, the main controller takes action to transfer driving control to the autonomous vehicle); and store, in the storage unit, i) a video of surroundings of the vehicle at the intervening control and ii) status information of the vehicle and data related to the intervening control in association with the video (see at least Joe [0026], where the driver's health risk state event data is stored in the memory, and [0062]-[0063], where the main controller stores collected data in a volatile and/or nonvolatile memory). The collected data includes data from vehicle sensors such as cameras (both exterior and interior to the vehicle), lidars, etc. used for autonomous driving event logging ([0003] & [0010]). Since the vehicle sensor data is continuously collected via the volatile memory, this teaches storing the video of the surroundings of the vehicle at the intervening control, status information of the vehicle, and data related to the intervening control in association with the video.
Further, Kato discloses recording video data during a detected event and Joe discloses storing autonomous driving sensor data; therefore, Kato may be further modified by Joe in order to record video data during an event captured during the intervening control.

Regarding claim 17, Kato in view of Nithiyanantham and Joe teaches the method of claim 16, wherein the data related to the intervening control includes information on having given the driver a warning of the preset dangerous event; see at least Joe [0062]-[0063], where the main controller stores collected data in a volatile and/or nonvolatile memory. The collected data includes data from vehicle sensors such as cameras (both exterior and interior to the vehicle), lidars, etc. used for autonomous driving event logging ([0003] & [0010]). Since the vehicle sensor data is continuously collected via the volatile memory, this teaches storing data related to the intervening control.

Regarding claim 18, Kato in view of Nithiyanantham and Joe teaches the method of claim 16, wherein the status information includes one or more of i) information obtained by at least one light detection and ranging (LIDAR) module (see at least Joe [0003] & [0010], where the autonomous driving sensing device may be a LIDAR sensor), ii) information obtained by at least one radar, or iii) information obtained by at least one ultrasonic sensor.
Regarding claim 19, Kato in view of Nithiyanantham and Joe teaches the method of claim 11, further comprising storing, by the controller, details of the autonomous driving control and sensor data while the vehicle is controlled in an autonomous driving mode (see at least Joe [0024], where the controller may store data on the obstacle ahead and data indicating that the autonomous vehicle is in a state in which it is difficult to drive in the non-volatile memory) and a video of surroundings of the vehicle captured during the autonomous driving mode (see at least Joe [0062]-[0063], where the main controller stores collected data in a volatile and/or nonvolatile memory). The collected data includes data from vehicle sensors such as cameras (both exterior and interior to the vehicle), lidars, etc. used for autonomous driving event logging ([0003] & [0010]). Since the vehicle sensor data is continuously collected via the volatile memory, this teaches storing a video of surroundings of the vehicle captured during the autonomous driving mode. Further, Kato discloses recording video data during a detected event; therefore, Kato may be further modified by Joe in order to record video data during an event captured during the autonomous driving mode.

Regarding claim 20, Kato in view of Nithiyanantham and Joe teaches the method of claim 11, wherein the preset function includes warning a driver of the preset dangerous event while the vehicle is controlled in an autonomous driving mode; see at least Joe [0024], where the controller may provide transition guidance (e.g., a warning) so that the driver can take control of the driving when it is determined that an obstacle detected around the vehicle makes it difficult for the autonomous vehicle to drive.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Cho et al. (KR20150106154 A) discloses a vehicle control apparatus and method which predicts an unstable state of a vehicle and records an image around the vehicle in response to an event occurring around the vehicle. Morita et al. (US 11,995,926 B2) discloses a recording control device configured to detect an event related to the vehicle and start recording imaging data in response to the detection of the event. Lu et al. (US 2024/0166240 A1) discloses computer-based management of accident prevention in autonomous vehicles.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Brittany Renee Peko, whose telephone number is (408) 918-7506. The examiner can normally be reached Monday through Thursday, 8:30-6:30 PT. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Erin Bishop, can be reached at 571-270-3713. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
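The event-logging scheme the rejections repeatedly cite from Joe [0062]-[0063] (sensor data continuously collected into a volatile buffer, with event-triggered persistence of the surrounding video and status data to nonvolatile storage) can be sketched as follows. This is a minimal illustration of that general ring-buffer pattern only; all class, method, and field names are hypothetical and do not come from any cited reference.

```python
from collections import deque

class EventDataRecorder:
    """Hypothetical sketch of continuous capture with event-triggered
    persistence, as described for the cited event-logging scheme."""

    def __init__(self, buffer_seconds=30, fps=10):
        # Volatile ring buffer: oldest frames are silently overwritten,
        # which limits write wear on the nonvolatile storage device.
        self.buffer = deque(maxlen=buffer_seconds * fps)
        self.persisted = []  # stands in for nonvolatile storage

    def collect(self, frame, status):
        # Continuous collection during driving (any control mode):
        # each entry pairs a video frame with vehicle status info.
        self.buffer.append((frame, status))

    def on_event(self, event_info):
        # Event-triggered persistence: store the buffered surrounding
        # video and status data in association with the event data
        # (e.g., an intervening control and its warning flag).
        record = {"event": event_info, "frames": list(self.buffer)}
        self.persisted.append(record)
        return record

# Usage: a 2-second buffer at 5 fps holds at most 10 frames.
rec = EventDataRecorder(buffer_seconds=2, fps=5)
for i in range(20):
    rec.collect(frame=f"frame{i}", status={"speed_kph": 60 + i})
saved = rec.on_event({"type": "intervening_control", "warning_given": True})
print(len(saved["frames"]))  # prints 10: only the most recent frames survive
```

The fixed-length `deque` models why the claimed scheme is argued to extend storage lifespan: routine frames cycle through volatile memory and only event-associated snapshots reach persistent storage.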
/B.R.P./ 12/22/2025, Examiner, Art Unit 3665
/RUSSELL FREJD/ Primary Examiner, Art Unit 3661

Prosecution Timeline

Oct 08, 2024
Application Filed
Dec 22, 2025
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12589747
VEHICLE CONTROL SYSTEMS FOR AUTOMATED VEHICLE PLATOON DRIVING
2y 5m to grant Granted Mar 31, 2026
Patent 12583436
HYBRID ELECTRIC VEHICLE
2y 5m to grant Granted Mar 24, 2026
Patent 12580242
BATTERY TEMPERATURE CONTROL APPARATUS AND METHOD FOR ELECTRIC VEHICLES
2y 5m to grant Granted Mar 17, 2026
Patent 12576736
BATTERY ELECTRIC VEHICLE
2y 5m to grant Granted Mar 17, 2026
Patent 12576858
INTELLIGENT SETTINGS OF ONBOARD SENSORS ON A VEHICLE
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
83%
Grant Probability
97%
With Interview (+14.2%)
2y 9m
Median Time to Grant
Low
PTA Risk
Based on 157 resolved cases by this examiner. Grant probability derived from career allow rate.
