DETAILED ACTION
This is a Final Office Action on the Merits in response to communications filed by applicant on January 21, 2026. Claims 1-5 are currently pending and are examined below.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The amendments to the Claims, filed on January 21, 2026, have been entered. Claims 1 and 4 are currently amended and pending; claims 2-3 and 5 are original, unamended, and pending.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-5 are rejected under 35 U.S.C. 103 as being unpatentable over US 2020/0043254 A1 ("Hase") in view of US 11978266 B2 ("Arar") in further view of US 2020/0033147 A1 ("Ahn") in further view of NPL Data Compression ("Wikipedia").
Regarding claim 1, Hase teaches an autonomous driving system mounted on a vehicle and configured to perform autonomous driving of the vehicle (Hase: Abstract, “A data storage device is mounted on a vehicle on which an autonomous driving control device performs an autonomous driving control.”, ¶ 0048, “First, a schematic configuration of a vehicle on which a data storage device according to the present embodiment is mounted will be described. As shown in FIG. 1, a vehicle 10 according to the present embodiment includes, as control devices for performing various controls of the vehicle, an engine electronic control unit (ECU) 20, an electronic control brake system 30, an electric power steering system 40, an airbag ECU 50, an in-vehicle ECU 60, an autonomous driving ECU 70, and the like. These ECUs are each configured mainly using a microcomputer including a central processing unit (CPU), a read-only memory (ROM), a random-access memory (RAM), and the like. Furthermore, these ECUs are connected via an in vehicle network 80 so as to be able to communicate with one another. In the following description, the electronic control brake system 30 is abbreviated as "ECB 30", and the electric power steering system 40 is abbreviated as "EPS 40".”),
the autonomous driving system comprising processing circuitry and one or more storage devices (Hase: Figure 2 processor 91, regular storage medium 92, and saving storage medium 93, ¶ 0069, “The data storage device 90 includes a processor 91, a regular storage medium 92, one or more saving storage media 93, and a detection circuit 97.”), wherein
when the autonomous driving is performed, the processing circuitry is configured to execute: acquiring log data related to the autonomous driving (Hase: ¶ 0070, “The detection circuit 97 receives the output signals of the travel information sensor 73, the switch 74, the ignition switch 75, the accessory switch 76, the voltage sensor 77, and the like, and transmits the received signals to the processor 91.”, ¶ 0095, “As shown in FIG. 3, first, in the process in step S10, the controller 911 causes the regular storage medium 92 to store the output value of each of the sensors 73, 77 and the switches 74 to 76 and the information acquired from each of the ECUs 20 to 70.”. ¶ 0096, “The information acquired by the controller 911 from each of the ECUs 20 to 70 can be roughly classified as information related to autonomous driving shown in FIG. 4, information related to manual driving shown in FIG. 5, and management information shown in FIG. 6. As shown in FIG. 4, the autonomous driving-related information includes "basis information for autonomous driving control" and a "control amount of autonomous driving control". The "control amount of autonomous driving control" includes, for example, a control amount transmitted from the autonomous driving ECU 70 to each of the ECUs 20 to 60. The "basis information for autonomous driving control" includes information providing grounds for the control amount transmitted from the autonomous driving ECU 70 to each of the ECUs 20 to 60. In the "basis information for autonomous driving control", not only the state of the vehicle, but also the state of a person are recorded.”. The cited passages clearly state that the system is configured to acquire log information regarding the autonomous control of the vehicle.);
storing the log data in the one or more storage devices (Hase: ¶ 0093, “The controller 911 causes the regular storage medium 92 to regularly store the output value of each of the sensors 73, 77 and the switches 74 to 76 and the information acquired from each of the ECUs 20 to 70. Furthermore, when the abnormality detector 910 detects an abnormal state, the controller 911 copies data stored in the regular storage medium 92 to each of one or more saving storage media 93.”), and
storing the log data includes compressing or deleting target data that is a part of the log data (Hase: ¶ 0093, “The controller 911 causes the regular storage medium 92 to regularly store the output value of each of the sensors 73, 77 and the switches 74 to 76 and the information acquired from each of the ECUs 20 to 70. Furthermore, when the abnormality detector 910 detects an abnormal state, the controller 911 copies data stored in the regular storage medium 92 to each of one or more saving storage media 93.”, ¶ 0119, “(1) When any abnormality occurs in the vehicle 10, the determination information such as those shown in FIGS. 4 to 7 is stored into the saving storage media 93. Therefore, whether the subject driving the vehicle upon the occurrence of the abnormality is a person or the autonomous driving ECU 70 can be analyzed by analyzing the determination information stored in the saving storage media 93.”, ¶ 0158, “When the data usage of the regular storage medium 92 reaches the upper limit of storage capacity, the controller 911 deletes the data in chronological order.”. The cited passages show that when an abnormal state is detected, the information obtained and stored in the regular storage medium 92 is copied to the saving storage medium 93, and then, after the storage limit is reached for the regular storage medium 92, the data is deleted in chronological order. One of ordinary skill in the art would see that only the data corresponding to an abnormal state (i.e. not corresponding to the ideal state) is saved to the saving storage medium and therefore only the data corresponding to the ideal state is permanently deleted. The data corresponding to the ideal state is permanently deleted as it is not copied to the saving storage medium.).
Hase does not teach an autonomous driving system mounted on a vehicle and configured to perform autonomous driving of the vehicle by using a machine learning model,
determining whether an operator of the vehicle responds to an operation request from the autonomous driving system;
storing the log data includes compressing or deleting target data that is a part of the log data when the operator of the vehicle responds to the operation request from the autonomous driving system.
Arar, in the same field of endeavor, teaches an autonomous driving system mounted on a vehicle and configured to perform autonomous driving of the vehicle by using a machine learning model (Arar: Column 5 lines 29-49, “The sensor data 102A may be used by a body tracker 104 and/or an eye tracker 106 to determine gestures, postures, activities, eye movements (e.g., saccade velocity, smooth pursuits, gaze locations, directions, or vectors, pupil size, blink rate, road scan range and distribution, etc.), and/or other information about an occupant, e.g., a driver, of the vehicle 500. This information may then be used by an attentiveness determiner 108 to determine an attentiveness of an occupant(s), a cognitive load determiner 110 to determine a cognitive load of the occupant(s), and/or a field of view (FOV) projector 116 to compare, via a comparator 120, with external perception outputs 114 from one or more deep neural networks (DNNs) 112. Information representative of the attentiveness, cognitive load, and/or the outputs of the comparator 120 may be analyzed by a state machine 122 to determine a state of the occupant(s), and the state may be used by an action determiner 124 to determine one or more actions or operations to execute (e.g., to issue a visual, audible, and/or tactile notification, suppress a notification, engage an ADAS system, take over autonomous control of the vehicle 500, etc.).”, Column 10 lines 5-28, “Although examples are described herein with respect to using the DNNs(s) 112 (and/or to using DNNs, computer vision algorithms, image processing algorithms, machine learning models, etc., with respect to the body tracker 104, the eye tracker 106, the attentiveness determiner 108, and/or the cognitive load determiner 110), this is not intended to be limiting. For example and without limitation, the DNN(s) 112 and/or the computer vision algorithms, image processing algorithms, machine learning models, etc. 
described herein with respect to the body tracker 104, the eye tracker 106, the attentiveness determiner 108, and/or the cognitive load determiner 110, may include any type of machine learning model or algorithm, such as a machine learning model(s) using linear regression, logistic regression, decision trees, support vector machines (SVM), Naive Bayes, k-nearest neighbor (Knn), K means clustering, random forest, dimensionality reduction algorithms, gradient boosting algorithms, neural networks (e.g., auto-encoders, convolutional, recurrent, perceptrons, long/short term memory/LSTM, Hopfield, Boltzmann, deep belief, deconvolutional, generative adversarial, liquid state machine, etc.), areas of interest detection algorithms, computer vision algorithms, and/or other types of algorithms or machine learning models.”. The cited passages clearly show that machine learning is used in the control of the autonomous vehicle. The vehicle is configured with several machine learning algorithms that determine various aspects of the driver based on sensor data and uses the outputs of the machine learning algorithms in order to determine an action for the vehicle to take.).
Hase teaches an autonomous driving system mounted on a vehicle and configured to perform autonomous driving of the vehicle, the autonomous driving system comprising processing circuitry and one or more storage devices, wherein when the autonomous driving is performed, the processing circuitry is configured to execute: acquiring log data related to the autonomous driving; and storing the log data in the one or more storage devices, and storing the log data includes compressing or deleting target data that is a part of the log data. Hase does not teach an autonomous driving system mounted on a vehicle and configured to perform autonomous driving of the vehicle by using a machine learning model. Arar teaches an autonomous driving system mounted on a vehicle and configured to perform autonomous driving of the vehicle by using a machine learning model. A person of ordinary skill in the art would have had the technological capabilities required to have modified the system taught in Hase with an autonomous driving system mounted on a vehicle and configured to perform autonomous driving of the vehicle by using a machine learning model as taught in Arar. Furthermore, the system taught in Hase is already configured to perform autonomous control of a vehicle based on sensor input, so modifying the system to use machine learning algorithms as taught in Arar would only require the implementation of one of the known machine learning algorithms taught in Arar. Such a modification would not have changed or introduced new functionality. No inventive effort would have been required.
The combination would have yielded the predictable result of an autonomous driving system mounted on a vehicle and configured to perform autonomous driving of the vehicle by using a machine learning model.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the autonomous driving system mounted on a vehicle and configured to perform autonomous driving of the vehicle taught in Hase with an autonomous driving system mounted on a vehicle and configured to perform autonomous driving of the vehicle by using a machine learning model taught in Arar with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification because the combination would have yielded predictable results.
Hase in view of Arar does not teach determining whether an operator of the vehicle responds to an operation request from the autonomous driving system;
storing the log data includes compressing or deleting target data that is a part of the log data when the operator of the vehicle responds to the operation request from the autonomous driving system.
Ahn, in the same field of endeavor, teaches determining whether an operator of the vehicle responds to an operation request from the autonomous driving system (Ahn: ¶ 0295, “As shown in FIG. 20, if the alternative paths provided from the server 503 and the driving mode corresponding to each path are displayed, the first device waits until the user selects the path and the driving mode (S611). In this case, if the user selects the alternative path and the alternative driving mode within a predetermined waiting time (for example, a waiting time of one minute) and inputs these to the first device (S612), the first device transmits the alternative path and the alternative driving mode selected by the user to the server 503 (S613), the server 503 stores the information on the alternative path and the alternative driving mode selected by the user in the database 5034 in log (S614).”, ¶ 0296, “However, when the predetermined waiting time (for example, the waiting time of one minute) elapses without the input of the user to the first device (S615), the first device may output a warning sound or an alarm message through the output unit (S616). 
After the warning sound or the alarm message is output, if the user selects the alternative path and the alternative driving mode and inputs the selected these to the first device (S617), the first device transmits the information on the alternative path and the alternative driving mode selected by the user to the server 503 (S613), and the server 503 stores the information on the alternative path and the alternative driving mode selected by the user in the database 5034 in log (S614).”, ¶ 0297, “However, after the first device outputs the warning sound or the alarm message through the output unit (S616), when the user does not select the alternative path and the alternative driving mode, the first device self-selects the alternative path providing the first communication environment satisfying the performance requirements based on the 3GPP 22.816 and selects the alternative driving mode corresponding to the alternative path (S618). Thereafter, the first device controls the first vehicle 510 to move to the destination in the alternative driving mode (S619). In this case, the alternative driving mode may be an autonomous driving type and may include the autonomous driving mode, the remote driving mode, and the cluster driving mode.”. The cited passages clearly teach that the system is configured to determine whether a user responds to an alternative path and driving mode request.);
storing the log data includes storing target data that is a part of the log data when the operator of the vehicle responds to the operation request from the autonomous driving system (Ahn: ¶ 0296, “However, when the predetermined waiting time (for example, the waiting time of one minute) elapses without the input of the user to the first device (S615), the first device may output a warning sound or an alarm message through the output unit (S616). After the warning sound or the alarm message is output, if the user selects the alternative path and the alternative driving mode and inputs the selected these to the first device (S617), the first device transmits the information on the alternative path and the alternative driving mode selected by the user to the server 503 (S613), and the server 503 stores the information on the alternative path and the alternative driving mode selected by the user in the database 5034 in log (S614).”. The system is clearly configured to store log data when a user responds to an operation request by the vehicle.).
Hase in view of Arar teaches an autonomous driving system mounted on a vehicle and configured to perform autonomous driving of the vehicle by using a machine learning model, the autonomous driving system comprising processing circuitry and one or more storage devices, wherein when the autonomous driving is performed, the processing circuitry is configured to execute: acquiring log data related to the autonomous driving; storing the log data in the one or more storage devices, and storing the log data includes compressing or deleting target data that is a part of the log data. Hase in view of Arar does not teach storing the log data includes storing target data that is a part of the log data when the operator of the vehicle responds to the operation request from the autonomous driving system. Ahn teaches storing the log data includes storing target data that is a part of the log data when the operator of the vehicle responds to the operation request from the autonomous driving system. A person of ordinary skill in the art would have had the technological capabilities required to have modified the system taught in Hase in view of Arar with storing the log data includes storing target data that is a part of the log data when the operator of the vehicle responds to the operation request from the autonomous driving system as taught in Ahn. Furthermore, the system taught in Hase in view of Arar is already configured to store or delete data based on a plurality of conditions. One of ordinary skill in the art would have had the technological capabilities required to have modified the system taught in Hase in view of Arar with the condition for storing data being whether or not a user interacts with an operation request by the vehicle as taught in Ahn according to known methods. Such a modification would not have changed or introduced new functionality. No inventive effort would have been required.
The combination would have yielded the predictable result of an autonomous driving system mounted on a vehicle and configured to perform autonomous driving of the vehicle by using a machine learning model, the autonomous driving system comprising processing circuitry and one or more storage devices, wherein when the autonomous driving is performed, the processing circuitry is configured to execute: storing the log data includes storing target data that is a part of the log data when the operator of the vehicle responds to the operation request from the autonomous driving system.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the system taught in Hase in view of Arar with storing the log data includes storing target data that is a part of the log data when the operator of the vehicle responds to the operation request from the autonomous driving system taught in Ahn with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification because the combination would have yielded predictable results.
Hase in view of Arar in further view of Ahn does not teach storing the log data includes compressing or deleting target data that is a part of the log data when the operator of the vehicle responds to the operation request from the autonomous driving system.
Wikipedia teaches storing the log data includes compressing or deleting target data that is a part of the log data when the operator of the vehicle responds to the operation request from the autonomous driving system (Wikipedia: Entire document. The cited NPL document discusses data compression and methods of data compression, as well as the advantages and disadvantages of each method and of data compression as a whole.).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the system taught in Hase in view of Arar in further view of Ahn with storing the log data includes compressing or deleting target data that is a part of the log data when the operator of the vehicle responds to the operation request from the autonomous driving system taught in Wikipedia with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification because compressing data reduces the resources required to store and transmit said data (Wikipedia: Introduction ¶ 0003, “Compression is useful because it reduces the resources required to store and transmit data. Computational resources are consumed in the compression and decompression processes. Data compression is subject to a space-time complexity trade-off. For instance, a compression scheme for video may require expensive hardware for the video to be decompressed fast enough to be viewed as it is being decompressed, and the option to decompress the video in full before watching it may be inconvenient or require additional storage. The design of data compression schemes involves trade-offs among various factors, including the degree of compression, the amount of distortion introduced (when using lossy data compression), and the computational resources required to compress and decompress the data.”).
Regarding claim 2, Hase in view of Arar in further view of Ahn in further view of Wikipedia teaches wherein the log data includes system data related to behavior of the autonomous driving system (Hase: ¶ 0096, “The information acquired by the controller 911 from each of the ECUs 20 to 70 can be roughly classified as information related to autonomous driving shown in FIG. 4, information related to manual driving shown in FIG. 5, and management information shown in FIG. 6. As shown in FIG. 4, the autonomous driving-related information includes "basis information for autonomous driving control" and a "control amount of autonomous driving control". The "control amount of autonomous driving control" includes, for example, a control amount transmitted from the autonomous driving ECU 70 to each of the ECUs 20 to 60. The "basis information for autonomous driving control" includes information providing grounds for the control amount transmitted from the autonomous driving ECU 70 to each of the ECUs 20 to 60. In the "basis information for autonomous driving control", not only the state of the vehicle, but also the state of a person are recorded.”. The data being stored clearly includes information regarding the behavior of the autonomous driving system.)
and operator data related to a state of the operator (Hase: ¶ 0072, “The abnormality detector 910 detects an abnormal state on the basis of the information acquired from each of the ECUs 20, 30, 40, 50, 60, 70 via the in-vehicle network 80. Specifically, the ECUs 20 to 70 individually monitor the corresponding control systems for any abnormality. In response to a request from the abnormality detector 910, each of the ECUs 20 to 70 reports the abnormality detection result of the control system on each occasion to the abnormality detector 910. The abnormality detector 910 detects an abnormal state on the basis of the abnormality detection result transmitted from each of the ECUs 20 to 70, the output values of the sensors 73, 77 and the switches 74 to 76, and so on. An abnormal state detected by the abnormality detector 910 includes, for example, an abnormality in the vehicle 10, an abnormality in an occupant, and an abnormality in the environment surrounding the vehicle 10.”, ¶ 0074, “Specifically, the abnormality detector 910 detects an abnormality in an occupant on the basis of the state of the occupant detected by the in-vehicle ECU 60 using the occupant monitor sensor 62.”, ¶ 0096, “The information acquired by the controller 911 from each of the ECUs 20 to 70 can be roughly classified as information related to autonomous driving shown in FIG. 4, information related to manual driving shown in FIG. 5, and management information shown in FIG. 6. As shown in FIG. 4, the autonomous driving-related information includes "basis information for autonomous driving control" and a "control amount of autonomous driving control". The "control amount of autonomous driving control" includes, for example, a control amount transmitted from the autonomous driving ECU 70 to each of the ECUs 20 to 60. 
The "basis information for autonomous driving control" includes information providing grounds for the control amount transmitted from the autonomous driving ECU 70 to each of the ECUs 20 to 60. In the "basis information for autonomous driving control", not only the state of the vehicle, but also the state of a person are recorded.”. The data being stored clearly includes the state of the operator.).
Regarding claim 3, Hase in view of Arar in further view of Ahn in further view of Wikipedia teaches wherein the target data includes the operator data (Hase: ¶ 0074, “Specifically, the abnormality detector 910 detects an abnormality in an occupant on the basis of the state of the occupant detected by the in-vehicle ECU 60 using the occupant monitor sensor 62.”, ¶ 0093, “The controller 911 causes the regular storage medium 92 to regularly store the output value of each of the sensors 73, 77 and the switches 74 to 76 and the information acquired from each of the ECUs 20 to 70. Furthermore, when the abnormality detector 910 detects an abnormal state, the controller 911 copies data stored in the regular storage medium 92 to each of one or more saving storage media 93.”, ¶ 0096, “The information acquired by the controller 911 from each of the ECUs 20 to 70 can be roughly classified as information related to autonomous driving shown in FIG. 4, information related to manual driving shown in FIG. 5, and management information shown in FIG 6. As shown in FIG. 4, the autonomous driving-related information includes "basis information for autonomous driving control" and a "control amount of autonomous driving control". The "control amount of autonomous driving control" includes, for example, a control amount transmitted from the autonomous driving ECU 70 to each of the ECUs 20 to 60. The "basis information for autonomous driving control" includes information providing grounds for the control amount transmitted from the autonomous driving ECU 70 to each of the ECUs 20 to 60. In the "basis information for autonomous driving control", not only the state of the vehicle, but also the state of a person are recorded.”, ¶ 0158, “When the data usage of the regular storage medium 92 reaches the upper limit of storage capacity, the controller 911 deletes the data in chronological order.”. 
The cited passages show that when an abnormal state is detected, the information obtained and stored in the regular storage medium 92 is copied to the saving storage medium 93, and then, after the storage limit is reached for the regular storage medium 92, the data is deleted in chronological order. The cited passages clearly show that the data regarding the state of the operator is stored in the regular storage medium and is deleted once the regular storage medium reaches its upper capacity limit. The data regarding the state of the operator is only copied, and therefore saved, to the saving storage medium when an abnormal state is detected (i.e. the operator state is not the ideal state). Therefore, the state of the operator is clearly a component of the target data.).
Regarding claim 4, Hase in view of Arar in further view of Ahn in further view of Wikipedia teaches wherein the processing circuitry is further configured to execute setting log storage priority to each piece of data included in the log data (Hase: ¶ 0225, “In the case of leaving the entire determination information in the saving storage medium 93, if the vehicle 10 continues traveling, the data usage of the storage media 92, 93 reaches the upper limit of storage capacity, and thus the data stored in the storage media 92, 93 need to be deleted.”, ¶ 0231, “The controller 911 may set data as deletable in the aforementioned deletion timing, and when the data usage of the storage media 92, 93 reaches the upper limit of storage capacity, delete the data in order of predetermined priorities. The order of predetermined priorities is determined as indicated in (h1) to (h5) below, for example.”, ¶ 0234, “(h3) Setting the order of priorities according to triggers leading to recording.”, ¶ 0237, “Examples of the trigger indicated in the above (h3) include events listed in (i1) to (i6) below.”, ¶ 0242, “(i5) Occurrence of an abnormality in a driver. For example, the case where the driver is unconscious, intoxicated, or does not wear the seatbelt.”. The cited passages clearly teach setting storage priority to the data collected by the system.), and
storing the log data includes setting data of which the log storage priority is lower as the target data prior to data of which the log storage priority is higher when the operator of the vehicle responds to the operation request from the autonomous driving system (Hase: ¶ 0225, “In the case of leaving the entire determination information in the saving storage medium 93, if the vehicle 10 continues traveling, the data usage of the storage media 92, 93 reaches the upper limit of storage capacity, and thus the data stored in the storage media 92, 93 need to be deleted. Thus, the controller 911 according to the present embodiment deletes a part or the whole of the data stored in the storage media 92, 93, at deletion timing listed in (g1) to (g5) below.”, ¶ 0226, “(g1) Upon the lapse of a predetermined length of time.”, ¶ 0227, “(g2) At a point when the data usage exceeds a predetermined threshold.”, ¶ 0228, “At a point when an occupant, a dealer, or the like makes a delete instruction.” ¶ 0231, “The controller 911 may set data as deletable in the aforementioned deletion timing, and when the data usage of the storage media 92, 93 reaches the upper limit of storage capacity, delete the data in order of predetermined priorities. The order of predetermined priorities is determined as indicated in (h1) to (h5) below, for example.”, ¶ 0234, “(h3) Setting the order of priorities according to triggers leading to recording.”, ¶ 0237, “Examples of the trigger indicated in the above (h3) include events listed in (i1) to (i6) below.”, ¶ 0242, “(i5) Occurrence of an abnormality in a driver. For example, the case where the driver is unconscious, intoxicated, or does not wear the seatbelt.”, Ahn: ¶ 0296, “However, when the predetermined waiting time (for example, the waiting time of one minute) elapses without the input of the user to the first device (S615), the first device may output a warning sound or an alarm message through the output unit (S616). 
After the warning sound or the alarm message is output, if the user selects the alternative path and the alternative driving mode and inputs the selected these to the first device (S617), the first device transmits the information on the alternative path and the alternative driving mode selected by the user to the server 503 (S613), and the server 503 stores the information on the alternative path and the alternative driving mode selected by the user in the database 5034 in log (S614).”. The cited passage clearly teach that the priority of deletion for the data is set lower when an abnormality in the driver state is detected (i.e. when the driver state does is not the ideal state). One of ordinary skill in the art would see that this clearly means that driver state data that does not indicate an abnormality (i.e. the driver state is the ideal state) has a higher deletion priority than driver state data that does indicate an abnormality (i.e. driver state data that is not the ideal state). This means that the data that matches the ideal state is set as the target data and deleted before data that does not match the ideal state. Han clearly teaches that the log data is stored when a user responds to an operation request by the vehicle. Therefore the combination of Hase in view of Arar in further view of Ahn in further view of Wikipedia teaches the limitations of claim 4.).
Regarding claim 5, Hase in view of Arar in further view of Ahn in further view of Wikipedia teaches wherein the log data includes recognition data acquired by recognizing surroundings of the vehicle (Hase: ¶ 0056, “The surrounding recognition sensor 71 detects an object present in a predetermined range that is set around the vehicle 10, such as a predetermined range in front of the vehicle 10 and a predetermined range behind the vehicle 10, and outputs, to the autonomous driving ECU 70, a signal corresponding to the detected object. The surrounding recognition sensor 71 includes, for example, a camera and a lidar device. On the basis of the output signal of the surrounding recognition sensor 71, the autonomous driving ECU 70 detects an object present around the vehicle 10.”, ¶ 0072, “The abnormality detector 910 detects an abnormal state on the basis of the information acquired from each of the ECUs 20, 30, 40, 50, 60, 70 via the in-vehicle network 80. Specifically, the ECUs 20 to 70 individually monitor the corresponding control systems for any abnormality. In response to a request from the abnormality detector 910, each of the ECUs 20 to 70 reports the abnormality detection result of the control system on each occasion to the abnormality detector 910. The abnormality detector 910 detects an abnormal state on the basis of the abnormality detection result transmitted from each of the ECUs 20 to 70, the output values of the sensors 73, 77 and the switches 74 to 76, and so on. An abnormal state detected by the abnormality detector 910 includes, for example, an abnormality in the vehicle 10, an abnormality in an occupant, and an abnormality in the environment surrounding the vehicle 10.”, ¶ 0106, “Furthermore, the controller 911 causes the regular storage medium 92 to store at least one of the following (β1) to (β3) as the above information (β)”, ¶ 0108, “(β2) An "own vehicle, another vehicle, and the surrounding situation" as the basis information for autonomous driving control.”. The cited passages clearly show that the log data includes a recognition result of the surroundings of the vehicle.), and
setting the log storage priority includes setting the log storage priority of the recognition data regarding a range monitored by the operator to be lower than the log storage priority of the recognition data regarding a range not monitored by the operator (Hase: ¶ 0225, “In the case of leaving the entire determination information in the saving storage medium 93, if the vehicle 10 continues traveling, the data usage of the storage media 92, 93 reaches the upper limit of storage capacity, and thus the data stored in the storage media 92, 93 need to be deleted.”, ¶ 0231, “The controller 911 may set data as deletable in the aforementioned deletion timing, and when the data usage of the storage media 92, 93 reaches the upper limit of storage capacity, delete the data in order of predetermined priorities. The order of predetermined priorities is determined as indicated in (h1) to (h5) below, for example.”, ¶ 0234, “(h3) Setting the order of priorities according to triggers leading to recording.”, ¶ 0237, “Examples of the trigger indicated in the above (h3) include events listed in (i1) to (i6) below.”, ¶ 0242, “(i5) Occurrence of an abnormality in a driver. For example, the case where the driver is unconscious, intoxicated, or does not wear the seatbelt.”, See ¶ 0238-0241, 0243 for additional deletion priorities. 
Arar: Column 5 lines 29-49, “Information representative of the attentiveness, cognitive load, and/or the outputs of the comparator 120 may be analyzed by a state machine 122 to determine a state of the occupant(s), and the state may be used by an action determiner 124 to determine one or more actions or operations to execute (e.g., to issue a visual, audible, and/or tactile notification, suppress a notification, engage an ADAS system, take over autonomous control of the vehicle 500, etc.).”, Column 6 lines 1-33, “The eye tracker 106 may use the sensor data 102A-e.g., sensor data from one or more in-cabin cameras, NIR cameras or sensors, and/or other eye-tracking sensor types-to determine gaze directions and movements, fixations, road scanning behaviors (e.g., road scanning patterns, distribution, and range), saccade information (e.g., velocity, direction, etc.), blink rate, smooth pursuit information (e.g., velocity, direction, etc.), and/or other information. The eye tracker 106 may determine time periods corresponding to certain states, such as how long a fixation lasts, and/or may track how many times certain states are determined-e.g., how many fixations, how many saccades, how many smooth pursuits, etc. The eye tracker 106 may monitor or analyze each eye individually, and/or may monitor or analyze both eyes together. For example, both eyes may be monitored in order to use triangulation for measuring a depth of an occupant's gaze. In some embodiments, the eye tracker 106 may execute one or more machine learning algorithms, deep neural networks, computer vision algorithms, image processing algorithms, mathematical algorithms, and/or the like to determine the eye tracking information.”, Column 6 line 34 – Column 7 line 10, “The attentiveness determiner 108 may be used to determine the attentiveness of the occupant(s). 
For example, the outputs from the body tracker 104 and/or the eye tracker 106 may be processed or analyzed by the attentiveness determiner 108 to generate an attentiveness value, score, or level. The attentiveness determiner 108 may execute one or more machine learning algorithms, deep neural networks, computer vision algorithms, image processing algorithms, mathematical algorithms, and/or the like to determine the attentiveness. For example, with respect to FIGS. 2A-2E, FIG. 2A includes a graph 202 corresponding to a current (e.g., corresponding to a current time or a period of time-such as a second, three seconds, five seconds, etc.) gaze direction and gaze information. For example, the gaze direction may be represented by points 212, where the (x, y) locations in the graph 202 may have corresponding locations with respect to the vehicle 500. The graph 202 may be used to determine eye movement types---e.g., a most recent eye movement type-such as a saccade, as referenced in FIG. 2A, or a smooth pursuit, fixation, etc. As a further example, with respect to FIG. 2B, the points 212 from the graph 202 may be reflected in graph 204, which may reflect current and/or recent (e.g., within last second, three seconds, etc.) gaze regions of the occupant(s). For example, any number of gaze regions 214 (e.g., gaze regions 214A-214F) may be used to determine road scanning behaviors, fixations, and/or other information that may be used by the attentiveness determiner 108. 
As non-limiting examples, the gaze regions 214 may include a left side gaze region 214A (e.g., corresponding to a driver side window, a driver side mirror, etc.), a left front gaze region 214B (e.g., corresponding to a left half or portion of a front windshield), a right front gaze region 214C (e.g., corresponding to a right half or portion of a front windshield), a right side gaze region 214D (e.g., corresponding to a passenger side window, a passenger side mirror, etc.), an instrument cluster gaze region 214E (e.g., corresponding to an instrument cluster 532 or instrument panel behind, below, and/or above the steering wheel), and/or a center console gaze region 214F (e.g., corresponding to control-bearing surfaces, displays, touch screen interfaces, radio controls, air conditioning controls, hazard light controls, in-vehicle infotainment (IVI), in-car entertainment (ICE), and/or other center console features).”).
Hase teaches an autonomous driving system mounted on a vehicle and configured to perform autonomous driving of the vehicle that sets the priority of deletion of log data based on various criteria. Arar teaches a system for determining where a driver of the vehicle is looking and how attentive they are being (i.e. whether they are monitoring/paying attention to the region they are looking at), and then uses this information in the control of the autonomous vehicle. A person of ordinary skill in the art would have had the technological capability to modify the system taught in Hase to set the deletion priority of data based on the area being monitored by the driver using the methods taught in Arar. The system taught in Hase is already configured to monitor the operator of a vehicle and determine their state, as well as to set the deletion priority of log data based on the operator state. A person of ordinary skill in the art would have been able to modify the system taught in Hase to include the area being monitored by the driver as part of the driver state using the methods taught in Arar. Furthermore, modifying the deletion priority to include the area being monitored by the driver would require only a simple modification of the deletion priority regarding the operator state already taught in Hase. These modifications would not have changed or introduced new functionality, and no inventive effort would have been required. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, that the combination of Hase in view of Arar in further view of Ahn in further view of Wikipedia teaches the limitations of claim 5.
Response to Arguments
Applicant’s arguments with respect to claim(s) 1 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Noah W Stiebritz whose telephone number is (571)272-3414. The examiner can normally be reached Monday through Friday, 7-5 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ramon Mercado can be reached at (571) 270-5744. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/N.W.S./Examiner, Art Unit 3658
/Ramon A. Mercado/Supervisory Patent Examiner, Art Unit 3658