Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim(s) 1-20 are pending for examination.
This Action is made NON-FINAL.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 2/05/2026 has been entered.
Response to Arguments
With regard to claim(s) 1-3, 6-11, 14-16, and 18-20 previously rejected under 35 U.S.C. 102 and claim(s) 4-5, 12-13, and 17 previously rejected under 35 U.S.C. 103, applicant's arguments have been fully considered, but they are deemed moot in view of the new grounds of rejection necessitated by Applicant's amendment.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-3, 6-11, 14-16, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Herman et al. (US 20200398797 A1, hereinafter known as Herman) in view of Deng et al. (US 20210241026 A1, hereinafter known as Deng).
Herman was cited in a previous office action.
Regarding claim 1, Herman teaches A sensor detection method applied to a sensor detection apparatus, the method comprising: obtaining data collected by a plurality of sensors;
{Para [0062-0063] “FIG. 6 is a diagram of an example process 600 for performing a sensor diagnostic. The process 600 begins in a block 605, in which a computer 105 actuates a time-of-flight sensor 200 to collect image data 115 around a vehicle 101. The computer 105 actuates a light source 210 of the time-of-flight sensor 200 and the camera 205 receives light reflected from objects around the vehicle 101, generating image data 115.
Next, in a block 610, the computer 105 actuates an image sensor 110 to collect second image data 115 around the vehicle 101. The image sensor 110 collects image data 115 of the same object that the time-of-flight sensor 200 collects. The image sensor 110 can be, e.g., a CMOS camera, a CCD camera, etc.”
}
extracting, by each of a plurality of encoders, a feature from data collected by one or more of the plurality of sensors, to obtain a plurality of pieces of feature data, wherein the plurality of pieces of feature data all have a consistent form
{ para [0050] “The computer 105 can localize the data 115 in a vehicle coordinate system, i.e., specify coordinates of objects indicated by the data 115 in the vehicle coordinate system. The vehicle coordinate system can be a conventional Cartesian coordinate system centered at an origin (e.g., a front center point of a bumper) having a first axis extending in a vehicle-forward longitudinal direction and a second axis extending in a vehicle-crosswise lateral direction, the second axis being perpendicular to the first axis. The data 115 from the time-of-flight sensor 200 and the image sensor 110 can be collected in a sensor coordinate system, i.e., a conventional Cartesian coordinate system having an origin at the respective sensor 110, 200. The computer 105, using conventional coordinate transformation techniques, can transform spatial coordinates of the data 115 in respective sensor coordinate systems to the vehicle coordinate system. By localizing the data 115 in the vehicle coordinate system, the computer 105 can fuse the data 115, i.e., convert the data 115 from the two sensors 110, 200 into a same coordinate system, from the time-of-flight sensor 200 and the image sensor 110 to generate the reflectivity map, 400, 500 and the depth map 405, 505.”
Para [0065] “In the block 620, the computer 105 localizes the image data 115 and the second image data 115 from the vehicle 101 in a vehicle coordinate system. As described above, the data 115 from the time-of-flight sensor 200 and the image sensor 110 can be collected in a sensor coordinate system, i.e., a conventional Cartesian coordinate system having an origin at the respective sensor 110, 200. The computer 105, using conventional coordinate transformation techniques, can transform spatial coordinates of the data 115 in respective sensor coordinate systems to the vehicle coordinate system. By localizing the data 115 in the vehicle coordinate system, the computer 105 can fuse the data 115 from the time-of-flight sensor 200 and the image sensor 110 to generate the reflectivity map, 400, 500 and the depth map 405, 505.”
Where calling the pixel or point cloud arrays to be used in the transformation can be considered extracting features, because a pixel or a point in a point cloud can be considered a feature in that it is representative of a point in the environment.
An encoder, at its most basic, is merely a device that converts data from one form to another; therefore, the computer transforming the data can be considered to be using an encoder.
Additionally, the plurality of pieces of feature data can be said to have a consistent form because they are all spatial in nature. Their form is even more consistent in that they all share the vehicle coordinate system.
}
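As an illustrative sketch only (an assumption for explanation, not Herman's disclosed implementation), the coordinate transformation mapped above can be expressed as applying each sensor's extrinsic rotation and translation so that all feature data end up in one consistent vehicle frame:

```python
import numpy as np

def sensor_to_vehicle(points_sensor, rotation, translation):
    """Map Nx3 points from a sensor coordinate system into the vehicle frame."""
    return points_sensor @ rotation.T + translation

# Hypothetical extrinsics (rotation, translation) for two sensors.
tof_R, tof_t = np.eye(3), np.array([3.5, 0.0, 0.8])   # time-of-flight sensor near the bumper
cam_R, cam_t = np.eye(3), np.array([1.2, 0.0, 1.4])   # image sensor near the windshield

tof_points = np.random.rand(100, 3)   # stand-in for time-of-flight returns
cam_points = np.random.rand(100, 3)   # stand-in for image-derived points

# After the transform, both sets of features share one consistent (vehicle) frame.
tof_vehicle = sensor_to_vehicle(tof_points, tof_R, tof_t)
cam_vehicle = sensor_to_vehicle(cam_points, cam_R, cam_t)
```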
fusing the plurality of pieces of feature data to obtain fused data; and
{Para [0065] “In the block 620, the computer 105 localizes the image data 115 and the second image data 115 from the vehicle 101 in a vehicle coordinate system. As described above, the data 115 from the time-of-flight sensor 200 and the image sensor 110 can be collected in a sensor coordinate system, i.e., a conventional Cartesian coordinate system having an origin at the respective sensor 110, 200. The computer 105, using conventional coordinate transformation techniques, can transform spatial coordinates of the data 115 in respective sensor coordinate systems to the vehicle coordinate system. By localizing the data 115 in the vehicle coordinate system, the computer 105 can fuse the data 115 from the time-of-flight sensor 200 and the image sensor 110 to generate the reflectivity map, 400, 500 and the depth map 405, 505.”
}
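A minimal sketch of one way such fusion could be carried out (an assumption for illustration; Herman's specific map-generation procedure is not reproduced here) is to rasterize the vehicle-frame points from both sensors into a single shared grid:

```python
import numpy as np

def fuse_to_depth_map(points_list, grid_shape=(64, 64), extent=20.0):
    """Accumulate vehicle-frame Nx3 points from several sensors into one
    top-down depth map, keeping the minimum height observed in each cell."""
    depth = np.full(grid_shape, np.nan)
    for pts in points_list:
        ix = np.clip((pts[:, 0] / extent * grid_shape[0]).astype(int), 0, grid_shape[0] - 1)
        iy = np.clip(((pts[:, 1] / extent + 0.5) * grid_shape[1]).astype(int), 0, grid_shape[1] - 1)
        for x, y, z in zip(ix, iy, pts[:, 2]):
            depth[x, y] = z if np.isnan(depth[x, y]) else min(depth[x, y], z)
    return depth

points_a = np.random.rand(200, 3) * [20.0, 10.0, 2.0]   # stand-in for one sensor's points
points_b = np.random.rand(200, 3) * [20.0, 10.0, 2.0]   # stand-in for the other sensor's points
fused_depth_map = fuse_to_depth_map([points_a, points_b])
```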
performing inference on the fused data to obtain clean status information of each of the plurality of sensors.
{Para [0068-0071] “Next, in a block 635, the computer 105 determines a reflectivity difference between the light reflectivity map 400, 500 and a predicted light reflectivity map. The reflectivity difference is a measure of the reflected light that the time-of-flight sensor 200 does not receive because of occlusion from debris 300. That is, the predicted reflectivity map indicates the reflectivity that the time-of-flight sensor 200 should receive, and the light reflectivity difference indicates pixels that can be occluded by debris 300 on the time-of-flight sensor 200.
Next, in a block 640, the computer 105 determines a depth difference between the depth map 405, 505 and a predicted depth map. The depth difference is a measure of the depth not detected by the time-of-flight sensor 200 because of occlusion from debris 300. That is, the predicted depth map indicates the depth that the time-of-flight sensor 200 should detect, and the depth difference indicates pixels that can be occluded by debris 300 on the time-of-flight sensor 200.
Next, in a block 645, the computer 105 masks pixels of the reflectivity map 400, 500 and/or the depth map 405, 505 that are visible only to the time-of-flight sensor 200. That is, as described above, the field of view of the time-of-flight sensor 200 can differ from the field of view of the image sensor 110. For example, the time-of-flight sensor 200 can have a greater field of view and/or can include more pixels than the image sensor 110 and can view objects that the image sensor 110 cannot view. The computer 105 can identify pixels of the time-of-flight sensor 200 that do not have corresponding pixels of the image sensor 110 and can mask the identified pixels, i.e., hide the pixels from use in calculations such as object detection, sensor fusion, etc., as described above.
Next, in a block 650, the computer 105 determines an occlusion of the time-of-flight sensor 200, i.e., the amount of pixels obscured by debris or another object on the time-of-flight sensor 200. For example, the computer 105 can compare each pixel of the reflectivity map 400, 500 to a corresponding pixel of the predicted reflectivity map. If the pixel of the reflectivity map 400, 500 has a reflectivity below a reflectivity threshold and the corresponding pixel of the predicted reflectivity map is above the reflectivity threshold, the computer 105 can determine that the pixel is occluded. The reflectivity threshold can be determined based on a data 115 collection limit of the time-of-flight sensor 200, e.g., pixels having a reflectivity below the reflectivity threshold return a null value (e.g., NaN) for the reflectivity, and pixels having a reflectivity above the reflectivity threshold return a non-null value. That is, if the pixel in the reflectivity map 400, 500 returns a null value and the predicted reflectivity map does not return a null value for the corresponding pixel, the computer 105 can determine that the pixel in the reflectivity map 400, 500 is occluded by debris.”
}
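A minimal sketch of the pixel-wise comparison described above (the threshold value and map contents are assumptions for illustration, not Herman's data): a pixel counts as occluded when its observed reflectivity is missing or below threshold while the predicted map indicates it should be visible.

```python
import numpy as np

def count_occluded(observed, predicted, reflectivity_threshold=0.05):
    """Count pixels that should reflect (per the predicted map) but do not."""
    below = np.isnan(observed) | (observed < reflectivity_threshold)
    expected_visible = predicted >= reflectivity_threshold
    return int(np.count_nonzero(below & expected_visible))

observed = np.random.rand(480, 640)
observed[100:120, 200:260] = np.nan            # simulate debris occluding a patch of pixels
predicted = np.random.rand(480, 640) + 0.1     # predicted map expects returns everywhere

occluded_pixels = count_occluded(observed, predicted)
occlusion_ratio = occluded_pixels / observed.size
```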
and controlling a sensor cleaning apparatus to perform cleaning based on the clean status information.
{Para [0008] “A computer includes a processor and a memory, the memory storing instructions executable by the processor to collect first sensor data from a time-of-flight sensor, collect second sensor data from one or more sensors, generate a virtual map of at least one of a light reflectivity or a depth from the time-of-flight sensor from the collected first sensor data and the collected second sensor data, determine a difference between the light reflectivity or the depth of each pixel of the first sensor data from the light reflectivity or the depth of each corresponding pixel of the virtual map, determine an occlusion of the time-of-flight sensor as a number of pixels having respective differences of the light reflectivity or the depth exceeding an occlusion threshold; and actuate a component to clean the time-of-flight sensor when the occlusion exceeds an occlusion threshold.”
}
Herman does not teach wherein the plurality of pieces of feature data all have a consistent form of a three-dimensional matrix with a same quantity of rows, columns, and layers.
However, Deng teaches wherein the plurality of pieces of feature data all have a consistent form of a three-dimensional matrix with a same quantity of rows, columns and layers
{Para [0122] “The deep latent ensemble layer 944 combines the feature maps which produces merged or combined feature vectors. The deep latent ensemble layer 944 applies average or maximum pooling of the feature tensors (e.g., matrix) from the feature extractors 916, 924 and 928 (since they all have the same dimension W×L×(number of channels). The output from the deep latent ensemble layer 944 is provided to the DNN classifier and regressor 948. The DNN classifier regressor 948 are fully connected networks which helps in separating the data of the merged feature vectors into multiple categorical classes and continuous real values (e.g., an object's position and an object's dimension). For example, the DNN classifier regressor 928 outputs 920 the following parameters for detected objects: object classification (e.g., car, pedestrian, etc.); 3D position of the object (e.g., X,Y, Z coordinates of the object); 3D dimensions of the object (e.g., length, width, height of the object); the direction of the object (e.g., heading); and the velocity of the object.”
}
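The point Deng relies on, that the feature tensors share the same W×L×(number of channels) dimension and can therefore be merged by average or maximum pooling before a classifier/regressor head, can be sketched as follows (the shapes and toy head are assumptions for illustration):

```python
import numpy as np

W, L, C = 64, 64, 32
feat_camera = np.random.rand(W, L, C)   # stand-in feature tensor from one extractor
feat_lidar = np.random.rand(W, L, C)    # stand-in feature tensor from a second extractor
feat_radar = np.random.rand(W, L, C)    # stand-in feature tensor from a third extractor

stacked = np.stack([feat_camera, feat_lidar, feat_radar])   # 3 x W x L x C
merged_avg = stacked.mean(axis=0)                           # average pooling across extractors
merged_max = stacked.max(axis=0)                            # maximum pooling across extractors

# A fully connected classifier/regressor head would then flatten the merged
# tensor and predict classes and continuous values (position, dimensions, etc.).
flattened = merged_avg.reshape(1, -1)
```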
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Herman to incorporate the teachings of Deng because, as discussed in Deng para [0122], providing the feature data in a consistent three-dimensional tensor form (the same dimension W×L×(number of channels)) allows the feature maps to be merged by pooling and provided to a DNN classifier and regressor, which "helps in separating the data of the merged feature vectors into multiple categorical classes and continuous real values (e.g., an object's position and an object's dimension)," thereby enabling the system to output object classification, 3D position, 3D dimensions, heading, and velocity for detected objects.
Regarding claim 2, Herman in view of Deng teaches The method according to claim 1. Herman teaches further comprising: determining, based on the clean status information of each of the plurality of sensors, that at least one sensor in the plurality of sensors needs to be cleaned; and
controlling a sensor cleaning apparatus to clean the at least one sensor.
{Para [0072-0073] “Next, in a block 655, the computer 105 determines whether the occlusion is above a predetermined threshold. For example, the computer 105 can divide the number of identified pixels by the total number of pixels in the reflectivity map 400, 500 to determine a ratio of identified pixels to total pixels. Based on the ratio, the computer 105 can determine the amount and type of obstruction causing the occlusion of the time-of-flight sensor 200. The threshold can be determined as a specific ratio beyond which the computer 105 determines that the time-of-flight sensor 200 requires cleaning. For example, the threshold can be 0.17%.
In the block 660, the computer 105 actuates a cleaning component 120 to remove the debris 300 causing the occlusion from the time-of-flight sensor 200. As described above, based on the ratio of identified occluded pixels, the computer 105 actuate a specific cleaning component 120 to remove the debris 300. For example, if the ratio indicates that the debris 300 is heavy dirt on the camera 205, the computer 105 can actuate a fluid sprayer to spray cleaning fluid onto the camera. In another example, if the ratio indicates that the debris 300 is dust on the light source 210, the computer 105 can actuate an air nozzle to blow air onto the light source 210.”
}
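A minimal sketch of this ratio-and-threshold decision (the actuation step is represented by a hypothetical print statement; Herman gives 0.17% as one example threshold value):

```python
def needs_cleaning(occluded_pixels: int, total_pixels: int, threshold: float = 0.0017) -> bool:
    """Return True when the occluded-pixel ratio exceeds the threshold (0.17% here)."""
    return (occluded_pixels / total_pixels) > threshold

if needs_cleaning(occluded_pixels=900, total_pixels=480 * 640):
    print("actuate cleaning component for the time-of-flight sensor")   # hypothetical actuation
```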
Regarding claim 3, Herman in view of Deng teaches The method according to claim 2. Herman teaches wherein clean status information of the at least one sensor comprises information indicating that an object is attached to a surface of each sensor in the at least one sensor and a type of the object, and the controlling the sensor cleaning apparatus to clean the at least one sensor comprises: cleaning each sensor in the at least one sensor based on the type of the object attached to the surface of each sensor in the at least one sensor.
{Para [0059-0060] “The computer 105 can determine the occlusion of the time-of-flight sensor 200 based on a number of pixels of the reflectivity map 400, 500 and/or the depth map 405, 505 that have a reflectivity below a reflectivity threshold or a depth below a depth threshold. For example, the computer 105 can identify the number of pixels of the reflectivity map 400, 500 that have values below the reflectivity threshold and for which corresponding pixels of the predicted reflectivity map are above the reflectivity map. The computer 105 can divide the number of identified pixels by the total number of pixels in the reflectivity map 400, 500 to determine a ratio of identified pixels to total pixels. Based on the ratio, the computer 105 can determine the amount and type of obstruction of the time-of-flight sensor 200. The type of obstruction can be, e.g., light debris, heavy debris, clean water, debris in water, debris on a light source 210, etc. For example, the computer 105 can include a lookup table such as Table 1 listing the type of occlusion based on the ratio of identified pixels to the total pixels:
The computer 105 can actuate a cleaning component 120 to clean the time-of-flight sensor 200 upon identifying the type and amount of obstruction causing the occlusion. For example, if the occlusion is heavy dirt on the camera 205, the computer 105 can actuate an air nozzle to blow air onto the camera 205 to remove the dirt. In another example, if the occlusion is water and dirt on the camera, the computer 105 can actuate a fluid nozzle to spray cleaning fluid onto the camera 205. In yet another example, if the occlusion is debris on the light sources 210, the computer 105 can actuate the air nozzle to blow air onto the light sources 210 to remove the debris.”
}
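As a sketch of the lookup-table approach Herman describes (the ratio bands and the mapping to cleaning components below are assumptions; Herman's actual Table 1 values are not reproduced):

```python
def classify_obstruction(occlusion_ratio: float) -> str:
    """Map an occlusion ratio to an obstruction type (bands are illustrative only)."""
    if occlusion_ratio < 0.002:
        return "light_debris"
    if occlusion_ratio < 0.01:
        return "heavy_debris"
    return "debris_in_water"

# Hypothetical mapping from obstruction type to the cleaning component to actuate.
CLEANING_ACTION = {
    "light_debris": "air_nozzle",
    "heavy_debris": "fluid_sprayer",
    "debris_in_water": "fluid_sprayer",
}

obstruction = classify_obstruction(0.005)
print(f"obstruction: {obstruction}; actuate: {CLEANING_ACTION[obstruction]}")
```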
Regarding claim 6, Herman in view of Deng teaches The method according to claim 1. Herman teaches wherein the plurality of sensors comprise at least two of a camera apparatus, a lidar, a millimeter-wave radar, or an ultrasonic radar.
{Para [0040] “Sensors 110 can include a variety of devices. For example, various controllers in a vehicle 101 may operate as sensors 110 to provide data 115 via the vehicle 101 network or bus, e.g., data 115 relating to vehicle speed, acceleration, position, subsystem and/or component status, etc. Further, other sensors 110 could include cameras, motion detectors, etc., i.e., sensors 110 to provide data 115 for evaluating a position of a component, evaluating a slope of a roadway, etc. The sensors 110 could, without limitation, also include short range radar, long range radar, LIDAR, and/or ultrasonic transducers.”
}
Regarding claim 7, Herman in view of Deng teaches The method according to claim 1. Herman teaches wherein the plurality of pieces of feature data all are feature data in a first coordinate system.
{Para [0065] “In the block 620, the computer 105 localizes the image data 115 and the second image data 115 from the vehicle 101 in a vehicle coordinate system. As described above, the data 115 from the time-of-flight sensor 200 and the image sensor 110 can be collected in a sensor coordinate system, i.e., a conventional Cartesian coordinate system having an origin at the respective sensor 110, 200. The computer 105, using conventional coordinate transformation techniques, can transform spatial coordinates of the data 115 in respective sensor coordinate systems to the vehicle coordinate system. By localizing the data 115 in the vehicle coordinate system, the computer 105 can fuse the data 115 from the time-of-flight sensor 200 and the image sensor 110 to generate the reflectivity map, 400, 500 and the depth map 405, 505.”
}
Regarding claim 8, Herman in view of Deng teaches The method according to claim 7. Herman teaches wherein the first coordinate system is an image coordinate system or a bird eye view (BEV) coordinate system.
{Para [0050] “The computer 105 can localize the data 115 in a vehicle coordinate system, i.e., specify coordinates of objects indicated by the data 115 in the vehicle coordinate system. The vehicle coordinate system can be a conventional Cartesian coordinate system centered at an origin (e.g., a front center point of a bumper) having a first axis extending in a vehicle-forward longitudinal direction and a second axis extending in a vehicle-crosswise lateral direction, the second axis being perpendicular to the first axis. The data 115 from the time-of-flight sensor 200 and the image sensor 110 can be collected in a sensor coordinate system, i.e., a conventional Cartesian coordinate system having an origin at the respective sensor 110, 200. The computer 105, using conventional coordinate transformation techniques, can transform spatial coordinates of the data 115 in respective sensor coordinate systems to the vehicle coordinate system. By localizing the data 115 in the vehicle coordinate system, the computer 105 can fuse the data 115, i.e., convert the data 115 from the two sensors 110, 200 into a same coordinate system, from the time-of-flight sensor 200 and the image sensor 110 to generate the reflectivity map, 400, 500 and the depth map 405, 505.”
A Cartesian coordinate system generated from image data can be considered an image coordinate system.
}
Regarding claim 9, it recites A sensor detection apparatus having limitations similar to those of claim 1 and therefore is rejected on the same basis.
Additionally, Herman teaches A sensor detection apparatus, comprising: at least one processor; and a memory coupled to the at least one processor and storing programming instructions for execution by the at least one processor
{Abstract “A computer includes a processor and a memory, the memory storing instructions executable by the processor to collect first sensor data from a time-of-flight sensor, collect second sensor data from one or more sensors, generate a virtual map of at least one of a light reflectivity or a depth from the time-of-flight sensor from the collected first sensor data and the collected second sensor data, determine a difference between the light reflectivity or the depth of each pixel of the first sensor data from the light reflectivity or the depth of each corresponding pixel of the virtual map, determine an occlusion of the time-of-flight sensor as a number of pixels having respective differences of the light reflectivity or the depth exceeding an occlusion threshold, and actuate a component to clean the time-of-flight sensor when the occlusion exceeds an occlusion threshold.”
}
Regarding claim 10, it recites A sensor detection apparatus having limitations similar to those of claim 2 and therefore is rejected on the same basis.
Regarding claim 11, it recites A sensor detection apparatus having limitations similar to those of claim 3 and therefore is rejected on the same basis.
Regarding claim 14, it recites A sensor detection apparatus having limitations similar to those of claim 6 and therefore is rejected on the same basis.
Regarding claim 15, it recites A sensor detection apparatus having limitations similar to those of claim 7 and therefore is rejected on the same basis.
Regarding claim 16, it recites A sensor detection apparatus having limitations similar to those of claim 8 and therefore is rejected on the same basis.
Regarding claim 18, it recites A vehicle having limitations similar to those of claim 1 and therefore is rejected on the same basis.
Additionally, Herman teaches A vehicle, comprising a sensor detection apparatus, wherein the sensor detection apparatus comprises: at least one processor; and a memory coupled to the at least one processor and storing programming instructions for execution by the at least one processor, wherein the programming instructions, upon execution by the at least one processor, instruct the at least one processor to perform the following operations:
{Abstract “A computer includes a processor and a memory, the memory storing instructions executable by the processor to collect first sensor data from a time-of-flight sensor, collect second sensor data from one or more sensors, generate a virtual map of at least one of a light reflectivity or a depth from the time-of-flight sensor from the collected first sensor data and the collected second sensor data, determine a difference between the light reflectivity or the depth of each pixel of the first sensor data from the light reflectivity or the depth of each corresponding pixel of the virtual map, determine an occlusion of the time-of-flight sensor as a number of pixels having respective differences of the light reflectivity or the depth exceeding an occlusion threshold, and actuate a component to clean the time-of-flight sensor when the occlusion exceeds an occlusion threshold.”
Para [0037] “FIG. 1 illustrates an example system 100 for performing a sensor diagnostic. The system 100 includes a computer 105. The computer 105, typically included in a vehicle 101, is programmed to receive collected data 115 from one or more sensors 110. For example, vehicle 101 data 115 may include a location of the vehicle 101, data about an environment around a vehicle 101, data about an object outside the vehicle such as another vehicle, etc. A vehicle 101 location is typically provided in a conventional form, e.g., geo-coordinates such as latitude and longitude coordinates obtained via a navigation system that uses the Global Positioning System (GPS). Further examples of data 115 can include measurements of vehicle 101 systems and components, e.g., a vehicle 101 velocity, a vehicle 101 trajectory, etc.”
}
Regarding claim 19, it recites A vehicle having limitations similar to those of claim 2 and therefore is rejected on the same basis.
Regarding claim 20, it recites A vehicle having limitations similar to those of claim 3 and therefore is rejected on the same basis.
Claim(s) 4-5 and 12-13 are rejected under 35 U.S.C. 103 as being unpatentable over Herman et al. (US 20200398797 A1, hereinafter known as Herman) in view of Deng et al. (US 20210241026 A1, hereinafter known as Deng) and Sakai et al. (US 20200406864 A1, hereinafter known as Sakai).
Sakai was cited in a previous office action.
Regarding Claim 4, Herman in view of Deng teaches The method according to claim 1. Herman teaches wherein the method is applied to a vehicle, and the method further comprises: determining, based on the clean status information of each of the plurality of sensors, a confidence level of the data collected by each of the plurality of sensors
{Para [0071] “Next, in a block 650, the computer 105 determines an occlusion of the time-of-flight sensor 200, i.e., the amount of pixels obscured by debris or another object on the time-of-flight sensor 200. For example, the computer 105 can compare each pixel of the reflectivity map 400, 500 to a corresponding pixel of the predicted reflectivity map. If the pixel of the reflectivity map 400, 500 has a reflectivity below a reflectivity threshold and the corresponding pixel of the predicted reflectivity map is above the reflectivity threshold, the computer 105 can determine that the pixel is occluded. The reflectivity threshold can be determined based on a data 115 collection limit of the time-of-flight sensor 200, e.g., pixels having a reflectivity below the reflectivity threshold return a null value (e.g., NaN) for the reflectivity, and pixels having a reflectivity above the reflectivity threshold return a non-null value. That is, if the pixel in the reflectivity map 400, 500 returns a null value and the predicted reflectivity map does not return a null value for the corresponding pixel, the computer 105 can determine that the pixel in the reflectivity map 400, 500 is occluded by debris.”
Where the determined amount of occlusion can be considered as a confidence level of the data
}
and acting based on the confidence level of the data collected by each of the plurality of sensors
{Para [0072] “Next, in a block 655, the computer 105 determines whether the occlusion is above a predetermined threshold. For example, the computer 105 can divide the number of identified pixels by the total number of pixels in the reflectivity map 400, 500 to determine a ratio of identified pixels to total pixels. Based on the ratio, the computer 105 can determine the amount and type of obstruction causing the occlusion of the time-of-flight sensor 200. The threshold can be determined as a specific ratio beyond which the computer 105 determines that the time-of-flight sensor 200 requires cleaning. For example, the threshold can be 0.17%.”
The cleaning occurs based on determining that the occlusion has reached a specific level.
}
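Under the interpretation above (treating the determined occlusion amount as a confidence level), the claimed determination can be sketched as follows; the confidence formula and the limit used here are assumptions for illustration only:

```python
def sensor_confidence(occlusion_ratio: float) -> float:
    """Treat a larger occlusion ratio as lower confidence in that sensor's data."""
    return max(0.0, 1.0 - occlusion_ratio)

confidences = {"time_of_flight": sensor_confidence(0.004), "camera": sensor_confidence(0.0001)}
if min(confidences.values()) < 0.999:
    print("low-confidence sensor detected: act on it (e.g., clean or adjust behavior)")
```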
Herman in view of Deng does not teach degrading an autonomous driving level based on the confidence level of the data collected by each of the plurality of sensors, or sending a first instruction to a first prompt apparatus, wherein the first instruction instructs the first prompt apparatus to prompt a user to take over the vehicle.
However, Sakai teaches degrading an autonomous driving level based on the confidence level of the data collected by each of the plurality of sensors, or sending a first instruction to a first prompt apparatus, wherein the first instruction instructs the first prompt apparatus to prompt a user to take over the vehicle.
{Para [0200] “According to the vehicle cleaner system 2100 according to the present embodiment, the vehicle control unit 3 can selectively execute the automatic driving mode and the manual driving mode, and the vehicle control unit 3 is configured to switch the driving mode from the automatic driving mode to the manual driving mode based on the non-cleaning signal received from the cleaner control unit 2116. As described above, in the case where the external sensor 6 is not in the clean state, the automatic driving mode is canceled, so that the information on the outside of the vehicle is not acquired even when the external sensor 6 is in a dirty state. Therefore, erroneous detection of the external sensor 6 can be prevented.”
}
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Herman in view of Deng to incorporate the teachings of Sakai to switch to manual driving when the sensor is not clean, because it improves safety by preventing erroneous detection by a dirty external sensor (Sakai, Para [0200]: "in the case where the external sensor 6 is not in the clean state, the automatic driving mode is canceled . . . Therefore, erroneous detection of the external sensor 6 can be prevented.").
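Sakai's mode-switching behavior can be sketched as a simple check on the clean-state signal (the function and variable names below are hypothetical, not Sakai's):

```python
def select_driving_mode(sensor_clean: bool, current_mode: str) -> str:
    """Switch from automatic to manual driving when the external sensor is not clean."""
    if current_mode == "automatic" and not sensor_clean:
        print("prompting user to take over the vehicle")   # hypothetical prompt
        return "manual"
    return current_mode

mode = select_driving_mode(sensor_clean=False, current_mode="automatic")
```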
Regarding Claim 5, Herman in view of Deng teaches The method according to claim 1.
Herman in view of Deng does not teach further comprising: sending a second instruction to a second prompt apparatus, wherein the second instruction instructs the second prompt apparatus to prompt a user with the clean status information of each of the plurality of sensors.
However, Sakai teaches further comprising: sending a second instruction to a second prompt apparatus, wherein the second instruction instructs the second prompt apparatus to prompt a user with the clean status information of each of the plurality of sensors.
{Para [0199] “According to the vehicle cleaner system 2100 according to the present embodiment, the vehicle control unit 3 is configured to cause the sensor state display unit 140 to display to indicate the external sensor 6 is not in the clean state based on the non-cleaning signal received from the cleaner control unit 2116. As described above, by displaying the clean state of the external sensor 6 on the sensor state display unit 140, the user of the vehicle 1 can be clearly notified of the clean state of the external sensor 6.”
}
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Herman to incorporate the teachings of Sakai to notify the user that the sensor is not clean, because it allows the user to take appropriate action, such as cleaning the sensor themselves or driving more cautiously, thereby improving safety.
Regarding claim 12, it recites A sensor detection apparatus having limitations similar to those of claim 4 and therefore is rejected on the same basis.
Regarding claim 13, it recites A sensor detection apparatus having limitations similar to those of claim 5 and therefore is rejected on the same basis.
Claim(s) 17 is rejected under 35 U.S.C. 103 as being unpatentable over Herman et al. (US 20200398797 A1, hereinafter known as Herman) in view of Deng et al. (US 20210241026 A1, hereinafter known as Deng) and Vitanov (US 20210245714 A1, hereinafter known as Vitanov).
Vitanov was cited in a previous office action.
Regarding Claim 17, Herman in view of Deng teaches The apparatus according to claim 9.
Herman in view of Deng does not teach wherein the apparatus is located in a cloud server.
However, Vitanov teaches wherein the apparatus is located in a cloud server.
{Para [0052] “Referring again to FIG. 4, at block 408 of the method 400, the window rotation determination engine 306 may determine a velocity of the vehicle 302 from vehicle velocity data included in the sensor data 304. It should be appreciated that velocity, as that term is used herein, may simply connote a speed of the vehicle 302 (e.g., a scalar quantity) or a speed and a direction of the vehicle 302 (e.g., a vector quantity). Then, at block 410 of the method 400, the window rotation determination engine 306 may determine whether the velocity of the vehicle 302 is less than a threshold velocity. If the vehicle velocity is determined to be less than the threshold vehicle velocity (a positive determination at block 410)—which may indicate that the vehicle velocity is insufficient to generate enough naturally circulating wind around the transparent surface 102 to produce a desired cleaning effect—a window rotation control engine 310 may send an actuation signal 310 to the rotating glass sensor assembly cleaning apparatus at block 412 to initiate the rotation of the transparent surface 102. The window rotation control engine 308 may execute on a computing device such as an embedded controller (e.g., the controller 114) provided locally in the vehicle 302 or on a remote server.”
}
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Herman in view of Deng to incorporate the teachings of Vitanov because it would have been obvious to try: there are a finite number of predictable configurations for locating the processor and memory, namely locally in the vehicle or on a remote (cloud) server, and selecting the remote server is one of those identified, predictable solutions with a reasonable expectation of success (Para [0052] "The window rotation control engine 308 may execute on a computing device such as an embedded controller (e.g., the controller 114) provided locally in the vehicle 302 or on a remote server.")
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALEXANDER MATTA whose telephone number is (571)272-4296. The examiner can normally be reached Mon - Fri 10:00-6:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, James Lee can be reached at (571) 270-5965. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/A.G.M./Examiner, Art Unit 3668
/JAMES J LEE/Supervisory Patent Examiner, Art Unit 3668