Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
This Office Action is in response to the amendment filed on 9/11/2025. Claims 18 and 19 have been cancelled. Claims 33-35 have been added. Claims 1-17 and 20-35 are currently pending.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-3, 10-12, 14-17, 21-23, 26-30, and 32-35 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by US 20190325595 A1 (hereinafter Stein).
Regarding claim 1, Stein teaches a method comprising (a method for vehicle environment modeling with a camera, paragraph [0030]): obtaining one or more vehicle motion profiles applied to a portion of one or more vehicles traversing a first road segment (FIGS. 7-8, data 830 may include various sequences of image frames captured by one or more vehicle-mounted cameras. The image frames may include video footage captured on various roads, in various geographic locales, under various lighting and weather conditions, for example, paragraph [0076]), wherein the one or more vehicle motion profiles are obtained based at least partially on measurements of motion with one or more sensors disposed on a portion of the one or more vehicles (The system 208 includes processing circuitry to perform vehicle environment modeling via images obtained from the camera 202. The vehicle environment modeling may include modeling the road surface 206, obstacles, obstructions, and moving bodies, paragraph [0044]); and
training a statistical model using the one or more vehicle motion profiles (FIGS. 7-8 illustrate an example of a DNN training system, according to an embodiment. Here, a multi-modal loss function application engine 950 is configured to supply training data 930 as input to the DNN. Training data 830 may include various sequences of image frames captured by one or more vehicle-mounted cameras. The image frames may include video footage captured on various roads, in various geographic locales, under various lighting and weather conditions, for example, paragraph [0076]), the trained statistical model being configured when trained to identify or classify one or more road features associated with a second road segment (FIGS. 7-8, Training data 830 may include various sequences of image frames captured by one or more vehicle-mounted cameras. The image frames may include video footage captured on various roads, in various geographic locales, under various lighting and weather conditions, for example, paragraph [0076]) based at least in part on the vehicle motion profiles applied to a portion of a second vehicle traversing a second road segment. (The system 208 is arranged to provide a data set to an artificial neural network (ANN) to produce a gamma image, paragraph [0045]; the system is used to accurately estimate the planar (or bi-quadratic) model of the road surface, and compute small deviations from the planar (or bi-quadratic) road surface model to detect or quantify various surface features 104, paragraph [0190])
Regarding claim 2, Stein discloses the method of claim 33, further comprising:
associating the one or more road features of the second road segment with one or more geographical locations; and (the target location and size are inputted as images. Target location includes two gradient images in which pixel values represent a distance from the center of the target. Here, a horizontal gradient image 1106 (e.g., position x or P.sub.x) and a vertical gradient image 1108 (e.g., position y or P.sub.y) make up the target location paragraph [0097])
storing, in non-volatile computer readable memory, the one or more geographical locations of the one or more road features of the second road segment. (Registers of the processor 2702, the main memory 2704, the static memory 2706, or the mass storage 2708 may be, or include, a machine readable medium 2722 on which is stored one or more sets of data structures or instructions 2724 (e.g., software) embodying or utilized by any one or more of the techniques or functions described herein, paragraph [0242]; Specific examples of non-transitory machine readable media may include: non-volatile memory, such as semiconductor memory devices (e.g., Electrically Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM)) and flash memory devices; magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks, paragraph [0243])
The examiner notes in this instance that the geographical locations are data structures.
Regarding claim 3, Stein discloses the method of claim 2, further comprising generating a map based on the one or more geographical locations. (Each layer produces a feature map, which is in turn passed to the subsequent layer for further processing along forward propagation path 508. As depicted, the operations of convolutional network portion 502 operate to progressively reduce resolution of the feature maps, while increasing the number of channels (dimensionality) of the feature maps along convolutional forward propagation path 508A. The operations of deconvolutional network portion 504 operate to progressively increase resolution of the feature maps, while decreasing their dimensionality along deconvolutional forward propagation path 508B, paragraph [0067]; Operation of the DNN in inference mode produces a road structure map such as a gamma map as described above, paragraph [0108])
Regarding claim 10, Stein discloses the method of claim 33, wherein obtaining the vehicle motion profile caused by the one or more road features of the second road segment comprises traversing the second road segment with the one or more vehicles while measuring the vertical motion of the portion of the one or more vehicles using one or more motion sensors disposed in the one or more vehicles. (The system 208 is arranged to model the road surface 206 using the gamma image. In an example, modeling the road surface includes computing a vertical deviation from the plane of a road surface feature paragraph [0054])
Regarding claim 11, Stein discloses the method of claim 10, wherein the portion of the one or more vehicles includes a wheel of the one or more vehicles. (In another example, where the suspension state of the vehicle is available, suspension information is considered together with the ego-motion to more accurately measure the vertical motion of the vehicle's wheel, paragraph [0090])
Regarding claim 12, Stein discloses the method of claim 10, wherein the vehicle motion profile caused by the one or more road features of the second road segment is measured as a function of time. (To perform the modeling, the system 208 is arranged to obtain a time-ordered sequence of images representative of the road surface 206 paragraph [0044])
Regarding claim 14, Stein discloses the method of claim 33, wherein the trained statistical model is a first trained statistical model (The system 208 is arranged to provide a data set to an artificial neural network (ANN) to produce a gamma image paragraph [0045]), further comprising:
training a second trained statistical model using the vehicle motion profile caused by the one or more road features of the first road segment, (FIGS. 7-8, Training data 830 may include various sequences of image frames captured by one or more vehicle-mounted cameras. The image frames may include video footage captured on various roads, in various geographic locales, under various lighting and weather conditions, for example, paragraph [0076]; invoking a second ANN using the three-dimensional structure to determine whether the features represent an object moving or not moving within an environment of the road surface, paragraph [0294]) wherein the second trained statistical model is configured to identify or classify one or more road feature characteristics based at least in part on the vehicle motion profile caused by the one or more road features of the second road segment; and (wherein the ANN and the second ANN are implemented as a single ANN trained to produce a two-channel output, wherein a first channel is the three-dimensional structure of the scene and the second channel is the three-dimensional structure produced by the second ANN that used more photogrammetric loss in its training, paragraph [0293])
using the second trained statistical model to classify or identify one or more road features of the second road segment based at least partly on measurements of a vehicle motion profile caused by the one or more road features of the second road segment. (FIGS. 7-8, Training data 830 may include various sequences of image frames captured by one or more vehicle-mounted cameras. The image frames may include video footage captured on various roads, in various geographic locales, under various lighting and weather conditions, for example, paragraph [0076]; The system 208 is arranged to provide a data set to an artificial neural network (ANN) to produce a gamma image, paragraph [0045]; the system is used to accurately estimate the planar (or bi-quadratic) model of the road surface, and compute small deviations from the planar (or bi-quadratic) road surface model to detect or quantify various surface features 104, paragraph [0190])
Regarding claim 15, Stein discloses the method of claim 14, wherein to classify or identify one or more road features of the second road segment includes identifying a road feature type or road feature characteristics. (invoking a second ANN using the three-dimensional structure to determine whether the features represent an object moving or not moving within an environment of the road surface paragraph [0294])
Regarding claim 16, Stein discloses the method of claim 15, wherein the road feature type includes one selected from a group of a speed bump, a pothole, a manhole cover, a storm grate, a frost heave, and an expansion joint. (Using systems and methods provided herein, surface features such as bumps or holes, speed bumps, curbs, or manhole covers, may be measured or modeled as vertical deviations from the road surface (e.g., plane) with sub-pixel accuracy (e.g., on the order of 1-2 centimeters), paragraph [0191])
Regarding claim 17, Stein discloses the method of claim 15, wherein the one or more road feature characteristics include a size of the one or more road features. (Here, a horizontal gradient image 1106 (e.g., position x or P.sub.x) and a vertical gradient image 1108 (e.g., position y or P.sub.y) make up the target location input to the neural network 1112. These images include an outline of the target to illustrate the gradient's relationship to the target. The target size is represented here as an image in which all of the pixels have the same value (e.g., a constant value image) representative of the target's size paragraph [0097])
Regarding claim 21, Stein discloses a method comprising:
obtaining first vehicle motion profiles applied to a portion of one or more vehicles traversing a first road segment associated with one or more road features, wherein the first vehicle motion profiles are obtained based at least partially on measurements of motion with one or more sensors disposed on a portion of the one or more vehicles traversing the first road segment; (FIGS. 7-8, Training data 830 may include various sequences of image frames captured by one or more vehicle-mounted cameras. The image frames may include video footage captured on various roads, in various geographic locales, under various lighting and weather conditions, for example paragraph [0076])
obtaining second vehicle motion profiles applied to a portion of one or more vehicles traversing a second road segment associated with an absence of the one or more road features, wherein the second vehicle motion profiles are obtained based at least partially on measurements of motion with one or more sensors disposed on a portion of the one or more vehicles traversing the second road segment; (FIGS. 7-8, Training data 830 may include various sequences of image frames captured by one or more vehicle-mounted cameras. The image frames may include video footage captured on various roads, in various geographic locales, under various lighting and weather conditions, for example paragraph [0076])
generating a trained statistical model using the first vehicle motion profiles and the second vehicle motion profiles; and (FIGS. 7-8 illustrate an example of a DNN training system, according to an embodiment. Here, a multi-modal loss function application engine 950 is configured to supply training data 930 as input to the DNN. Training data 830 may include various sequences of image frames captured by one or more vehicle-mounted cameras. The image frames may include video footage captured on various roads, in various geographic locales, under various lighting and weather conditions, for example, paragraph [0076])
storing in non-volatile computer readable memory, the trained statistical model. (Registers of the processor 2702, the main memory 2704, the static memory 2706, or the mass storage 2708 may be, or include, a machine readable medium 2722 on which is stored one or more sets of data structures or instructions 2724 (e.g., software) embodying or utilized by any one or more of the techniques or functions described herein, paragraph [0242]; Specific examples of non-transitory machine readable media may include: non-volatile memory, such as semiconductor memory devices (e.g., Electrically Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM)) and flash memory devices; magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks, paragraph [0243])
The examiner notes that the trained statistical model in this instance is software.
Regarding claim 22, Stein discloses the method of claim 21, wherein the trained statistical model is a first trained statistical model, wherein the method further comprises: (The system 208 is arranged to provide a data set to an artificial neural network (ANN) to produce a gamma image paragraph [0045])
obtaining third vehicle motion profiles applied to a portion of the one or more vehicles traversing a first type of road feature; (FIGS. 7-8, Training data 830 may include various sequences of image frames captured by one or more vehicle-mounted cameras. The image frames may include video footage captured on various roads, in various geographic locales, under various lighting and weather conditions, for example paragraph [0076])
obtaining road feature characteristic data associated with the third vehicle motion profiles; (using the three-dimensional structure to determine whether the features represent an object moving or not moving within an environment of the road surface paragraph [0294])
generating a second trained statistical model using the third vehicle motion profiles and the road feature characteristic data; and (FIGS. 7-8, Training data 830 may include various sequences of image frames captured by one or more vehicle-mounted cameras. The image frames may include video footage captured on various roads, in various geographic locales, under various lighting and weather conditions, for example paragraph [0076], invoking a second ANN using the three-dimensional structure to determine whether the features represent an object moving or not moving within an environment of the road surface paragraph [0294])
storing in the non-volatile computer readable memory, the second trained statistical model. (Registers of the processor 2702, the main memory 2704, the static memory 2706, or the mass storage 2708 may be, or include, a machine readable medium 2722 on which is stored one or more sets of data structures or instructions 2724 (e.g., software) embodying or utilized by any one or more of the techniques or functions described herein, paragraph [0242]; Specific examples of non-transitory machine readable media may include: non-volatile memory, such as semiconductor memory devices (e.g., Electrically Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM)) and flash memory devices; magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks, paragraph [0243])
The examiner notes that the trained statistical model in this instance is software.
Regarding claim 23, Stein discloses the method of claim 22, wherein the first type of road feature includes one selected from a group of a speed bump, a pothole, a manhole cover, a storm grate, a frost heave, and an expansion joint. (Using systems and methods provided herein, surface features such as bumps or holes, speed bumps, curbs, or manhole covers, may be measured or modeled as vertical deviations from the road surface (e.g., plane) with sub-pixel accuracy (e.g., on the order of 1-2 centimeters), paragraph [0191])
Regarding claim 26, Stein discloses the method of claim 21, wherein obtaining the first vehicle motion profile comprises:
traversing, in the one or more vehicles, a first plurality of road segments, wherein each road segment of the first plurality of road segments includes the one or more road features; and (FIGS. 7-8, Training data 830 may include various sequences of image frames captured by one or more vehicle-mounted cameras. The image frames may include video footage captured on various roads, in various geographic locales, under various lighting and weather conditions, for example paragraph [0076])
while traversing each road segment of the first plurality of road segments, measuring vehicle motion of the portion of the one or more vehicles. (In an example, the ego-motion may be provided by an ego-motion sensor and processing engine. This type of engine uses robust tracking of points on the road and the points above the road using an essential matrix. It also combines any inertial sensors and speedometer information available paragraph [0142])
Regarding claim 27, Stein discloses the method of claim 21, wherein obtaining the second vehicle motion profile comprises:
traversing, in the one or more vehicles, a second plurality of road segments, wherein each road segment of the second plurality of road segments does not include the one or more road features; and (FIGS. 7-8, Training data 830 may include various sequences of image frames captured by one or more vehicle-mounted cameras. The image frames may include video footage captured on various roads, in various geographic locales, under various lighting and weather conditions, for example paragraph [0076])
while traversing each road segment of the second plurality of road segments, measuring vehicle motion of the portion of the one or more vehicles. (In an example, the ego-motion may be provided by an ego-motion sensor and processing engine. This type of engine uses robust tracking of points on the road and the points above the road using an essential matrix. It also combines any inertial sensors and speedometer information available paragraph [0142])
Regarding claim 28, Stein discloses the method of claim 26, wherein the measured vehicle motion includes a vertical motion of the portion of the one or more vehicles. (The system 208 is arranged to model the road surface 206 using the gamma image. In an example, modeling the road surface includes computing a vertical deviation from the plane of a road surface feature paragraph [0054])
Regarding claim 29, Stein discloses the method of claim 26, wherein the measured vehicle motion includes a longitudinal motion of the one or more vehicles. (The epipole is a vector that represents the direction of forward motion. In an example, image-formatted epipole location data 2578 includes a pair of images, each image having a resolution that is the same or similar to image frames A, B, and C paragraph [0229])
Regarding claim 30, Stein discloses the method of claim 27, wherein the portion of the one or more vehicles includes a wheel. (In another example, where the suspension state of the vehicle is available, suspension information is considered together with the ego-motion to more accurately measure the vertical motion of the vehicle's wheel, paragraph [0090])
Regarding claim 32, Stein discloses at least one non-transitory computer-readable storage medium storing programming instructions that, when executed by at least one processor, cause the at least one processor to perform the method of claim 1. (In an example, the hardware of the circuitry may include variably connected physical components (e.g., execution units, transistors, simple circuits, etc.) including a machine readable medium physically modified (e.g., magnetically, electrically, moveable placement of invariant massed particles, etc.) to encode instructions of the specific operation, paragraph [0239]; Non-limiting machine readable medium examples may include solid-state memories, optical media, magnetic media, and signals (e.g., radio frequency signals, other photon based signals, sound signals, etc.). In an example, a non-transitory machine readable medium comprises a machine readable medium with a plurality of particles having invariant (e.g., rest) mass, and thus are compositions of matter. Accordingly, non-transitory machine-readable media are machine readable media that do not include transitory propagating signals. Specific examples of non-transitory machine readable media may include: non-volatile memory, paragraph [0243], FIG. 27)
Regarding claim 33, Stein discloses the method of claim 1. Stein additionally discloses further comprising using the trained statistical model to identify or classify one or more road features (FIGS. 7-8, Training data 830 may include various sequences of image frames captured by one or more vehicle-mounted cameras. The image frames may include video footage captured on various roads, in various geographic locales, under various lighting and weather conditions, for example paragraph [0076]) of the second road segment based at least in part on measurements of a vehicle motion profile applied to a portion of a vehicle traversing the second road segment. (The system 208 is arranged to provide a data set to an artificial neural network (ANN) to produce a gamma image paragraph [0045], the system is used to accurately estimate the planar (or bi-quadratic) model of the road surface, and compute small deviations from the planar (or bi-quadratic) road surface model to detect or quantify various surface features 104 paragraph [0190])
Regarding claim 34, Stein discloses the method of claim 1. Stein additionally discloses wherein the trained statistical model is configured, when trained, to identify and classify one or more road features associated with the second road segment based at least in part on the motion profiles. (The system 208 is arranged to provide a data set to an artificial neural network (ANN) to produce a gamma image paragraph [0045], the system is used to accurately estimate the planar (or bi-quadratic) model of the road surface, and compute small deviations from the planar (or bi-quadratic) road surface model to detect or quantify various surface features 104 paragraph [0190])
Regarding claim 35, Stein discloses the method of claim 1. Stein additionally discloses wherein the first road segment and the second road segment are one road segment. (At operation 1501, a sequence of image frames (e.g., a first image frame A, a second image frame B, and a third image frame C) of the same portion of a road in field of view of a camera are captured. Image points of the road in first image frame A are matched at operation 1502 to corresponding image points of the road in the second image frame B. Likewise, image points of the road in the second image frame B are matched at operation 1502 to corresponding image points of the road in the third image frame C, paragraph [0122])
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 4-8, 13, 20, 24, 25, and 31 are rejected under 35 U.S.C. 103 as being unpatentable over Stein in view of US 20190324443 A1 (hereinafter Cella).
Regarding claim 4, Stein discloses the method as recited in claim 33. However, Stein doesn't explicitly disclose filtering the vehicle motion profile caused by the one or more road features of the second road segment to attenuate one or more vehicle-specific characteristics prior to inputting the vehicle motion profile caused by the one or more road features of the second road segment to the trained statistical model.
Cella discloses filtering the vehicle motion profile caused by the one or more road features of the second road segment to attenuate one or more vehicle-specific characteristics prior to inputting the vehicle motion profile caused by the one or more road features of the second road segment to the trained statistical model. (a network control circuit 11710 for sending and receiving information related to the sensor inputs to an external system and a data filter circuit configured to dynamically adjust what portion of the information is sent based on instructions received over the network communication interface, paragraph [1268])
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the filtering circuit of Cella to the vehicle motion profile of Stein prior to inputting the profile into the trained statistical model. This combination would enable the trained statistical model to identify road features more accurately by leveraging high-confidence input data.
Regarding claim 5, Stein and Cella disclose the method as recited in claim 4. However, Stein doesn’t explicitly disclose where filtering the vehicle motion profile caused by the one or more road features of the second road segment comprises filtering a first frequency of the vehicle motion profile caused by the one or more road features of the second road segment to reduce artifacts of wheel-hop from the vehicle motion profile caused by the one or more road features of the second road segment.
Cella discloses where filtering the vehicle motion profile caused by the one or more road features of the second road segment comprises filtering a first frequency of the vehicle motion profile caused by the one or more road features of the second road segment to reduce artifacts of wheel-hop from the vehicle motion profile caused by the one or more road features of the second road segment. (a band pass filter circuit 8532 which may be used to separate out signals occurring at different frequencies paragraph [0658], In embodiments, the sensor inputs additionally comprise microphones or vibration sensors configured to detect vibrational or audio-frequency conditions in movable or rotational components, such as whirring, howling, growling, whining, rumbling, clunking, rattling, wheel hopping, and chattering paragraph [1310])
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the filtering frequency of Cella to the vehicle motion profile of Stein. This combination would enable the system of Stein to reduce noise and inaccurate sensor data caused by wheel hop, thereby making the data more accurate.
Regarding claim 6, Stein and Cella disclose the method as recited in claim 5. However, Stein doesn’t explicitly disclose where filtering the vehicle motion profile caused by the one or more road features of the second road segment comprises applying a notch filter to the vehicle motion profile caused by the one or more road features of the second road segment, wherein a stop-band frequency range of the notch filter includes a frequency of the wheel-hop.
Cella discloses where filtering the vehicle motion profile caused by the one or more road features of the second road segment comprises applying a notch filter to the vehicle motion profile caused by the one or more road features of the second road segment, wherein a stop-band frequency range of the notch filter includes a frequency of the wheel-hop. (Additionally, or alternatively, a band pass filter circuit 8532 includes one or more notch filters or other filtering mechanism to narrow ranges of frequencies (e.g., frequencies from a known source of noise). This may be used to filter out dominant frequency signals such as the overall rotation, and may help enable the evaluation of low amplitude signals at frequencies associated with torsion, bearing failure and the like, paragraph [0658]; In embodiments, the sensor inputs additionally comprise microphones or vibration sensors configured to detect vibrational or audio-frequency conditions in movable or rotational components, such as whirring, howling, growling, whining, rumbling, clunking, rattling, wheel hopping, and chattering, paragraph [1310])
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the notch filter of Cella to the vehicle motion profile of Stein. This combination would enable the system of Stein to reduce noise and inaccurate sensor data caused by wheel hop, thereby making the data more accurate.
Regarding claim 7, Stein and Cella disclose the method as recited in claim 5. However, Stein doesn’t explicitly disclose where filtering the vehicle motion profile caused by the one or more road features of the second road segment comprises: applying a low-pass filter to the vehicle motion profile caused by the one or more road features of the second road segment, wherein a cutoff frequency of the low-pass filter is less than a frequency of the wheel-hop.
Cella discloses where filtering the vehicle motion profile caused by the one or more road features of the second road segment comprises: applying a low-pass filter to the vehicle motion profile caused by the one or more road features of the second road segment, wherein a cutoff frequency of the low-pass filter is less than a frequency of the wheel-hop. (An example band pass filter circuit 8532 includes any filtering operations understood in the art, including at least a low-pass filter paragraph [0658], In embodiments, the sensor inputs additionally comprise microphones or vibration sensors configured to detect vibrational or audio-frequency conditions in movable or rotational components, such as whirring, howling, growling, whining, rumbling, clunking, rattling, wheel hopping, and chattering paragraph [1310])
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the low-pass filter of Cella to the vehicle motion profile of Stein. This combination would enable the system of Stein to reduce the noise and inaccurate sensor data caused by wheel hop, thus making the data more accurate.
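For illustration only, a low-pass filter with a cutoff below an assumed wheel-hop frequency can be sketched as follows. The filter order, 8 Hz cutoff, and 12 Hz wheel-hop frequency are hypothetical values chosen to match the claim language, not parameters disclosed by the references:

```python
import numpy as np
from scipy import signal

fs = 200.0     # assumed sampling rate, Hz
f_hop = 12.0   # assumed wheel-hop frequency
cutoff = 8.0   # cutoff below the wheel-hop frequency, per the claim language

# 4th-order Butterworth low-pass; wheel-hop content falls in the stop-band.
b, a = signal.butter(4, cutoff, btype="low", fs=fs)

t = np.arange(0, 5, 1 / fs)
profile = np.sin(2 * np.pi * 2.0 * t) + 0.5 * np.sin(2 * np.pi * f_hop * t)
filtered = signal.filtfilt(b, a, profile)
```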
Regarding claim 8, Stein and Cella disclose the method as recited in claim 5. However, Stein doesn’t explicitly disclose where filtering the vehicle motion profile caused by the one or more road features of the second road segment comprises applying a high-pass filter to the vehicle motion profile caused by the one or more features of the second road segment, wherein a cutoff frequency of the high-pass filter is above a frequency of the wheel-hop.
Cella discloses where filtering the vehicle motion profile caused by the one or more road features of the second road segment comprises applying a high-pass filter to the vehicle motion profile caused by the one or more features of the second road segment, wherein a cutoff frequency of the high-pass filter is above a frequency of the wheel-hop. (An example band pass filter circuit 8532 includes any filtering operations understood in the art, including at least a low-pass filter, a high-pass filter, and/or a band pass filter—for example to exclude or reduce frequencies that are not of interest for a particular determination paragraph [0658], In embodiments, the sensor inputs additionally comprise microphones or vibration sensors configured to detect vibrational or audio-frequency conditions in movable or rotational components, such as whirring, howling, growling, whining, rumbling, clunking, rattling, wheel hopping, and chattering paragraph [1310])
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the high-pass filter of Cella to the vehicle motion profile of Stein. This combination would enable the system of Stein to reduce the noise and inaccurate sensor data caused by wheel hop, thus making the data more accurate.
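For illustration only, a high-pass filter whose cutoff lies above an assumed wheel-hop frequency can be sketched as follows, so that the wheel-hop band falls in the stop-band while higher-frequency content is retained. The 20 Hz cutoff, 12 Hz wheel-hop frequency, and 40 Hz retained component are hypothetical:

```python
import numpy as np
from scipy import signal

fs = 200.0     # assumed sampling rate, Hz
f_hop = 12.0   # assumed wheel-hop frequency
cutoff = 20.0  # cutoff above the wheel-hop frequency, per the claim language

# 4th-order Butterworth high-pass; wheel-hop content is attenuated.
b, a = signal.butter(4, cutoff, btype="high", fs=fs)

# Synthetic profile: 40 Hz high-frequency vibration plus wheel-hop content.
t = np.arange(0, 5, 1 / fs)
profile = np.sin(2 * np.pi * 40.0 * t) + 0.5 * np.sin(2 * np.pi * f_hop * t)
filtered = signal.filtfilt(b, a, profile)
```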
Regarding claim 13, Stein discloses the method as described in claim 12. However, Stein does not disclose transforming the vehicle motion profile caused by the one or more road features of the second road segment from a time domain into a distance domain prior to inputting the vehicle motion profile caused by the one or more road features of the second road segment to the trained statistical model.
Cella discloses transforming the vehicle motion profile caused by the one or more road features of the second road segment from a time domain into a distance domain prior to inputting the vehicle motion profile caused by the one or more road features of the second road segment to the trained statistical model.
(The signal evaluation circuit 9208 may process the detection values to obtain information about a bearing being monitored. The frequency transformation circuit 9212 may transform one or more time-based detection values to frequency information. The transformation may be accomplished using techniques such as a digital Fast Fourier transform (“FFT”), Laplace transform, Z-transform, wavelet transform, other frequency domain transform, or other digital or analog signal analysis techniques, including, without limitation, complex analysis, including complex phase evolution analysis. [0780])
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the time-domain-to-distance-domain transformation technique of Cella to the vehicle motion profile of Stein. This combination would enable the system of Stein to interpret data in terms of physical space rather than only frequency components, which would improve the localization of road features.
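For illustration only, one common way to move a time-sampled motion profile into the distance domain is to integrate vehicle speed into cumulative distance and resample onto a uniform distance grid. This is an assumed approach, not necessarily the transformation disclosed by Cella; the speed trace, sampling rate, and 0.5 m grid spacing are hypothetical:

```python
import numpy as np

fs = 100.0                              # assumed sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
speed = np.full_like(t, 15.0)           # hypothetical constant speed trace, m/s
profile = np.sin(2 * np.pi * 1.0 * t)   # illustrative time-domain motion signal

# Cumulative distance traveled at each time sample (left-Riemann approximation).
dist = np.concatenate(([0.0], np.cumsum(speed[:-1] * np.diff(t))))

# Resample the profile onto a uniform distance grid (0.5 m spacing, assumed).
d_grid = np.arange(0.0, dist[-1], 0.5)
profile_d = np.interp(d_grid, dist, profile)
```

After this step, `profile_d` is indexed by road position rather than by time, so a road feature appears at the same sample index regardless of the speed at which the segment was traversed.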
Regarding claim 20, Stein discloses the method as described in claim 33. However, Stein does not disclose wherein the one or more road features correspond to one or more clusters identified in a training data set.
Cella discloses wherein the one or more road features correspond to one or more clusters identified in a training data set. (For example, some of the analysis techniques used in unsupervised learning may include K-means clustering, Gaussian mixture models, Hidden Markov models, and the like. The algorithms used in supervised and unsupervised learning methods of pattern recognition enable the use of pattern recognition in various high precision applications paragraph [0340])
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the clustering technique of Cella to the statistical model of Stein. This would enable the system of Stein to improve road feature classification accuracy by automatically segmenting different road types.
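For illustration only, the K-means clustering technique cited above can be sketched in plain NumPy as follows. This is a minimal Lloyd's-algorithm sketch, not the references' implementation; the (RMS amplitude, dominant frequency) feature vectors for feature-present and feature-absent segments are hypothetical synthetic data:

```python
import numpy as np

def kmeans(X, k, init, iters=20):
    """Minimal Lloyd's-algorithm K-means; `init` selects initial centers from X."""
    centers = X[init].astype(float)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # Assign each sample to its nearest center (squared Euclidean distance).
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        # Recompute each center as the mean of its assigned samples.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

rng = np.random.default_rng(0)
# Hypothetical (RMS amplitude, dominant frequency) features per road segment:
smooth = rng.normal([0.1, 2.0], 0.05, size=(50, 2))   # feature-absent segments
rough = rng.normal([0.8, 12.0], 0.05, size=(50, 2))   # feature-present segments
X = np.vstack([smooth, rough])

# Initialize with one sample from each group (an illustrative choice).
labels, centers = kmeans(X, 2, [0, len(X) - 1])
```

With well-separated feature vectors like these, the two recovered clusters correspond to the feature-present and feature-absent segment populations.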
Regarding claim 24, Stein discloses the method as described in claim 21. However, Stein does not disclose transforming the first vehicle motion profiles and the second vehicle motion profiles into a frequency domain prior to generating the trained statistical model.
Cella discloses transforming the first vehicle motion profiles and the second vehicle motion profiles into a frequency domain prior to generating the trained statistical model. (The frequency transformation circuit 9212 may transform one or more time-based detection values to frequency information paragraph [0780])
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the frequency-domain transformation of Cella to the vehicle motion profile of Stein. This combination would enable the system of Stein to isolate signal content at frequencies of specific interest, thus increasing the overall efficiency of the system.
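For illustration only, the time-to-frequency transformation cited above can be sketched with a discrete Fourier transform as follows. The sampling rate, window length, and the 12 Hz dominant component are assumed values:

```python
import numpy as np

fs = 100.0                    # assumed sampling rate, Hz
t = np.arange(0, 4, 1 / fs)   # 4 s window gives 0.25 Hz frequency resolution

# Illustrative motion profile dominated by a 12 Hz component.
profile = np.sin(2 * np.pi * 12.0 * t) + 0.2 * np.sin(2 * np.pi * 3.0 * t)

# One-sided magnitude spectrum of the real-valued signal.
spectrum = np.abs(np.fft.rfft(profile))
freqs = np.fft.rfftfreq(len(profile), 1 / fs)
dominant = freqs[np.argmax(spectrum)]
```

The dominant spectral peak recovers the 12 Hz component, which is the kind of frequency-of-interest isolation the combination relies on.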
Regarding claim 25, Stein discloses the method as described in claim 21. However, Stein does not disclose transforming the first vehicle motion profiles and the second vehicle motion profiles from a time domain into a distance domain prior to generating the trained statistical model.
Cella discloses transforming the first vehicle motion profiles and the second vehicle motion profiles from a time domain into a distance domain prior to generating the trained statistical model. (The signal evaluation circuit 9208 may process the detection values to obtain information about a bearing being monitored. The frequency transformation circuit 9212 may transform one or more time-based detection values to frequency information. The transformation may be accomplished using techniques such as a digital Fast Fourier transform (“FFT”), Laplace transform, Z-transform, wavelet transform, other frequency domain transform, or other digital or analog signal analysis techniques, including, without limitation, complex analysis, including complex phase evolution analysis. [0780])
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the time-domain-to-distance-domain transformation technique of Cella to the vehicle motion profile of Stein. This combination would enable the system of Stein to interpret data in terms of physical space rather than only frequency components, which would improve the localization of road features.
Regarding claim 31, Stein discloses the method as described in claim 27. However, Stein does not disclose identifying one or more clusters within the first vehicle motion profiles, wherein the trained statistical model is generated using the one or more clusters.
Cella discloses identifying one or more clusters within the first vehicle motion profiles, wherein the trained statistical model is generated using the one or more clusters. (For example, some of the analysis techniques used in unsupervised learning may include K-means clustering, Gaussian mixture models, Hidden Markov models, and the like. The algorithms used in supervised and unsupervised learning methods of pattern recognition enable the use of pattern recognition in various high precision applications paragraph [0340])
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the clustering technique of Cella to the statistical model of Stein. This would enable the system of Stein to improve road feature classification accuracy by automatically segmenting different road types.
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Stein in view of Cella, and further in view of US 20050046137 A1 hereinafter Dreff.
Regarding claim 9, Stein and Cella disclose the method as described in claim 6. However, Stein and Cella do not explicitly disclose where the frequency of wheel-hop is between 10 and 15 Hz.
Dreff discloses where the frequency of wheel-hop is between 10 and 15 Hz. (The shock absorber is used to attenuate both the low frequency ride modes, which are generally at frequencies less than 2 Hz, and the higher frequency wheel hop and tramp modes, which are typically in the range of 10-15 Hz paragraph [0003])
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the 10-15 Hz wheel-hop frequency range of Dreff when setting the filter parameters of Stein and Cella. This combination would enable the filters to target the frequency band in which wheel hop is known to occur, thus improving the accuracy of the filtered motion data.
Response to Arguments
Applicants’ arguments filed 9/11/2025 have been fully considered.
Applicants’ arguments overcome the 101 rejection.
Applicants’ argument “The Examiner cites Stein as teaching the limitations of claim 1, including obtaining a vehicle motion profile, inputting it to a trained statistical model, and outputting road features. Respectfully, Stein does not disclose the limitations of amended claim 1 for at least the following reasons: Obtaining vehicle motion profiles from sensors disposed on a portion of the vehicle. Amended claim 1 requires obtaining one or more vehicle motion profiles "determined based at least partially on measurements of motion with one or more sensors disposed on a portion of the one or more vehicles." A motion profile, as described in the specification, is a structured time- or distance-series of motion data of a wheel, suspension, or other vehicle portion. Stein, by contrast, uses image sequences from cameras in combination with ego-motion estimates for environment reconstruction. Ego-motion in Stein refers to relative camera motion and is not a motion profile of a portion of the vehicle measured by sensors. Stein neither discloses nor suggests generating a vehicle-portion motion profile from sensor-based motion measurements.” filed 9/11/2025 has been fully considered but is not persuasive.
Applicant contends that Stein relies solely on image data and therefore fails to teach obtaining vehicle motion profiles from sensor-based measurements. However, Stein explicitly provides motion information sensors and further relies on ego-motion estimation using the scene captured by a camera. Additionally, the camera is a sensor that measures physical state and therefore constitutes a motion sensing device.
Furthermore, Stein extracts motion parameters by comparing sequential sensor frames to infer velocity, displacement, and scene structure. These measurements form a motion profile of the vehicle portion, even if the reference uses image-domain data as the sensing modality. The claims do not require inertial sensors or limit the type of sensors used; they only state that motion profiles are obtained based on measurements using one or more sensors, which Stein inherently performs.
Applicants’ argument “Amended claim 1 further requires "training a statistical model using the vehicle motion profiles, the trained statistical model being configured, when trained, to identify or classify road features." Stein does not disclose this limitation. Stein merely references artificial neural networks (ANNs) and deep neural networks (DNNs) in connection with processing image data to generate outputs such as a gamma image. These references do not describe a statistical model trained on vehicle motion profiles as expressly required by the claims. Instead, Stein's networks are trained solely on image-based data for inferring road surface geometry. Stein neither discloses nor suggests training any model on structured, sensor-derived vehicle motion profiles of a vehicle portion. The claimed approach-training a statistical model using motion profiles rather than image data-is entirely absent from Stein. For at least these reasons, Stein does not disclose the limitations of claim 1.” filed 9/11/2025 has been fully considered but is not persuasive.
Stein discloses training neural networks using motion-correlated gamma information derived from sequential sensor inputs. During training, the network generates gamma maps, compares residual motion between frames, and updates weights accordingly. Because motion is inferred from sensor data, the training necessarily utilizes a motion profile signal.
Applicants’ argument “The Examiner also asserts that Stein discloses claim 21, including obtaining first and second vehicle motion profiles, generating a trained statistical model from them, and storing the model. Respectfully, Stein fails to disclose multiple limitations of amended claim 21: Obtaining first and second vehicle motion profiles from sensors. Amended claim 21 explicitly requires obtaining first and second vehicle motion profiles, each "obtained based at least partially on measurements of motion with one or more sensors disposed on a portion of the one or more vehicles." The first profiles are associated with a road segment including one or more road features, and the second profiles are associated with a segment having an absence of such features. This distinction is critical because it enables supervised training of statistical models using vehicle motion profiles with and without certain road features. Stein discloses no such comparative dataset. As discussed above in reference to claim 1, Stein does not disclose collecting any sensor-derived motion profiles of vehicle portions, much less contrasting datasets of profiles with and without features. Stein's "training data" consists of camera image sequences, not vehicle motion profiles.” filed 9/11/2025 has been fully considered but is not persuasive.
Applicant asserts that Stein lacks distinct motion profiles for feature-present and feature-absent road segments. However, Stein teaches collecting training data from multiple roads across varying environments. Roads inherently differ in surface condition, meaning the resulting sensor-based motion signals will naturally include instances with discernible features, such as texture changes, elevation shifts, and surface irregularities, as well as smooth or feature-minimal regions. Thus, when Stein obtains sensor data over different road surfaces, each traversal constitutes a vehicle motion profile, and variation in road topography naturally results in first and second profiles corresponding to feature-present and feature-reduced segments.
Applicants’ argument “Claim 21 further requires "generating a trained statistical model using the first vehicle motion profiles and the second vehicle motion profiles." Stein's models are trained on image data (e.g., 3D reconstructions and gamma images). There is no disclosure of training a statistical model using sensor-based vehicle motion profiles as expressly required. Storing the trained statistical model generated from motion profiles. Although Stein generically references storing data or models, it does not disclose storing a statistical model trained from vehicle motion profiles. The claimed model, generated from sensor- based positive and negative motion profiles, is absent from Stein. Accordingly, Stein does not anticipate amended claim 21. In summary, Stein is directed to computer-vision methods using camera imagery and ego- motion for road surface modeling. By contrast, the amended claims are directed to training statistical models using sensor-derived vehicle motion profiles. Stein does not disclose or suggest this approach. Accordingly, Stein cannot anticipate amended independent claims 1 or 21.” Filed 9/11/2025 has been fully considered but is not persuasive.
Applicant contends that Stein does not classify road features from motion profiles. However, Stein provides road feature identification including speed bumps, potholes, manhole covers, curbs, and reflective regions. The neural network processes motion-dependent gamma maps to detect deviations from the planar model and outputs 3D scene characteristics. Because classification is derived from changes in motion-correlated gamma structures, Stein teaches identifying and classifying road features in response to motion profile data.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure. WO 2018204656 A1 teaches detection and classification systems and methods for autonomous vehicle navigation.
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner sh