DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 2, 3, 9, 10, 16, and 17 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Regarding Claim 2, the term “smooth” is a relative term which renders the claim indefinite. The term “smooth” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. The claim and specification are unclear as to what “road feel” would be considered smooth.
Regarding Claim 3, the term “responsive” is a relative term which renders the claim indefinite. The term “responsive” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. The claim and specification are unclear as to what “road feel” would be considered responsive.
Regarding Claims 9, 10, 16, and 17, the claims recite substantially similar limitations to Claims 2, 3, 2, and 3, respectively, and the claims are rejected under 35 U.S.C. 112(b) for the same reasons.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-5, 7-12, and 14-20 are rejected under 35 U.S.C. 103 as being unpatentable over Kuehner et al. (U.S. Pub. No. 2023/0311923 A1, filed March 29, 2022), hereinafter Kuehner, in view of Chen et al. (Chen, Ning, Zijin Xu, Zhuo Liu, Yihan Chen, Yinghao Miao, Qiuhan Li, Yue Hou, and Linbing Wang. "Data augmentation and intelligent recognition in pavement texture using a deep learning." IEEE Transactions on Intelligent Transportation Systems 23, no. 12 (2022): 25427-25436. Published January 25, 2022.), hereinafter Chen.
Regarding Claim 1, Kuehner teaches A method for use in a vehicle (“FIG. 9 illustrates a flowchart of a method 900 that is associated with simulating a vehicle response to a rumble strip.”) (e.g., paragraph [0078]).
the method comprising: obtaining first image data from a camera (“At 910, the rumble strip simulator system 170 determines that a virtual boundary corresponding to a real world location in proximity of the vehicle 100 has been crossed based upon first sensor data generated by the vehicle 100 [...] As an example, in one or more arrangements, the sensor system 120 can include one or more radar sensors 123, one or more LIDAR sensors 124, one or more sonar sensors 125, and/or one or more cameras 126.”) (e.g., paragraphs [0079] and [0095]).
obtaining second image data from a light detection and ranging (LIDAR) sensor (“At 920, the rumble strip simulator system 170 determines information about the environment of the vehicle 100 as the virtual boundary is crossed based upon second sensor data generated by the vehicle 100 [...] As an example, in one or more arrangements, the sensor system 120 can include one or more radar sensors 123, one or more LIDAR sensors 124, one or more sonar sensors 125, and/or one or more cameras 126.”) (e.g., paragraphs [0080] and [0095]).
and generating haptic vibrations based on the determined road surface to create a road feel (“At 930, the rumble strip simulator system 170 activates an actuator of a seat of the vehicle 100 such that haptic feedback is delivered to the seat. The seat may be a seat in which the operator of the vehicle 100 sits. The haptic feedback is based upon the information about the environment of the vehicle 100 and a type of a virtual rumble strip.”) (e.g., paragraph [0080]).
However, Kuehner does not appear to specifically teach determining a road surface based on the first image data and the second image data using a generative adversarial network (GAN) machine learning (ML) model;
On the other hand, Chen, which relates to determining pavement texture for autonomous vehicles, does teach determining a road surface based on the first image data and the second image data using a generative adversarial network (GAN) machine learning (ML) model (“First, the pavement texture images were preprocessed. Then the traditional methods and WGAN-GP network were used for data augmentation. Finally, RF algorithm and the deep learning model DenseNet were applied for pavement texture image classification. The detailed computation process is shown in Fig. 1.”) (e.g., page 2, column 2, paragraph 1).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the Applicant's claimed invention to combine Kuehner with Chen. The claimed invention is considered to be using a known technique to improve similar devices (methods, or products) in the same way, see MPEP § 2143(I)(C). Kuehner teaches a method for collecting camera and LIDAR data and generating haptic vibrations to simulate a vehicle crossing a nearby rumble strip. However, Kuehner does not appear to specifically teach using a GAN model to determine the road surface. On the other hand, Chen, which relates similarly as a method for use in autonomous vehicles, does teach training a GAN model and determining a road surface using the GAN model. Furthermore, Kuehner discloses that “one or more of the modules described herein can include artificial or computational intelligence elements, e.g., neural network, fuzzy logic or other machine learning algorithms” (e.g., Kuehner; paragraph [0104]). Thus, one of ordinary skill in the art could have applied the improvement of using a GAN model as in Chen to the haptic feedback system of Kuehner, and the results of the improvement would have been predictable to one of ordinary skill in the art. Therefore, it would have been obvious to a person of ordinary skill in the art to combine Kuehner with Chen in order to more accurately assess a road texture.
Regarding Claim 2, Kuehner in view of Chen teaches The method of claim 1. Kuehner further teaches the method further comprising: adjusting a timing of the first image data such that the road feel created is smooth (“In an example, if the vehicle 100 is traveling relatively fast when the virtual boundary is crossed, the actuator 340 delivers vibrations at a relatively high frequency. In contrast, if the vehicle 100 is traveling relatively slow when the virtual boundary is crossed, the actuator 340 delivers vibrations at a relatively low frequency.” A low vibration frequency is interpreted as a smoother feel, wherein the sampling of the sensor data may be slowed to match the vehicle speed.) (e.g., paragraph [0058]).
Regarding Claim 3, Kuehner in view of Chen teaches The method of claim 1. Kuehner further teaches the method further comprising: adjusting a timing of the first image data such that the road feel created is responsive. (“In an example, if the vehicle 100 is traveling relatively fast when the virtual boundary is crossed, the actuator 340 delivers vibrations at a relatively high frequency. In contrast, if the vehicle 100 is traveling relatively slow when the virtual boundary is crossed, the actuator 340 delivers vibrations at a relatively low frequency.” A high vibration frequency is interpreted as a more responsive feel, wherein the sampling of the sensor data may be sped up to match the vehicle speed.) (e.g., paragraph [0058]).
Regarding Claim 4, Kuehner in view of Chen teaches The method of claim 1. Kuehner further teaches wherein the haptic vibrations are generated on a steering wheel of the vehicle, a pedal of the vehicle, a seat of the vehicle, or a gear shifter of the vehicle (The Examiner notes the use of or, and the prior art provides haptic vibrations generated on a seat of the vehicle. “At 930, the rumble strip simulator system 170 activates an actuator of a seat of the vehicle 100 such that haptic feedback is delivered to the seat. The seat may be a seat in which the operator of the vehicle 100 sits. The haptic feedback is based upon the information about the environment of the vehicle 100 and a type of a virtual rumble strip.”) (e.g., paragraph [0081]).
Regarding Claim 5, Kuehner in view of Chen teaches The method of claim 1. Kuehner further teaches the method further comprising: obtaining sensor data from one or more sensors of a test vehicle (“As an example, in one or more arrangements, the sensor system 120 can include one or more radar sensors 123, one or more LIDAR sensors 124, one or more sonar sensors 125, and/or one or more cameras 126.”) (e.g., paragraph [0095]).
obtaining third image data associated with a camera image of a training data set (“At 910, the rumble strip simulator system 170 determines that a virtual boundary corresponding to a real world location in proximity of the vehicle 100 has been crossed based upon first sensor data generated by the vehicle 100 [...] As an example, in one or more arrangements, the sensor system 120 can include one or more radar sensors 123, one or more LIDAR sensors 124, one or more sonar sensors 125, and/or one or more cameras 126.” Camera data may be collected again in the same way to form a training data set.) (e.g., paragraphs [0079] and [0095]).
obtaining fourth image data associated with a LIDAR image of the training data set (“At 920, the rumble strip simulator system 170 determines information about the environment of the vehicle 100 as the virtual boundary is crossed based upon second sensor data generated by the vehicle 100 [...] As an example, in one or more arrangements, the sensor system 120 can include one or more radar sensors 123, one or more LIDAR sensors 124, one or more sonar sensors 125, and/or one or more cameras 126.” LIDAR data may be collected again in the same way to form a training data set.) (e.g., paragraphs [0080] and [0095]).
Chen further teaches correlating the sensor data with the third image data and the fourth image data to train the GAN ML model (“Before training, the image processing methods were used to extract the contour information, texture features and histogram information from the pavement texture datasets […] In this study, WGAN-GP was employed for pavement texture data augmentation. After the generator and discriminator were updated continuously, the trained model was able to generate new pavement texture images.” The image processing methods may be used to correlate the camera, LIDAR, and other sensor data of Kuehner. Updating the generator and discriminator describes training the GAN model, wherein the training is performed using the processed data.) (e.g., page 2, column 2, paragraph 3; page 3, column 2, paragraph 2).
Regarding Claim 7, Kuehner in view of Chen teaches The method of claim 5. Kuehner further teaches the method further comprising: generating a three-dimensional (3D) terrain model based on the LIDAR image (“The autonomous driving module(s) 160 can be configured to receive data from the sensor system 120 and/or any other type of system capable of capturing information relating to the vehicle 100 and/or the external environment of the vehicle 100. In one or more arrangements, the autonomous driving module(s) 160 can use such data to generate one or more driving scene models.”) (e.g., paragraph [0105]).
Chen further teaches correlating the sensor data with the 3D terrain model to train the GAN ML model (“Before training, the image processing methods were used to extract the contour information, texture features and histogram information from the pavement texture datasets […] In this study, WGAN-GP was employed for pavement texture data augmentation. After the generator and discriminator were updated continuously, the trained model was able to generate new pavement texture images.” The image processing methods may be used to correlate the driving scenes and sensor data of Kuehner. Updating the generator and discriminator describes training the GAN model, wherein the training is performed using the processed data.) (e.g., page 2, column 2, paragraph 3; page 3, column 2, paragraph 2).
Regarding Claim 8, Kuehner teaches A vehicle (“Referring to FIG. 1, an example of a vehicle 100 is illustrated.”) (e.g., paragraph [0023]).
The remaining limitations of Claim 8 recite substantially similar material to Claim 1, and the claim is rejected under 35 U.S.C. 103 for the same reasons.
Regarding Claims 9-12 and 14, the claims recite substantially similar limitations to Claims 2-5 and 7, respectively, and the claims are rejected under 35 U.S.C. 103 for the same reasons.
Regarding Claim 15, Kuehner teaches A non-transitory computer-readable medium comprising instructions stored in a memory, that when executed by a processor, cause the processor to perform operations (“The vehicle 100 can include one or more data stores 115 for storing one or more types of data. The data store 115 can include volatile and/or non-volatile memory.”) (e.g., paragraph [0084]).
The remaining limitations of Claim 15 recite substantially similar material to Claim 1, and the claim is rejected under 35 U.S.C. 103 for the same reasons.
Regarding Claims 16-19 and 20, the claims recite substantially similar limitations to Claims 2-5 and 7, respectively, and the claims are rejected under 35 U.S.C. 103 for the same reasons.
Claims 6 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Kuehner in view of Chen, further in view of Amorim de Faria Cardote et al. (U.S. Pub. No. 2017/0195953 A1), hereinafter Amorim.
Regarding Claim 6, Kuehner in view of Chen teaches The method of Claim 5. However, neither Kuehner nor Chen appear to specifically teach wherein the third image data and the fourth image data are obtained at 800 Hz.
On the other hand, Amorim, which relates to configuring hardware for mobile internet of things (IoT) devices, does teach wherein the third image data and the fourth image data are obtained at 800 Hz (“In an example implementation in accordance with various aspects of the present disclosure, the internal sensor(s) 708 and/or external sensors 710 may comprise an accelerometer with the following features: […] Output Data Rates (ODR) in the range of from about 1.56 Hz to about 800 Hz.”) (e.g., paragraph [0167]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the Applicant's claimed invention to combine the modified reference of Kuehner in view of Chen with Amorim. The claimed invention is considered to be merely combining prior art elements according to known methods to yield predictable results, see MPEP § 2143(I)(A). Kuehner teaches a method using cameras and LIDAR sensors. However, Kuehner does not specifically teach wherein the camera and LIDAR data are captured at 800 Hz. On the other hand, Amorim does teach a method comprising a sensor with a sampling rate of up to 800 Hz. Kuehner is merely silent on specifically how the camera and LIDAR sensors function, and one of ordinary skill in the art could have merely implemented the sensors using the specific sampling rate of Amorim, and the results would have been predictable to one of ordinary skill in the art. Furthermore, while Amorim discloses a range of sampling rates rather than the single rate of the instant claims, the claimed 800 Hz rate lies at the endpoint of the range disclosed by Amorim, and a prima facie case of obviousness exists where the claimed value falls within or touches a range disclosed by the prior art. “[W]here the general conditions of a claim are disclosed in the prior art, it is not inventive to discover the optimum or workable ranges by routine experimentation.” See In re Aller, 220 F.2d 454, 456, 105 USPQ 233, 235 (CCPA 1955). The discovery of an optimum value of a known result effective variable, without producing any new or unexpected results, is within the ambit of a person of ordinary skill in the art. See In re Boesch, 205 USPQ 215 (CCPA 1980) (see MPEP § 2144.05(II)). In the instant claims, the 800 Hz sampling frequency does not produce any new or unexpected results in the training of the GAN model.
Therefore, it would have been obvious to a person of ordinary skill in the art prior to the effective filing date of the claimed invention to combine the modified reference of Kuehner in view of Chen with Amorim in order to select a specific sampling rate for the sensors of Kuehner.
Regarding Claim 13, the claim recites substantially similar limitations to Claim 6, and the claim is rejected under 35 U.S.C. 103 for the same reasons.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Chugh et al. (U.S. Pub. No. 2021/0291731 A1) teaches a method for providing haptic feedback to a steering wheel of a steer by wire vehicle.
Mok et al. (Mok, Seung-chan, and Gon-woo Kim. "Simulated intensity rendering of 3D LiDAR using generative adversarial network." In 2021 IEEE International Conference on Big Data and Smart Computing (BigComp), pp. 295-297. IEEE, 2021.) teaches a method for generating simulated renderings of 3D LIDAR using a GAN.
Sprinzl et al. (U.S. Pub. No. 2024/0174291 A1, effectively filed December 14, 2021) teaches a method for actuating a steering wheel in a steer by wire vehicle.
Zhong et al. (Zhong, Chuanchuan, Bowen Li, and Tao Wu. "Off-road drivable area detection: A learning-based approach exploiting lidar reflection texture information." Remote Sensing 15, no. 1 (2022): 27.) teaches a method for detecting drivable off-road areas using LIDAR and a machine learning network.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KYLE HWA-KAI TSENG whose telephone number is (571)272-3731. The examiner can normally be reached M-F 9A-5P PST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Rehana Perveen can be reached at (571) 272-3676. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/K.H.T./ Examiner, Art Unit 2189
/REHANA PERVEEN/ Supervisory Patent Examiner, Art Unit 2189