DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Election/Restrictions
Claims 13-20 are withdrawn from further consideration pursuant to 37 CFR 1.142(b) as being drawn to a nonelected group, there being no allowable generic or linking claim. Election was made without traverse in the reply filed on 01/02/2026.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3 and 9-12 are rejected under 35 U.S.C. 103 as being unpatentable over Pfeiffer (US 2023/0055888 A1) in view of Sheu et al., hereinafter referred to as Sheu (US 2022/0027684 A1).
As per claim 1, Pfeiffer discloses an agricultural vehicle (Pfeiffer: Abstract; Fig. 2, 202), comprising:
cameras operably coupled to the agricultural vehicle (Pfeiffer: Paras. [0042]-[0043] disclose that vehicle 202 can be an agricultural vehicle and includes one or more image capture devices 102.);
a LiDAR sensor operably coupled to the agricultural vehicle (Pfeiffer: Para. [0043] discloses that one or more sensors 104 can be one or more LiDAR sensors mounted on the vehicle.); and
an imaging controller operably coupled to the LiDAR sensor and the cameras (Pfeiffer: Paras. [0045], [0095] disclose a computer system 302/1102 configured to implement modules 304 [LiDAR], 306 [Image] coupled to the sensors.), the imaging controller comprising:
at least one processor (Pfeiffer: Para. [0094] discloses at least one processor.); and
at least one non-transitory computer-readable storage medium having instructions thereon that, when executed by the at least one processor, cause the imaging controller to (Pfeiffer: Paras. [0094]-[0095] disclose a computer system 1102 that includes storage 1104, one or more processors 1106, and memory 1108 including an operating system 1110.):
receive image data from the cameras (Pfeiffer: Para. [0161] discloses receiving a plurality of images comprising image data generated by an image capture device.);
receive LiDAR data from the LiDAR sensor (Pfeiffer: Para. [0086] discloses receiving sensor data generated by a LiDAR sensor.);
analyze the image data from each of the cameras using one or more agricultural object detection neural networks (ANN) trained with an agricultural dataset to generate labeled image data (Pfeiffer: Para. [0043] discloses the vehicle [agricultural] collecting data via cameras to generate a training dataset, and Paras. [0050], [0053]-[0054] disclose generating labeled image data and a segmentation module 310 performing image segmentation utilizing an artificial neural network trained to segment objects represented in the data [agricultural dataset] for subsequent image classification.);
However, Pfeiffer does not explicitly disclose “… analyze the LiDAR data from the LiDAR sensor using a LIDAR agricultural object detection neural network trained with another agricultural dataset to generate labeled LiDAR data; and fuse the labeled image data with the labeled LiDAR data.”
Further, Sheu is in the same field of endeavor and teaches analyzing the LiDAR data from the LiDAR sensor using a LiDAR agricultural object detection neural network trained with another agricultural dataset to generate labeled LiDAR data (Sheu: Paras. [0001], [0032]-[0033] disclose analyzing the LiDAR data from the LiDAR sensor using a neural network trained with a LiDAR dataset to generate labeled LiDAR data.); and
fuse the labeled image data with the labeled LiDAR data (Sheu: Para. [0033] discloses fusing the labeled image data with the labeled LiDAR data.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Pfeiffer and Sheu before him or her, to modify the vehicle camera-LiDAR system of Pfeiffer to include the labeled-data fusion feature described in Sheu. The motivation for doing so would have been to improve machine learning algorithms by providing a configuration that enables classification across a broader range of object types with enhanced accuracy.
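For illustration of the fusion step at issue in this combination, the following is a minimal sketch of one common way labeled image data and labeled LiDAR data can be fused: projecting labeled LiDAR points into the image plane and pairing each point's label with the label of the pixel it lands on. The projection matrix, label arrays, and merge rule are assumptions made for illustration only and are not drawn from Pfeiffer or Sheu.

    import numpy as np

    # Illustrative only: the projection matrix P, label maps, and merge
    # rule below are assumptions, not taken from Pfeiffer or Sheu.
    def fuse_labels(points_xyz, lidar_labels, label_image, P):
        """Project labeled LiDAR points into a labeled image and pair labels.

        points_xyz:   (N, 3) LiDAR points in the camera frame
        lidar_labels: (N,) class label per LiDAR point
        label_image:  (H, W) per-pixel class labels from the image network
        P:            (3, 4) camera projection matrix
        """
        h, w = label_image.shape
        homogeneous = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])
        uvw = homogeneous @ P.T                   # project into the image plane
        keep = np.flatnonzero(uvw[:, 2] > 0)      # points in front of the camera
        uv = (uvw[keep, :2] / uvw[keep, 2:3]).astype(int)
        inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
        # Each fused record pairs a 3-D point with its LiDAR and image labels.
        return [(points_xyz[i], lidar_labels[i], label_image[v, u])
                for i, (u, v) in zip(keep[inside], uv[inside])]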
As per claim 2, Pfeiffer-Sheu disclose the agricultural vehicle of claim 1, wherein each camera has a different field of view than other cameras (Pfeiffer: Para. [0065] discloses each image capture device 102A-102C has a slightly different field of view 610 than the other image capture devices 102.).
As per claim 3, Pfeiffer-Sheu disclose the agricultural vehicle of claim 1, wherein a field of view of at least one camera overlaps at least a portion of a field of view of at least another camera (Pfeiffer: Para. [0020] discloses the field of view of the image capture device and the field of view of the other sensor may at least partially overlap.).
As per claim 9, Pfeiffer-Sheu disclose the agricultural vehicle of claim 1, wherein the imaging controller comprises instructions thereon that, when executed by the at least one processor, cause the imaging controller to fuse segmented image data with the labeled LiDAR data (Pfeiffer: [0028] discloses capturing images and sensor data, segmenting the captured images, projecting the sensor data onto the segmented images; and Sheu: Para. [0033] discloses fusing image data with the labeled LiDAR data.).
As per claim 10, Pfeiffer-Sheu disclose the agricultural vehicle of claim 1, wherein the imaging controller comprises instructions thereon that, when executed by the at least one processor, cause the imaging controller to perform one or more object segmentation operations on the fused data (Pfeiffer: [0028] discloses capturing images and sensor data, segmenting the captured images, projecting the sensor data onto the segmented images; and Sheu: Para. [0033] discloses fusing image data with the labeled LiDAR data.).
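For context on the segmentation-on-fused-data feature of claims 9 and 10, a naive illustrative sketch follows: fused records whose LiDAR and image labels agree are grouped into object segments by Euclidean proximity. The distance threshold and greedy grouping strategy are assumptions for illustration, not the method of either reference.

    import numpy as np

    # Naive illustrative clustering; the 0.5 m threshold and the
    # label-agreement filter are assumptions, not from the references.
    def segment_fused(fused, max_dist=0.5):
        """Group fused (point, lidar_label, image_label) records, as
        produced by the fuse_labels sketch above, into object segments."""
        agreed = [(p, l_lidar) for p, l_lidar, l_image in fused
                  if l_lidar == l_image]          # keep consistent labels only
        segments = []
        for point, label in agreed:
            for seg in segments:
                if seg["label"] == label and any(
                        np.linalg.norm(point - q) < max_dist
                        for q in seg["points"]):
                    seg["points"].append(point)
                    break
            else:
                segments.append({"label": label, "points": [point]})
        return segments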
As per claim 12, Pfeiffer-Sheu disclose the agricultural vehicle of claim 1, further comprising at least one additional controller configured to perform one or more control operations of the agricultural vehicle based on the fused data (Pfeiffer: Para. [0105] discloses input and output devices may include one or more image capture devices, controllers, microcontrollers, and/or other processors to control automotive functions, such as, but not limited to, acceleration, braking, and steering; and Sheu: Para. [0033] discloses the fused data.).
Claims 4 and 6 are rejected under 35 U.S.C. 103 as being unpatentable over Pfeiffer in view of Sheu and further in view of Oblak et al., hereinafter referred to as Oblak (US 2022/0180131 A1).
As per claim 4, Pfeiffer-Sheu disclose the agricultural vehicle of claim 1 (Pfeiffer: Abstract).
However, Pfeiffer-Sheu do not explicitly disclose “… wherein the imaging controller comprises instructions thereon that, when executed by the at least one processor, cause the imaging controller to analyze the image data from at least one camera using an agricultural object detection neural networks different than another agricultural object detection neural networks used to analyze the image data from at least another camera.”
Furthermore, Oblak is in the same field of endeavor and teaches wherein the imaging controller comprises instructions thereon that, when executed by the at least one processor, cause the imaging controller to analyze the image data from at least one camera using an agricultural object detection neural network different than another agricultural object detection neural network used to analyze the image data from at least another camera (Oblak: Para. [0027] discloses using different cameras 40, such as wide-view cameras associated with the sides of the vehicle and cameras associated with the rear and front. Further, Oblak: Paras. [0048], [0060] disclose using different neural networks to analyze images from the different cameras 40.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Pfeiffer-Sheu and Oblak before him or her, to modify the vehicle camera-LiDAR system of Pfeiffer-Sheu to include the different-neural-networks feature described in Oblak. The motivation for doing so would have been to improve object detection and classification by providing a configuration that enables highly accurate imaging in targeted areas of the environment.
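To illustrate the per-camera network arrangement discussed for claim 4, a minimal routing sketch follows. The camera roles, model names, and the load_model loader are hypothetical placeholders, not features disclosed by Oblak.

    # Hypothetical per-camera model routing; camera names, model names,
    # and the load_model callable are placeholders, not from Oblak.
    def build_detectors(load_model):
        return {
            "side_left":  load_model("wide_fov_detector"),    # wide-view side cameras
            "side_right": load_model("wide_fov_detector"),
            "front":      load_model("long_range_detector"),  # front/rear cameras
            "rear":       load_model("long_range_detector"),
        }

    def label_images(frames, detectors):
        """frames: camera name -> image; returns per-camera labeled detections."""
        return {cam: detectors[cam](image) for cam, image in frames.items()}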
As per claim 6, Pfeiffer-Sheu-Oblak disclose the agricultural vehicle of claim 1, wherein the labeled image data comprises only image data from a camera having a relatively narrower field of view in regions having overlapping fields of view of two or more cameras (Pfeiffer: Para. [0065] discloses each image capture device 102A-102C has a slightly different field of view 610 than the other image capture devices 102, and Oblak: Para. [0027] discloses using different cameras 40, such as wide-view cameras associated with the sides of the vehicle and cameras associated with the rear and front.).
Claims 5 and 7-8 are rejected under 35 U.S.C. 103 as being unpatentable over Pfeiffer in view of Sheu and further in view of Wyffels (US 2024/0185434 A1).
As per claim 5, Pfeiffer-Sheu disclose the agricultural vehicle of claim 1 (Pfeiffer: Abstract), wherein the imaging controller comprises instructions thereon that, when executed by the at least one processor, cause the imaging controller to identify (Pfeiffer: Paras. [0042]-[0043], [0050], [0053] disclose generating labeled image data and identifying the data for image classification from more than two cameras.).
However, Pfeiffer-Sheu do not explicitly disclose “… to identify pixels in the labeled image data including image data from two or more cameras that do not agree with one another.”
Furthermore, Wyffels is in the same field of endeavor and teaches to identify pixels in the labeled image data including image data from two or more cameras that do not agree with one another (Wyffels: Paras. [0127], [0160], [0165] disclose cameras of the AV having overlapping fields of view (FOV), where some LiDAR points may be visible from multiple cameras, and analyzing the consistency of those points across sensors; one analyzed feature is which image capture each point is located in. Each point has a per-camera distribution of image detections, and the information from all cameras is probabilistically combined into a single number indicating whether or not the points are likely to be included in the same image capture, calculating agreement or compatibility metrics. Further, Equation 15 calculates a weight wIDC based on the compatibility of detections across the set of all cameras C, thereby identifying data that agrees or does not agree.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, and having the teachings of Pfeiffer-Sheu and Wyffels before him or her, to modify the vehicle camera LiDAR system of Pfeiffer-Sheu to include the pixel/point identification and compatibility analysis feature as described in Wyffels. The motivation for doing so would have been to improve the accuracy and reliability of the imaging controller's output in a system utilizing multiple sensors with overlapping fields of view to resolve occlusions, filter noise, and ensure high-confidence data fusion for the labeled image data.
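As a rough illustration of combining per-camera evidence into a single agreement number, the sketch below pools per-camera probabilities using naive log-odds pooling. This is not Equation 15 of Wyffels; the pooling rule, the prior, and the example values are assumptions made for illustration.

    import math

    # Illustrative pooling only; NOT Equation 15 of Wyffels. Inputs must
    # be probabilities strictly between 0 and 1.
    def agreement_weight(per_camera_probs, prior=0.5):
        """Pool per-camera probabilities that LiDAR points fall in the same
        image detection into a single agreement weight (log-odds pooling)."""
        logit = lambda p: math.log(p / (1.0 - p))
        pooled = logit(prior) + sum(logit(p) - logit(prior)
                                    for p in per_camera_probs)
        return 1.0 / (1.0 + math.exp(-pooled))   # back to a probability

    # Two agreeing cameras vs. one dissenter:
    # agreement_weight([0.9, 0.85, 0.2]) is roughly 0.93 -> likely the same object.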
As per claim 7, Pfeiffer-Sheu-Wyffels disclose the agricultural vehicle of claim 1 (Pfeiffer: Abstract), wherein the imaging controller comprises instructions thereon that, when executed by the at least one processor, cause the imaging controller to cause the cameras to generate the image data at different times (Wyffels: Para. [0127] discloses the cameras are configured to fire as a LiDAR system sweeps over the center of the camera's FOV.).
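A small timing sketch of the staggered-capture idea in claim 7 follows: each camera fires as a spinning LiDAR sweeps over the center of that camera's field of view, so the cameras necessarily capture at different times. The 10 Hz spin rate and the camera azimuths are illustrative assumptions, not parameters from Wyffels.

    # Illustrative timing only; the spin rate and camera azimuths are
    # assumptions, not parameters from Wyffels.
    def trigger_times(camera_azimuths_deg, spin_hz=10.0, sweep_start_deg=0.0):
        """Per-camera trigger offsets (seconds) within one LiDAR revolution."""
        period = 1.0 / spin_hz
        return {cam: ((az - sweep_start_deg) % 360.0) / 360.0 * period
                for cam, az in camera_azimuths_deg.items()}

    # trigger_times({"front": 0.0, "right": 90.0, "rear": 180.0, "left": 270.0})
    # -> offsets of 0 ms, 25 ms, 50 ms, and 75 ms at 10 Hz, so each camera
    #    generates image data at a different time, as claim 7 recites.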
As per claim 8, Pfeiffer-Sheu-Wyffels disclose the agricultural vehicle of claim 1, wherein the imaging controller comprises instructions thereon that, when executed by the at least one processor, cause the imaging controller to combine image data from each camera prior to analyzing the image data (Wyffels: Para. [0137] discloses performing local variation segmentation using the outputs of the LiDAR-to-Image object detection operations to create a plurality of segments of LiDAR data points; performing segment merging operations to merge the plurality of segments of LiDAR data points into objects; and performing segment filtering operations to detect objects in the point cloud defined by the LiDAR dataset.).
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Pfeiffer in view of Sheu and further in view of Davis (US 2023/0418293 A1).
As per claim 11, Pfeiffer-Sheu disclose the agricultural vehicle of claim 1 (Pfeiffer: Abstract).
However, Pfeiffer-Sheu do not explicitly disclose “… wherein the image data comprises RGB data and at least one of SWIR data and LWIR data.”
Further, Davis is in the same field of endeavor and teaches wherein the image data comprises RGB data and at least one of SWIR data and LWIR data (Davis: Para. [0017] discloses wherein the image data comprises RGB data and at least one of SWIR data and LWIR data.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Pfeiffer-Sheu and Davis before him or her, to modify the vehicle camera-LiDAR system of Pfeiffer-Sheu to include image data comprising RGB data and at least one of SWIR data and LWIR data, as described in Davis. The motivation for doing so would have been to improve object detection by providing supplemental spectral information for classification by advanced imaging devices.
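For illustration of the claimed multi-band image data, the sketch below stacks RGB with optional SWIR and LWIR channels into a single array suitable for downstream classification. The channel layout and float conversion are assumptions for illustration, not a disclosure of Davis.

    import numpy as np

    # Illustrative channel stacking; the (H, W, C) layout is an assumption.
    def stack_bands(rgb, swir=None, lwir=None):
        """rgb: (H, W, 3) array; swir/lwir: optional (H, W) single-band arrays.
        Returns an (H, W, 3 + n_extra) multi-band image."""
        bands = [rgb.astype(np.float32)]
        for extra in (swir, lwir):
            if extra is not None:
                bands.append(extra.astype(np.float32)[..., np.newaxis])
        return np.concatenate(bands, axis=-1)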
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure and can be viewed in the list of references.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PEET DHILLON, whose telephone number is (571) 270-5647. The examiner can normally be reached Monday-Friday, 5:00 am-1:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sath V. Perungavoor can be reached at 571-272-7455. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PEET DHILLON/Primary Examiner
Art Unit: 2488
Date: 02-07-2026