Prosecution Insights
Last updated: April 19, 2026
Application No. 18/922,252

Agricultural Vehicle with Enhanced Operation and Safety, and Related Methods

Non-Final OA: §101, §102, §103, §112
Filed: Oct 21, 2024
Examiner: ANDA, JENNIFER MARIE
Art Unit: 3662
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Agco International GmbH
OA Round: 1 (Non-Final)

Grant Probability: 71% (Favorable)
OA Rounds: 1-2
To Grant: 3y 3m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 71% (above average; 95 granted / 134 resolved; +18.9% vs TC avg)
Interview Lift: +29.3% higher allowance on resolved cases with an interview than without
Typical Timeline: 3y 3m average prosecution; 37 applications currently pending
Career History: 171 total applications across all art units

Statute-Specific Performance

§101: 16.1% (-23.9% vs TC avg)
§103: 34.6% (-5.4% vs TC avg)
§102: 16.5% (-23.5% vs TC avg)
§112: 30.3% (-9.7% vs TC avg)
Tech Center averages are estimates; based on career data from 134 resolved cases.
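
For a quick consistency check, the figures above can be reproduced with simple arithmetic. A minimal sketch in Python, assuming the career rate is simply granted divided by resolved and each "vs TC avg" value is a plain percentage-point difference (both are assumptions, since the report does not define its metrics):

# Consistency check of the reported examiner statistics (assumptions noted above).
granted, resolved = 95, 134
career_allow_rate = 100 * granted / resolved   # 70.9 -> reported as 71%

# (rate %, delta vs TC avg %) as reported per statute
statute_stats = {"101": (16.1, -23.9), "103": (34.6, -5.4),
                 "102": (16.5, -23.5), "112": (30.3, -9.7)}
implied_tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in statute_stats.items()}
# implied_tc_avg == {"101": 40.0, "103": 40.0, "102": 40.0, "112": 40.0}

Every statute-specific delta implies the same Tech Center average of roughly 40%, consistent with a single estimated baseline.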

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims
This action is in reply to the application filed 21 October 2024. Claims 1-20 are currently pending and have been examined.

Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Information Disclosure Statement
The information disclosure statements (IDSs) submitted on 23 January 2025 and 31 January 2025 have been considered by the examiner, and initialed copies of the IDSs are hereby attached.

Claim Objections
Applicant is advised that should claim 9 be found allowable, claim 10 will be objected to under 37 CFR 1.75 as being a substantial duplicate thereof. When two claims in an application are duplicates, or else are so close in content that they both cover the same thing despite a slight difference in wording, it is proper after allowing one claim to object to the other as being a substantial duplicate of the allowed claim. See MPEP § 608.01(m).

Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claim 1 recites the limitations "fuse the LiDAR data and the image data based at least partially on the relative angular value" and "create a fused instance including depth information". It is not clear to the examiner what "create a fused instance" refers to and how it is differentiated from fusing the LiDAR data and the image data. Claims 16 and 19 have a similar recitation and are rejected for the same reasons.
Claim 4 recites "on agricultural object data" in line 2. Claim 4 depends from claim 3, which previously recited "agricultural object data" in line 3. It is not clear whether the "agricultural object data" of claim 4 is the same as or different from the data recited in claim 3.
Claim 7 recites "wherein the instructions, when executed by the at least one processor, cause the imaging controller to perform an instance segmentation operation on the image data." Claim 7 depends from claim 1, which recites "segment the image data". It is not clear whether the segmenting of claim 7 further limits the segmenting of claim 1 or is a separate segmenting step.
Claim 9 recites "wherein the portion is based at least partially on an angular position of the LiDAR sensor during exposure of the camera sensor." It is not clear what the angular position is determined with respect to, and whether it is with respect to the vehicle, the FOV, the camera, or something else. Further, claim 9 depends from claim 1, which recites "the LiDAR data including an angular value" and "determine a relative angular value of the LiDAR data with respect to the CFOV".
It is not clear whether the angular position is the same as or different from the relative angular value recited in claim 1. Claim 10 appears to be a duplicate claim and is rejected for the same reason.
Claim 13 recites "determine a distance of the fused instance to the agricultural vehicle". The examiner interprets the fused instance to be the fused data. It is not clear what the distance is being determined with respect to. The examiner believes that this is the distance to the detected object based on the fused data; however, this is not clear from the claim as written.
Claim 20 recites "a vehicle controller" in line 2. Claim 20 depends from claim 19, which recites "a vehicle controller" in line 20. It is not clear whether the vehicle controller of claim 20 is the same as or different from that recited in claim 19. The examiner recommends reciting "the vehicle controller" in claim 20.
Claims 2-15 depend from claim 1 and are similarly rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, based on their dependency on claim 1. Claims 17-18 depend from claim 16 and are similarly rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, based on their dependency on claim 16. Claim 20 depends from claim 19 and is similarly rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, based on its dependency on claim 19.

Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 16-20 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to an abstract idea without significantly more. Following the 2019 Revised Patent Subject Matter Eligibility Guidance (84 Fed. Reg. 50-57 and MPEP § 2106, hereinafter the "2019 Guidance"), the claims appear to recite at least one abstract idea, as explained in the Step 2A, Prong I analysis below. Furthermore, the judicial exceptions do not appear to be integrated into a practical application, as explained in the Step 2A, Prong II analysis below. Further still, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exceptions, as explained in the Step 2B analysis below.

STEP 1: Step 1 of the 2019 Guidance first looks to whether the claimed invention is directed to a statutory category, namely processes, machines, manufactures, and compositions of matter. Claim 16 is directed toward a method of controlling an agricultural vehicle and is therefore eligible for further analysis. Claim 19 is directed toward a device for controlling an agricultural vehicle.

STEP 2A, PRONG I: Step 2A, Prong I, of the 2019 Guidance looks to whether the claimed invention recites any judicial exceptions, including certain groupings of abstract ideas (i.e., mathematical concepts, certain methods of organizing human activity such as a fundamental economic practice, or mental processes). Independent claim 19 includes limitations that recite an abstract idea (emphasized below) and will be used as a representative claim for the remainder of the 101 rejection.
Claim 19 recites: A device for controlling an agricultural vehicle, the device comprising: a camera sensor having a camera field of view (CFOV), the camera sensor configured to be operably coupled to an agricultural vehicle; a light detection and ranging (LiDAR) sensor having a LiDAR field of view (LFOV), the LiDAR sensor configured to be operably coupled to the agricultural vehicle; and an imaging controller in data communication with the camera sensor and the LiDAR sensor, the imaging controller comprising: at least one processor; and at least one non-transitory computer-readable storage medium having instructions store thereon that, when executed by the at least one processor cause the imaging controller to: receive LiDAR data from the LiDAR sensor, the LiDAR data including an angular value within the LFOV; determine a relative angular value of the LiDAR data with respect to the CFOV; receive image data from the camera sensor; segment the image data; fuse the LiDAR data and the image data based at least partially on the relative angular value; create a fused instance including depth information; and provide at least the fused instance to a vehicle controller.
The examiner submits that the foregoing bolded limitations constitute a "mental process" because, under their broadest reasonable interpretation, the claim covers performance of the limitations in the human mind. Specifically, the "determine a relative angular value of the LiDAR data with respect to the CFOV", "segment the image data", "fuse the LiDAR data and the image data based at least partially on the relative angular value", and "create a fused instance including depth information" steps encompass having LiDAR data and image data, comparing the fields of view to determine the relative angular value with pen and paper, taking the portion of the relevant image data where the camera FOV overlaps with the LiDAR FOV, further reviewing the LiDAR data and that portion of the image data provided, for example, on a screen and/or paper, and determining, based on the combination (fusing) of the LiDAR data and the image data, an overlap of the LiDAR data points and image data points to determine an object location, including depth or distance information based on the provided LiDAR data.
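
The fusion steps characterized above describe a camera-LiDAR association pipeline: express each LiDAR return's angle relative to the camera field of view, segment the image, and attach depth to the segments the returns fall within. A minimal sketch of that general kind of pipeline follows, for orientation only; the data layout, the linear angle-to-pixel-column mapping, and every name are illustrative assumptions, not the applicant's disclosed implementation and not Benemann's.

from dataclasses import dataclass

@dataclass
class LidarPoint:
    angle_deg: float   # angular value within the LiDAR field of view (LFOV)
    range_m: float     # measured distance to the return

def relative_angle(point: LidarPoint, lfov_offset_deg: float) -> float:
    # Re-express a LiDAR angle relative to the camera FOV (CFOV) axis.
    return point.angle_deg + lfov_offset_deg

def fuse(points: list[LidarPoint], segments: list[dict],
         lfov_offset_deg: float, cfov_deg: float, image_width_px: int) -> list[dict]:
    # Attach depth to image segments whose pixel span covers a LiDAR return.
    fused_instances = []
    for seg in segments:                       # seg: {"label", "x_min", "x_max"} in pixels
        depths = []
        for p in points:
            rel = relative_angle(p, lfov_offset_deg)
            if abs(rel) > cfov_deg / 2:        # return falls outside the camera FOV
                continue
            # Map the relative angle to an image column (linear model for brevity).
            col = (rel / cfov_deg + 0.5) * image_width_px
            if seg["x_min"] <= col <= seg["x_max"]:
                depths.append(p.range_m)
        if depths:                             # a "fused instance including depth information"
            fused_instances.append({**seg, "depth_m": min(depths)})
    return fused_instances
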
STEP 2A, PRONG II: Regarding Prong II of the Step 2A analysis in the 2019 PEG, the claims are analyzed to determine whether the claim, as a whole, integrates the abstract idea into a practical application. As noted in the 2019 PEG, it must be determined whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception. The courts have indicated that additional elements that merely use a computer to implement an abstract idea, add insignificant extra-solution activity, or generally link use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a "practical application".
In the present case, the additional limitations beyond the above-noted abstract idea are as follows (where the underlined portions are the "additional limitations" while the bolded portions continue to represent the "abstract idea"): Claim 19 recites: A device for controlling an agricultural vehicle, the device comprising: a camera sensor having a camera field of view (CFOV), the camera sensor configured to be operably coupled to an agricultural vehicle; a light detection and ranging (LiDAR) sensor having a LiDAR field of view (LFOV), the LiDAR sensor configured to be operably coupled to the agricultural vehicle; and an imaging controller in data communication with the camera sensor and the LiDAR sensor, the imaging controller comprising: at least one processor; and at least one non-transitory computer-readable storage medium having instructions store thereon that, when executed by the at least one processor cause the imaging controller to: receive LiDAR data from the LiDAR sensor, the LiDAR data including an angular value within the LFOV; determine a relative angular value of the LiDAR data with respect to the CFOV; receive image data from the camera sensor; segment the image data; fuse the LiDAR data and the image data based at least partially on the relative angular value; create a fused instance including depth information; and provide at least the fused instance to a vehicle controller.
For the following reasons, the examiner submits that the above-identified additional limitations do not integrate the above-noted abstract idea into a practical application. Regarding the additional limitations of "a camera sensor having a camera field of view (CFOV), the camera sensor configured to be operably coupled to an agricultural vehicle", "a light detection and ranging (LiDAR) sensor having a LiDAR field of view (LFOV), the LiDAR sensor configured to be operably coupled to the agricultural vehicle", "an imaging controller in data communication with the camera sensor and the LiDAR sensor", "at least one processor", "at least one non-transitory computer-readable storage medium having instructions store thereon that, when executed by the at least one processor", "receive LiDAR data from the LiDAR sensor, the LiDAR data including an angular value within the LFOV", "receive image data from the camera sensor", and "provide at least the fused instance to a vehicle controller", the examiner submits that these limitations merely use a computer to implement an abstract idea, add insignificant extra-solution activity, or generally link use of a judicial exception to a particular technological environment or field of use, and therefore do not integrate a judicial exception into a "practical application". Specifically, the courts have held that merely reciting the words "apply it" (or an equivalent) with the judicial exception, merely including instructions to implement an abstract idea on a computer, or merely using the computer as a tool to perform an abstract idea does not integrate a judicial exception into a practical application. See MPEP 2106.05(f).
The additional limitations of "a camera sensor having a camera field of view (CFOV), the camera sensor configured to be operably coupled to an agricultural vehicle", "a light detection and ranging (LiDAR) sensor having a LiDAR field of view (LFOV), the LiDAR sensor configured to be operably coupled to the agricultural vehicle", "an imaging controller in data communication with the camera sensor and the LiDAR sensor", "at least one processor", and "at least one non-transitory computer-readable storage medium having instructions store thereon that, when executed by the at least one processor" are recited at a high level of generality and merely automate the "determining", "segmenting", "fusing", and "creating" steps, therefore acting as a generic computer or generic components such as processors, memory, instructions, and sensors that are simply employed as a tool to perform the abstract idea. Therefore, these additional limitations are no more than mere instructions to apply the exception using a general purpose computer or generic components (see [0002], [0004], [0056], [0058-0059], and [0061-0062] of the instant application). Further, the limitations of "receive LiDAR data from the LiDAR sensor, the LiDAR data including an angular value within the LFOV", "receive image data from the camera sensor", and "provide at least the fused instance to a vehicle controller" are recited at a high level of generality (i.e., as a general means of data gathering or data output) and amount to mere data gathering, which is a form of insignificant extra-solution activity. See at least MPEP 2106.05(g). Thus, these additional elements merely reflect insignificant extra-solution activity.
Thus, taken alone, the additional elements do not integrate the abstract idea into a practical application. Further, looking at the additional limitations as an ordered combination or as a whole, the limitations add nothing that is not already present when looking at the elements taken individually. For instance, there is no indication that the additional elements, when considered as a whole, reflect an improvement in the functioning of a computer or an improvement to another technology or technical field, apply or use the above-noted judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition, implement or use the above-noted judicial exception with a particular machine or manufacture that is integral to the claim, effect a transformation or reduction of a particular article to a different state or thing, or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is not more than a drafting effort designed to monopolize the exception (MPEP § 2106.05). Accordingly, the additional limitations do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.

STEP 2B: Regarding Step 2B of the Revised Guidance, representative independent claim 19 does not include additional elements (considered both individually and as an ordered combination) that are sufficient to amount to significantly more than the judicial exception, for reasons similar to those discussed above with respect to determining that the claim does not integrate the abstract idea into a practical application.
As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of "a camera sensor having a camera field of view (CFOV), the camera sensor configured to be operably coupled to an agricultural vehicle", "a light detection and ranging (LiDAR) sensor having a LiDAR field of view (LFOV), the LiDAR sensor configured to be operably coupled to the agricultural vehicle", "an imaging controller in data communication with the camera sensor and the LiDAR sensor", "at least one processor", and "at least one non-transitory computer-readable storage medium having instructions store thereon that, when executed by the at least one processor" amount to nothing more than mere instructions to apply the exception using a generic computer or generic components (see [0002], [0004], [0056], [0058-0059], and [0061-0062] of the instant application). Mere instructions to apply an exception using a generic computer or generic components that are simply employed as a tool cannot provide an inventive concept. Further, as discussed above, the examiner submits that the additional limitations of "receive LiDAR data from the LiDAR sensor, the LiDAR data including an angular value within the LFOV", "receive image data from the camera sensor", and "provide at least the fused instance to a vehicle controller" are insignificant extra-solution activity. Hence, the claim is not patent eligible.
Claim 16 has similar recitations to claim 19, and the analysis above with respect to claim 19 also applies to claim 16. Dependent claims 17-18 and 20 do not recite any further limitations that cause the claims to be patent eligible. Rather, the limitations of the dependent claims are directed toward additional aspects of the judicial exception and/or well-understood, routine, and conventional additional elements that do not integrate the judicial exception into a practical application. Specifically, the claims only recite limitations further defining the mental process (the "selected" step of claim 18) and insignificant extra-solution activity (the "collecting" step of claim 17). These limitations are considered mental process steps (e.g., selecting) and additional steps that amount to necessary data gathering and/or data output (e.g., collecting). Further, the communication interface is well-understood, routine, and conventional. These additional elements fail to integrate the abstract idea into a practical application because they do not impose meaningful limits on the claimed invention. As such, the additional elements individually and in combination do not amount to significantly more than the abstract idea. Therefore, when considering the combination of elements and the claimed invention as a whole, claims 17-18 and 20 are not patent eligible. Accordingly, claims 16-20 are not patent eligible.

Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 3-5, 7, and 11-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Benemann et al. (US-20220394156-A1, hereinafter "Benemann").
Regarding claim 1, Benemann discloses an agricultural vehicle (see at least Benemann [0029] "In some examples, the vehicle 102 may be an automobile having four wheels and respective tires for each of the wheels. Other types and configurations of vehicles are contemplated, such as, for example, vans, sport utility vehicles, cross-over vehicles, trucks, buses, agricultural vehicles, and construction vehicles. The vehicle 102 may be powered by one or more internal combustion engines, one or more electric motors, hydrogen power, or any combination thereof."), comprising: a propulsion system configured to move the agricultural vehicle (see at least Benemann [0029] "In some examples, the vehicle 102 may be an automobile having four wheels and respective tires for each of the wheels. Other types and configurations of vehicles are contemplated, such as, for example, vans, sport utility vehicles, cross-over vehicles, trucks, buses, agricultural vehicles, and construction vehicles. The vehicle 102 may be powered by one or more internal combustion engines, one or more electric motors, hydrogen power, or any combination thereof." See also Figure 11, [0110] "In at least one example, the vehicle computing device 1104 can include one or more system controllers 1126, which can be configured to control steering, propulsion, braking, safety, emitters, communication, and other systems of the vehicle 1102. These system controller(s) 1126 can communicate with and/or control corresponding systems of the drive module(s) 1114 and/or other components of the vehicle 1102." and [0122] "The drive module(s) 1114 can include many of the vehicle systems, including a high voltage battery, a motor to propel the vehicle, an inverter to convert direct current from the battery into alternating current for use by other vehicle systems, a steering system including a steering motor and steering rack (which can be electric), a braking system including hydraulic or electric actuators, a suspension system including hydraulic and/or pneumatic components…") a steering system configured to orient the agricultural vehicle (see at least Benemann [0029] "In some examples, the vehicle 102 may be an automobile having four wheels and respective tires for each of the wheels. Other types and configurations of vehicles are contemplated, such as, for example, vans, sport utility vehicles, cross-over vehicles, trucks, buses, agricultural vehicles, and construction vehicles.
The vehicle 102 may be powered by one or more internal combustion engines, one or more electric motors, hydrogen power, or any combination thereof.” See also Figure 11, [0110] “In at least one example, the vehicle computing device 1104 can include one or more system controllers 1126, which can be configured to control steering, propulsion, braking, safety, emitters, communication, and other systems of the vehicle 1102. These system controller(s) 1126 can communicate with and/or control corresponding systems of the drive module(s) 1114 and/or other components of the vehicle 1102.”and [0122] “The drive module(s) 1114 can include many of the vehicle systems, including a high voltage battery, a motor to propel the vehicle, an inverter to convert direct current from the battery into alternating current for use by other vehicle systems, a steering system including a steering motor and steering rack (which can be electric), a braking system including hydraulic or electric actuators, a suspension system including hydraulic and/or pneumatic components…”) a vehicle controller configured to provide at least one navigational command to at least one of the propulsion system and the steering system (see at least Benemann [0029] “In some examples, the vehicle 102 may be an automobile having four wheels and respective tires for each of the wheels. Other types and configurations of vehicles are contemplated, such as, for example, vans, sport utility vehicles, cross-over vehicles, trucks, buses, agricultural vehicles, and construction vehicles. The vehicle 102 may be powered by one or more internal combustion engines, one or more electric motors, hydrogen power, or any combination thereof.” See also Figure 11, [0110] “In at least one example, the vehicle computing device 1104 can include one or more system controllers 1126, which can be configured to control steering, propulsion, braking, safety, emitters, communication, and other systems of the vehicle 1102. These system controller(s) 1126 can communicate with and/or control corresponding systems of the drive module(s) 1114 and/or other components of the vehicle 1102.”and [0122] “The drive module(s) 1114 can include many of the vehicle systems, including a high voltage battery, a motor to propel the vehicle, an inverter to convert direct current from the battery into alternating current for use by other vehicle systems, a steering system including a steering motor and steering rack (which can be electric), a braking system including hydraulic or electric actuators, a suspension system including hydraulic and/or pneumatic components…” See also [0019] “Furthermore, the computing system may associate (e.g., fuse) image data with LIDAR data based at least in part on the association of the portion of the LIDAR data with the portion of the scan line, producing associated data. The associated data may be used as an input for controlling movement of the vehicle in some examples.”) a camera sensor having a camera field of view (CFOV), the camera sensor operably coupled to the agricultural vehicle (see at least Benemann Figure 1, sensors 106 including camera. See also Figure 5A, first field of view 508 from image sensor 502. See also [0105] “In the illustrated example, the vehicle 1102 is an autonomous vehicle; however, the vehicle 1102 could be any other type of vehicle, or any other system having at least an image capture device (e.g., a camera enabled smartphone). 
[0117] “As another example, the camera sensors can include multiple cameras disposed at various locations about the exterior and/or interior of the vehicle 1102.” See also [0049] “In some examples, the example 500a may include an image sensor 502, a LIDAR sensor 504, and/or a computing system 506 (e.g., the vehicle computing system 108 described with reference to FIG. 1). The image sensor 502 may have a first field of view 508. The LIDAR sensor 504 may have a second field of view 510. In various examples, a field of view may be associated with the portion of the environment sensed by a sensor at a given time. In some examples, the computing system 506 may include sensor synchronization component(s) 512, synchronization condition(s) 514, timestamp component(s) 110, and/or time data 112.”See also [0051-0052]) a light detection and ranging (LiDAR) sensor having a LiDAR field of view (LFOV), the LiDAR sensor operably coupled to the agricultural vehicle (see at least Benemann Figure 1, sensors 106 including LIDAR. See also Figure 5A, second field of view 508 from LIDAR sensor 504. [0117] “For instance, the LIDAR sensors can include individual LIDAR sensors located at the corners, front, back, sides, and/or top of the vehicle 1102.” See also [0049] “In some examples, the example 500a may include an image sensor 502, a LIDAR sensor 504, and/or a computing system 506 (e.g., the vehicle computing system 108 described with reference to FIG. 1). The image sensor 502 may have a first field of view 508. The LIDAR sensor 504 may have a second field of view 510. In various examples, a field of view may be associated with the portion of the environment sensed by a sensor at a given time. In some examples, the computing system 506 may include sensor synchronization component(s) 512, synchronization condition(s) 514, timestamp component(s) 110, and/or time data 112.”See also [0051-0052]); and an imaging controller in operable communication with the camera sensor and the LiDAR sensor (see at least Figure 5A, computing system 506. See at least [0049] “In some examples, the computing system 506 may include sensor synchronization component(s) 512, synchronization condition(s) 514, timestamp component(s) 110, and/or time data 112.” See also “[0069] According to some examples, the example 600 may include one or more image processing components 602. In some examples, the image processing component(s) 602 may reside in the vehicle computing system 108 (described herein with reference to FIG. 1), the computing system 506 (described herein with reference to FIG. 5A), the vehicle computing device 1104 (described herein with reference to FIG. 11), and/or the computing devices 1140 (described herein with reference to FIG. 11).” See also [0110] “ In at least one example, the vehicle computing device 1104 can include one or more system controllers 1126, which can be configured to control steering, propulsion, braking, safety, emitters, communication, and other systems of the vehicle 1102. 
These system controller(s) 1126 can communicate with and/or control corresponding systems of the drive module(s) 1114 and/or other components of the vehicle 1102.”), the imaging controller comprising: at least one processor (see at least [0126-0127] “The computing device(s) 1140 can include processor(s) 1144 and a memory 1146 storing a maps(s) component 1148, the timestamp component(s) 110, the sensor association component(s) 418, the sensor synchronization component(s) 512, and/or the image processing component(s) 602…[0127] The processor(s) 1116 of the vehicle 1102 and the processor(s) 1144 of the computing device(s) 1140 can be any suitable processor capable of executing instructions to process data and perform operations as described herein.”); and at least one non-transitory computer-readable storage medium having instructions store thereon that, when executed by the at least one processor cause the imaging controller to (see at least [0127-0128] “0127] The processor(s) 1116 of the vehicle 1102 and the processor(s) 1144 of the computing device(s) 1140 can be any suitable processor capable of executing instructions to process data and perform operations as described herein…t 0128] Memory 1118 and 1146 are examples of non-transitory computer-readable media. The memory 1118 and 1146 can store an operating system and one or more software applications, instructions, programs, and/or data to implement the methods described herein and the functions attributed to the various systems.): receive LiDAR data from the LiDAR sensor, the LiDAR data including an angular value within the LFOV (see at least Benemann [0019], “In some examples, a LIDAR sensor of the vehicle may capture LIDAR data (e.g., LIDAR points) within a second field of view of the LIDAR sensor which may at least partially overlap with the first field of view of the image sensor.” See also [0023] “Furthermore, the computing system may determine a second orientation of the second field of view and/or a second pose associated with the LIDAR sensor. In some examples, the respective orientations and/or the respective poses associated with the image sensor and/or the LIDAR sensor may be tracked. According to some examples, the orientations and/or the poses may be tracked relative to one another. The computing system may use field of view orientation information and/or pose information as an input for causing the image sensor to initiate the rolling shutter image capture of the first field of view, e.g., such that at least a first portion of the first field of view (associated with the image sensor) overlaps at least a second portion of the second field of view (associated with the LIDAR sensor) in accordance with the synchronization condition(s). …See also [0050-0054] for discussion regarding angular value of LFOV “[0050] According to some examples, the second field of view 510 of the LIDAR sensor 504 may move relative to the first field of view 508 of the image sensor 502. In some examples, the second field of view 510 may be rotatable (e.g., about one or more axes). In some non-limiting examples, the second field of view 510 may be rotatable 360 degrees. In some examples, the amount of rotation may be less than 360 degrees.”. See also [0054] “In some examples, the sensor synchronization component(s) 512 may determine a first pose of the image sensor 502. The first pose may include a first position and/or a first orientation (e.g., an x-, y-, z-position, roll, pitch, and/or yaw) associated with the image sensor 502. 
Furthermore, the sensor synchronization component(s) 512 may determine a second pose of the LIDAR sensor 504. The second pose may include a second position and/or a second orientation (e.g., an x-, y-, z-position, roll, pitch, and/or yaw) associated with the LIDAR sensor 504. In some examples, the respective poses of the image sensor 502 and/or the LIDAR sensor 504 may be tracked. According to some examples, the first pose and the second pose may be tracked relative to one another. The sensor synchronization component(s) 512 may use the pose information (e.g., the first pose, the second pose, etc.) as an input for causing the image sensor 502 to initiate the rolling shutter image capture 518 of the first field of view 508, e.g., such that at least a first portion of the first field of view 508 overlaps at least a second portion of the second field of view 510 in accordance with the synchronization condition(s) 514.”) determine a relative angular value of the LiDAR data with respect to the CFOV (see at least Benemann [0050-0054], [0054] “In some examples, the sensor synchronization component(s) 512 may determine a first pose of the image sensor 502. The first pose may include a first position and/or a first orientation (e.g., an x-, y-, z-position, roll, pitch, and/or yaw) associated with the image sensor 502. Furthermore, the sensor synchronization component(s) 512 may determine a second pose of the LIDAR sensor 504. The second pose may include a second position and/or a second orientation (e.g., an x-, y-, z-position, roll, pitch, and/or yaw) associated with the LIDAR sensor 504. In some examples, the respective poses of the image sensor 502 and/or the LIDAR sensor 504 may be tracked. According to some examples, the first pose and the second pose may be tracked relative to one another. The sensor synchronization component(s) 512 may use the pose information (e.g., the first pose, the second pose, etc.) as an input for causing the image sensor 502 to initiate the rolling shutter image capture 518 of the first field of view 508, e.g., such that at least a first portion of the first field of view 508 overlaps at least a second portion of the second field of view 510 in accordance with the synchronization condition(s) 514.”); receive image data from the camera sensor (see at least Benemann [0017], “[0017] In some examples, an image sensor of a vehicle (e.g., an autonomous vehicle) may capture an image of a scene within a first field of view of the image sensor. In some examples, the image capture may be performed by scanning across the scene (e.g., vertically or horizontally) to capture a portion of the scene (e.g., a first scan line) before capturing another portion of the scene (e.g., a second scan line), and so on, such as in a rolling shutter image capture.” See also Figure 1, sensors 106 including camera. See also [0105] [0117] for showing the image sensor may be a camera.) segment the image data (see at least Benemann, Figure 1, scan line of image frame 116, See also [0016-0017] “This disclosure is directed to techniques for adding time data to portions of an image at capture (e.g., scan lines of an image frame during a rolling shutter image capture)…[0017] In some examples, an image sensor of a vehicle (e.g., an autonomous vehicle) may capture an image of a scene within a first field of view of the image sensor. 
In some examples, the image capture may be performed by scanning across the scene (e.g., vertically or horizontally) to capture a portion of the scene (e.g., a first scan line) before capturing another portion of the scene (e.g., a second scan line), and so on, such as in a rolling shutter image capture. The rolling shutter image capture may produce scan lines of an image frame. A scan line may include scan line data (e.g., pixel data) and end of line data. The end of line data may indicate an end of the scan line.” See also [0024] and [0046-0054] for more in depth discussion. See also [0107] “In some instances, the perception component 1122 can include functionality to perform object detection, segmentation, and/or classification. In some examples, the perception component 1122 can provide processed sensor data that indicates a presence of an entity that is proximate to the vehicle 1102 and/or a classification of the entity as an entity type (e.g., car, pedestrian, cyclist, animal, building, tree, road surface, curb, sidewalk, unknown, etc.)…”) fuse the LiDAR data and the image data based at least partially on the relative angular value (see at least Benemann [0019] “Furthermore, the computing system may associate (e.g., fuse) image data with LIDAR data based at least in part on the association of the portion of the LIDAR data with the portion of the scan line, producing associated data. The associated data may be used as an input for controlling movement of the vehicle in some examples.” [0024] The techniques discussed herein can improve a functioning of a computing device in a number of ways. For example, the techniques discussed herein may include adding time data to individual scan lines of an image frame, which may allow a computing device to accurately align (or otherwise associate) pixels of the image frame with LIDAR points, e.g., to achieve accurate multi-modal sensor fusion, sensor calibration, 3D reconstruction, multi-modal calibration, and the like. [0047] As a non-limiting example, the sensor association component(s) 418 may associate, based at least in part on the time data of the scan line, at least a portion of the pixel data of the scan line with one or more LIDAR points 416 of the LIDAR data. In some cases, the association may indicate that the portion of the pixel data and the LIDAR point(s) were captured substantially contemporaneously (e.g., during a time period that satisfies a time threshold). In some examples, the sensor association component(s) 418 may associate (e.g., fuse) a portion of the image data with a portion of the LIDAR data to produce associated data 420. For example, the association may be based at least in part on the association between the portion of the pixel data and the LIDAR point(s). In some examples, the associated data 420 may be used to control movement of a vehicle (e.g., the vehicle 102 described with reference to FIG. 1). The examiner notes that the time threshold of Benemann is based on the relative angular value as discussed in [0046-0054].); create a fused instance including depth information (see at least Benemann [0019] and [0047] as cited above and [0071] “According to some examples, the image processing component(s) 602 may receive, as input, distance data 606 associated with one or more scan lines of the rolling shutter image capture. As a non-limiting example, the distance data 606 may be associated with LIDAR data captured by a LIDAR sensor. 
The LIDAR sensor may capture LIDAR data associated with the object, and the LIDAR data may include distance data 606 representing a distance between the LIDAR sensor and the object at a given time. The distance data 606 may be used to determine a distance between the image sensor and the object at a given time. The image processing component(s) 602 may modify, based at least in part on the distance data 606 and/or time data added to one or more of the scan lines of the image frame associated with the distorted image 604, the distorted image 604 to correct and/or compensate for the distortion effect(s) of the distorted image 604. The image processing component(s) 602 may output a corrected image 608 (e.g., image data associated with a corrected image). In some examples, the corrected image 608 may be distorted less than the distorted image 604, as the image processing component(s) 602 have corrected and/or compensated for the distortion effect(s) of the distorted image 604.”); and provide at least the fused instance to the vehicle controller (see at least Benemann [0019] Furthermore, the computing system may associate (e.g., fuse) image data with LIDAR data based at least in part on the association of the portion of the LIDAR data with the portion of the scan line, producing associated data. The associated data may be used as an input for controlling movement of the vehicle in some examples.” See also [0047] “In some examples, the sensor association component(s) 418 may associate (e.g., fuse) a portion of the image data with a portion of the LIDAR data to produce associated data 420. For example, the association may be based at least in part on the association between the portion of the pixel data and the LIDAR point(s). In some examples, the associated data 420 may be used to control movement of a vehicle (e.g., the vehicle 102 described with reference to FIG. 1).” See also [0085] “At 818, the process 800 may include controlling movement of a vehicle (e.g., an autonomous vehicle). For example, the movement of the vehicle may be controlled based at least in part on the associated data. As a non-limiting example, the associated data may indicate a location of an obstacle in the environment of the vehicle. A trajectory and/or a route that avoids a collision between the vehicle and the obstacle may be determined based at least in part on the associated data. The vehicle may be controlled to move along the trajectory and/or route.” The examiner notes that the term associated data is the fused data as discloses in [0019] and [0047].) wherein the vehicle controller is configured to provide the at least one navigational command based at least partially on the fused instance (see at least Benemann Figure 8, step 818 “Control, based at least in part of the associated data, movement of a vehicle”). See also [0019] Furthermore, the computing system may associate (e.g., fuse) image data with LIDAR data based at least in part on the association of the portion of the LIDAR data with the portion of the scan line, producing associated data. 
The associated data may be used as an input for controlling movement of the vehicle in some examples." See also [0024] "Consequently, the computing system of the vehicle may be able to improve its detection of objects (e.g., obstacles) and its trajectory and/or route planning, e.g., to control movement of the vehicle to avoid colliding with obstacles." See also [0047] "In some examples, the sensor association component(s) 418 may associate (e.g., fuse) a portion of the image data with a portion of the LIDAR data to produce associated data 420. For example, the association may be based at least in part on the association between the portion of the pixel data and the LIDAR point(s). In some examples, the associated data 420 may be used to control movement of a vehicle (e.g., the vehicle 102 described with reference to FIG. 1)." See also [0085] "At 818, the process 800 may include controlling movement of a vehicle (e.g., an autonomous vehicle). For example, the movement of the vehicle may be controlled based at least in part on the associated data. As a non-limiting example, the associated data may indicate a location of an obstacle in the environment of the vehicle. A trajectory and/or a route that avoids a collision between the vehicle and the obstacle may be determined based at least in part on the associated data. The vehicle may be controlled to move along the trajectory and/or route.") Regarding claim 3, Benemann discloses the agricultural vehicle of claim 1, wherein the instructions, when executed by the at least one processor, cause the imaging controller to provide the image data to a neural network trained using agricultural object data (The examiner would like to point out that agricultural object data is a broad term that can be interpreted as any object that an agricultural vehicle could detect, including other vehicles, trees, plants, ground, etc. Benemann teaches a neural network that is trained in [0113-0117] "In some instances, aspects of some or all of the components discussed herein can include any models, algorithms, and/or machine learning algorithms. For example, in some instances, the components in the memory 1118 (and the memory 1146, discussed below) can be implemented as a neural network…. As can be understood in the context of this disclosure, a neural network can utilize machine learning, which can refer to a broad class of such algorithms in which an output is generated based on learned parameters." Further, Benemann discloses that the vehicle may be an agricultural vehicle in [0029] "Other types and configurations of vehicles are contemplated, such as, for example, vans, sport utility vehicles, cross-over vehicles, trucks, buses, agricultural vehicles, and construction vehicles." The examiner notes that the vehicle can be an agricultural vehicle, and objects that the vehicle detects and is capable of classifying (based on training) would be agricultural object data. Further, [0107] teaches that the agricultural vehicle can classify the object by entity type, including other vehicles, animals, and trees. The other vehicles, animals, and trees are interpreted as agricultural objects. See at least Benemann [0107] "In some instances, the perception component 1122 can include functionality to perform object detection, segmentation, and/or classification.
In some examples, the perception component 1122 can provide processed sensor data that indicates a presence of an entity that is proximate to the vehicle 1102 and/or a classification of the entity as an entity type (e.g., car, pedestrian, cyclist, animal, building, tree, road surface, curb, sidewalk, unknown, etc.)." Machine learning is predicting based on known properties learned from training data.) Regarding claim 4, Benemann discloses the agricultural vehicle of claim 3, wherein the neural network is selected from a plurality of neural networks trained on agricultural object data associated with an agricultural task (The examiner would like to point out that agricultural object data is a broad term that can be interpreted as any object that an agricultural vehicle could detect, including other vehicles, trees, plants, ground, etc. Similarly, an agricultural task is any task that an agricultural vehicle is capable of performing, including propulsion, steering, braking, etc. As explained above, Benemann teaches a neural network that is trained in [0113-0117] "In some instances, aspects of some or all of the components discussed herein can include any models, algorithms, and/or machine learning algorithms. For example, in some instances, the components in the memory 1118 (and the memory 1146, discussed below) can be implemented as a neural network…. As can be understood in the context of this disclosure, a neural network can utilize machine learning, which can refer to a broad class of such algorithms in which an output is generated based on learned parameters." Further, Benemann discloses that the vehicle may be an agricultural vehicle in [0029] "Other types and configurations of vehicles are contemplated, such as, for example, vans, sport utility vehicles, cross-over vehicles, trucks, buses, agricultural vehicles, and construction vehicles." The examiner notes that the vehicle can be an agricultural vehicle, and objects that the vehicle detects and is capable of classifying (based on training) would be agricultural object data associated with an agricultural task. Further, [0107] teaches that the agricultural vehicle can classify the object by entity type, including other vehicles, animals, and trees. The other vehicles, animals, and trees are interpreted as agricultural objects. See at least Benemann [0107] "In some instances, the perception component 1122 can include functionality to perform object detection, segmentation, and/or classification. In some examples, the perception component 1122 can provide processed sensor data that indicates a presence of an entity that is proximate to the vehicle 1102 and/or a classification of the entity as an entity type (e.g., car, pedestrian, cyclist, animal, building, tree, road surface, curb, sidewalk, unknown, etc.)." Machine learning is predicting based on known properties learned from training data.) Regarding claim 5, Benemann discloses the agricultural vehicle of claim 1, wherein the instructions, when executed by the at least one processor, cause the imaging controller to project the LiDAR data onto the image data value (see at least Benemann [0019] "Furthermore, the computing system may associate (e.g., fuse) image data with LIDAR data based at least in part on the association of the portion of the LIDAR data with the portion of the scan line, producing associated data.
The associated data may be used as an input for controlling movement of the vehicle in some examples.” [0024] The techniques discussed herein can improve a functioning of a computing device in a number of ways. For example, the techniques discussed herein may include adding time data to individual scan lines of an image frame, which may allow a computing device to accurately align (or otherwise associate) pixels of the image frame with LIDAR points, e.g., to achieve accurate multi-modal sensor fusion, sensor calibration, 3D reconstruction, multi-modal calibration, and the like. [0047] As a non-limiting example, the sensor association component(s) 418 may associate, based at least in part on the time data of the scan line, at least a portion of the pixel data of the scan line with one or more LIDAR points 416 of the LIDAR data. In some cases, the association may indicate that the portion of the pixel data and the LIDAR point(s) were captured substantially contemporaneously (e.g., during a time period that satisfies a time threshold). In some examples, the sensor association component(s) 418 may associate (e.g., fuse) a portion of the image data with a portion of the LIDAR data to produce associated data 420. For example, the association may be based at least in part on the association between the portion of the pixel data and the LIDAR point(s). In some examples, the associated data 420 may be used to control movement of a vehicle (e.g., the vehicle 102 described with reference to FIG. 1).” See also [0084]); Regarding claim 7, Benemann discloses the agricultural vehicle of claim 1, wherein the instructions, when executed by the at least one processor, cause the imaging controller to perform an instance segmentation operation on the image data (see at least Benemann, Figure 1, scan line of image frame 116, See also [0016-0017] “This disclosure is directed to techniques for adding time data to portions of an image at capture (e.g., scan lines of an image frame during a rolling shutter image capture)…[0017] In some examples, an image sensor of a vehicle (e.g., an autonomous vehicle) may capture an image of a scene within a first field of view of the image sensor. In some examples, the image capture may be performed by scanning across the scene (e.g., vertically or horizontally) to capture a portion of the scene (e.g., a first scan line) before capturing another portion of the scene (e.g., a second scan line), and so on, such as in a rolling shutter image capture. The rolling shutter image capture may produce scan lines of an image frame. A scan line may include scan line data (e.g., pixel data) and end of line data. The end of line data may indicate an end of the scan line.” See also [0024] and [0046-0054] for more in depth discussion. See also [0107] “In some instances, the perception component 1122 can include functionality to perform object detection, segmentation, and/or classification. 
In some examples, the perception component 1122 can provide processed sensor data that indicates a presence of an entity that is proximate to the vehicle 1102 and/or a classification of the entity as an entity type (e.g., car, pedestrian, cyclist, animal, building, tree, road surface, curb, sidewalk, unknown, etc.)…”) Regarding claim 11, Benemann discloses the agricultural vehicle of claim 1, wherein the instructions, when executed by the at least one processor, further cause the imaging controller to select a portion of the image data having a lateral position based at least partially on the relative angular value of the LiDAR data (see at least Benemann [0031] “ According to some implementations, a sensor 106 (e.g., an image sensor such as a rolling shutter image capture device) of the vehicle 102 may capture an image of a scene within a field of view associated with the sensor 106. In some examples, the image capture may be performed by scanning across the scene (e.g., vertically or horizontally) to capture a portion of the scene (e.g., a first scan line) before capturing another portion of the scene (e.g., a second scan line), such as in a rolling shutter image capture. FIG. 1 includes an example representation of a rolling shutter image capture 114 of a scene that is within a field of view of a sensor 106 of the vehicle 102. As indicated in this non-limiting example, the rolling shutter image capture 114 may be performed by horizontally scanning across the scene. The rolling shutter image capture may produce scan lines (e.g., scan line 1, scan line 2, . . . scan line n) of an image frame 116. The scan lines may include scan line data (e.g., pixel data 118). Furthermore, in some examples, the scan lines may be associated with, and/or may include, end of line data (e.g., as indicated in FIG. 2).” See also [0046-0054]. For example [0052] “In some examples, the sensor synchronization component(s) 512 may determine a first orientation of the first field of view 508 of the image sensor 502. Furthermore, the sensor synchronization component(s) 512 may determine a second orientation of the second field of view 510 of the LIDAR sensor 504. In some examples, the respective orientations of the image sensor 502 and/or the LIDAR sensor 504 may be tracked. According to some examples, the first orientation and the second orientation may be tracked relative to one another. The sensor synchronization component(s) 512 may use field of view orientation information (e.g., the first orientation, the second orientation, etc.) as an input for causing the image sensor 502 to initiate the rolling shutter image capture 518 of the first field of view 508, e.g., such that at least a first portion of the first field of view 508 overlaps at least a second portion of the second field of view 510 in accordance with the synchronization condition(s) 514. 
In various examples, the second field of view 510 may move relative to the first field of view 508 during the rolling shutter image capture 518.” The examiner notes that the time threshold of Benemann is based on the relative angular value as discussed in [0046-0054].); Regarding claim 12, Benemann discloses the agricultural vehicle of claim 1, wherein the image data includes a plurality of slices of the CFOV captured within a scan of the LiDAR data (see at least Benemann [0031] “ According to some implementations, a sensor 106 (e.g., an image sensor such as a rolling shutter image capture device) of the vehicle 102 may capture an image of a scene within a field of view associated with the sensor 106. In some examples, the image capture may be performed by scanning across the scene (e.g., vertically or horizontally) to capture a portion of the scene (e.g., a first scan line) before capturing another portion of the scene (e.g., a second scan line), such as in a rolling shutter image capture. FIG. 1 includes an example representation of a rolling shutter image capture 114 of a scene that is within a field of view of a sensor 106 of the vehicle 102. As indicated in this non-limiting example, the rolling shutter image capture 114 may be performed by horizontally scanning across the scene. The rolling shutter image capture may produce scan lines (e.g., scan line 1, scan line 2, . . . scan line n) of an image frame 116. The scan lines may include scan line data (e.g., pixel data 118). Furthermore, in some examples, the scan lines may be associated with, and/or may include, end of line data (e.g., as indicated in FIG. 2).” See also [0046-0054]. For example [0052] “In some examples, the sensor synchronization component(s) 512 may determine a first orientation of the first field of view 508 of the image sensor 502. Furthermore, the sensor synchronization component(s) 512 may determine a second orientation of the second field of view 510 of the LIDAR sensor 504. In some examples, the respective orientations of the image sensor 502 and/or the LIDAR sensor 504 may be tracked. According to some examples, the first orientation and the second orientation may be tracked relative to one another. The sensor synchronization component(s) 512 may use field of view orientation information (e.g., the first orientation, the second orientation, etc.) as an input for causing the image sensor 502 to initiate the rolling shutter image capture 518 of the first field of view 508, e.g., such that at least a first portion of the first field of view 508 overlaps at least a second portion of the second field of view 510 in accordance with the synchronization condition(s) 514. In various examples, the second field of view 510 may move relative to the first field of view 508 during the rolling shutter image capture 518.” The examiner notes that the time threshold of Benemann is based on the relative angular value as discussed in [0046-0054].); Regarding claim 13, Benemann discloses the agricultural vehicle of claim 1, wherein the instructions, when executed by the at least one processor, further cause the imaging controller to determine a distance of the fused instance to the agricultural vehicle and provide the distance to the vehicle controller (see at least Benemann [0019] and [0047] as cited above and [0071] “According to some examples, the image processing component(s) 602 may receive, as input, distance data 606 associated with one or more scan lines of the rolling shutter image capture. 
As a non-limiting example, the distance data 606 may be associated with LIDAR data captured by a LIDAR sensor. The LIDAR sensor may capture LIDAR data associated with the object, and the LIDAR data may include distance data 606 representing a distance between the LIDAR sensor and the object at a given time. The distance data 606 may be used to determine a distance between the image sensor and the object at a given time. The image processing component(s) 602 may modify, based at least in part on the distance data 606 and/or time data added to one or more of the scan lines of the image frame associated with the distorted image 604, the distorted image 604 to correct and/or compensate for the distortion effect(s) of the distorted image 604. The image processing component(s) 602 may output a corrected image 608 (e.g., image data associated with a corrected image). In some examples, the corrected image 608 may be distorted less than the distorted image 604, as the image processing component(s) 602 have corrected and/or compensated for the distortion effect(s) of the distorted image 604.” See also see at least Benemann [0019] Furthermore, the computing system may associate (e.g., fuse) image data with LIDAR data based at least in part on the association of the portion of the LIDAR data with the portion of the scan line, producing associated data. The associated data may be used as an input for controlling movement of the vehicle in some examples.” See also [0047] “In some examples, the sensor association component(s) 418 may associate (e.g., fuse) a portion of the image data with a portion of the LIDAR data to produce associated data 420. For example, the association may be based at least in part on the association between the portion of the pixel data and the LIDAR point(s). In some examples, the associated data 420 may be used to control movement of a vehicle (e.g., the vehicle 102 described with reference to FIG. 1).” See also [0085] “At 818, the process 800 may include controlling movement of a vehicle (e.g., an autonomous vehicle). For example, the movement of the vehicle may be controlled based at least in part on the associated data. As a non-limiting example, the associated data may indicate a location of an obstacle in the environment of the vehicle. A trajectory and/or a route that avoids a collision between the vehicle and the obstacle may be determined based at least in part on the associated data. The vehicle may be controlled to move along the trajectory and/or route.” The examiner notes that the term associated data is the fused data as disclosed in [0019] and [0047].) Regarding claim 14, Benemann discloses the agricultural vehicle of claim 13, wherein the imaging controller further comprises instructions thereon that, when executed by the at least one processor, causes the imaging controller to record a location and orientation of the fused instance to a storage device (see at least Benemann [0083] “At 814, the process 800 may include determining an association between the measurement and at least a portion of the scan line. For example, the association may be determined based at least in part on the first time data and/or the second time data. In some examples, the measurement and the portion of the scan line may be associated based at least in part on a determination that a time difference between the first time data and the second time data satisfies a threshold time difference. 
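For illustration only, the threshold-time-difference association described in the passage quoted above — and the examiner's repeated observation that Benemann's time threshold stands in for the claimed relative angular value — might be sketched as follows. This is a minimal sketch; the function names, the 3600 deg/s spin rate, and the 2 ms threshold are hypothetical and are not taken from Benemann or the claims.

def lidar_point_time(scan_start_s, azimuth_deg, spin_rate_deg_per_s=3600.0):
    # A spinning LiDAR's azimuth maps to a capture time within the scan, which is
    # why a time threshold can act as a proxy for a relative angular value.
    return scan_start_s + azimuth_deg / spin_rate_deg_per_s

def associate(points, scan_lines, max_dt_s=0.002):
    # points: (azimuth_deg, range_m) LiDAR returns from one scan starting at t = 0
    # scan_lines: (line_index, capture_time_s) rolling-shutter scan lines
    fused = []
    for azimuth_deg, range_m in points:
        t_point = lidar_point_time(0.0, azimuth_deg)
        line_index, t_line = min(scan_lines, key=lambda sl: abs(sl[1] - t_point))
        if abs(t_line - t_point) <= max_dt_s:  # "threshold time difference"
            fused.append((line_index, azimuth_deg, range_m))  # "associated data"
    return fused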
For example, the threshold time difference may be set to a low value such that the measurement and the portion of the scan line are associated with one another if they are captured substantially contemporaneously. The association between the measurement and the portion of the scan line may be mapped in a database in some examples. In some examples, the measurement and the portion of the scan line may be associated with one another regardless of the time difference between the first time data and the second time data.”.) Regarding claim 15, Benemann discloses the agricultural vehicle of claim 13, wherein the imaging controller further comprises instructions thereon that, when executed by the at least one processor, causes the imaging controller to record a location and velocity of at least one moving object identified in the fused data (see at least Benemann [0107] “In some instances, the perception component 1122 can include functionality to perform object detection, segmentation, and/or classification. In some examples, the perception component 1122 can provide processed sensor data that indicates a presence of an entity that is proximate to the vehicle 1102 and/or a classification of the entity as an entity type (e.g., car, pedestrian, cyclist, animal, building, tree, road surface, curb, sidewalk, unknown, etc.). In additional and/or alternative examples, the perception component 1122 can provide processed sensor data that indicates one or more characteristics associated with a detected entity (e.g., a tracked object) and/or the environment in which the entity is positioned. In some examples, characteristics associated with an entity can include, but are not limited to, an x-position (global and/or local position), a y-position (global and/or local position), a z-position (global and/or local position), an orientation (e.g., a roll, pitch, yaw), an entity type (e.g., a classification), a velocity of the entity, an acceleration of the entity, an extent of the entity (size), etc. Characteristics associated with the environment can include, but are not limited to, a presence of another entity in the environment, a state of another entity in the environment, a time of day, a day of a week, a season, a weather condition, an indication of darkness/light, etc.” See also “[0111-0112] “The memory 1118 can further include one or more maps (not shown) that can be used by the vehicle 1102 to navigate within the environment. For the purpose of this discussion, a map can be any number of data structures modeled in two dimensions, three dimensions, or N-dimensions that are capable of providing information about an environment, such as, but not limited to, topologies (such as intersections), streets, mountain ranges, roads, terrain, and the environment in general. In some instances, a map can include, but is not limited to: texture information (e.g., color information (e.g., RGB color information, Lab color information, HSV/HSL color information), and the like), intensity information (e.g., LIDAR information, RADAR information, and the like); spatial information (e.g., image data projected onto a mesh, individual “surfels” (e.g., polygons associated with individual color and/or intensity)), reflectivity information (e.g., specularity information, retroreflectivity information, BRDF information, BSSRDF information, and the like). In one example, a map can include a three-dimensional mesh of the environment. 
In some instances, the map can be stored in a tiled format, such that individual tiles of the map represent a discrete portion of an environment and can be loaded into working memory as needed. In at least one example, the one or more maps can include at least one map (e.g., images and/or a mesh). In some examples, the vehicle 1102 can be controlled based at least in part on the maps. That is, the maps can be used in connection with the localization component 1120, the perception component 1122, and/or the planning component 1124 to determine a location of the vehicle 1102, identify objects in an environment, and/or generate routes and/or trajectories to navigate within an environment….[0112] In some examples, the one or more maps can be stored on a remote computing device(s) (such as the computing device(s) 1140) accessible via network(s) 1142. In some examples, multiple maps can be stored based on, for example, a characteristic (e.g., type of entity, time of day, day of week, season of the year, etc.). Storing multiple maps can have similar memory requirements, but increase the speed at which data in a map can be accessed.”) Regarding claim 16, Benemann discloses a method of controlling an agricultural vehicle, the method comprising: receiving LiDAR data from a LiDAR sensor, the LiDAR data including an angular value within a LiDAR field of view (see at least Benemann [0019], “In some examples, a LIDAR sensor of the vehicle may capture LIDAR data (e.g., LIDAR points) within a second field of view of the LIDAR sensor which may at least partially overlap with the first field of view of the image sensor.” See also [0023] “Furthermore, the computing system may determine a second orientation of the second field of view and/or a second pose associated with the LIDAR sensor. In some examples, the respective orientations and/or the respective poses associated with the image sensor and/or the LIDAR sensor may be tracked. According to some examples, the orientations and/or the poses may be tracked relative to one another. The computing system may use field of view orientation information and/or pose information as an input for causing the image sensor to initiate the rolling shutter image capture of the first field of view, e.g., such that at least a first portion of the first field of view (associated with the image sensor) overlaps at least a second portion of the second field of view (associated with the LIDAR sensor) in accordance with the synchronization condition(s). …See also [0050-0054] for discussion regarding angular value of LFOV “[0050] According to some examples, the second field of view 510 of the LIDAR sensor 504 may move relative to the first field of view 508 of the image sensor 502. In some examples, the second field of view 510 may be rotatable (e.g., about one or more axes). In some non-limiting examples, the second field of view 510 may be rotatable 360 degrees. In some examples, the amount of rotation may be less than 360 degrees.”. See also [0054] “In some examples, the sensor synchronization component(s) 512 may determine a first pose of the image sensor 502. The first pose may include a first position and/or a first orientation (e.g., an x-, y-, z-position, roll, pitch, and/or yaw) associated with the image sensor 502. Furthermore, the sensor synchronization component(s) 512 may determine a second pose of the LIDAR sensor 504. 
The second pose may include a second position and/or a second orientation (e.g., an x-, y-, z-position, roll, pitch, and/or yaw) associated with the LIDAR sensor 504. In some examples, the respective poses of the image sensor 502 and/or the LIDAR sensor 504 may be tracked. According to some examples, the first pose and the second pose may be tracked relative to one another. The sensor synchronization component(s) 512 may use the pose information (e.g., the first pose, the second pose, etc.) as an input for causing the image sensor 502 to initiate the rolling shutter image capture 518 of the first field of view 508, e.g., such that at least a first portion of the first field of view 508 overlaps at least a second portion of the second field of view 510 in accordance with the synchronization condition(s) 514.”); determining a relative angular value of the LiDAR data with respect to a camera field of view (see at least Benemann [0050-0054], [0054] “In some examples, the sensor synchronization component(s) 512 may determine a first pose of the image sensor 502. The first pose may include a first position and/or a first orientation (e.g., an x-, y-, z-position, roll, pitch, and/or yaw) associated with the image sensor 502. Furthermore, the sensor synchronization component(s) 512 may determine a second pose of the LIDAR sensor 504. The second pose may include a second position and/or a second orientation (e.g., an x-, y-, z-position, roll, pitch, and/or yaw) associated with the LIDAR sensor 504. In some examples, the respective poses of the image sensor 502 and/or the LIDAR sensor 504 may be tracked. According to some examples, the first pose and the second pose may be tracked relative to one another. The sensor synchronization component(s) 512 may use the pose information (e.g., the first pose, the second pose, etc.) as an input for causing the image sensor 502 to initiate the rolling shutter image capture 518 of the first field of view 508, e.g., such that at least a first portion of the first field of view 508 overlaps at least a second portion of the second field of view 510 in accordance with the synchronization condition(s) 514.”); receiving image data from a camera sensor (see at least Benemann [0017], “[0017] In some examples, an image sensor of a vehicle (e.g., an autonomous vehicle) may capture an image of a scene within a first field of view of the image sensor. In some examples, the image capture may be performed by scanning across the scene (e.g., vertically or horizontally) to capture a portion of the scene (e.g., a first scan line) before capturing another portion of the scene (e.g., a second scan line), and so on, such as in a rolling shutter image capture.” See also Figure 1, sensors 106 including camera. See also [0105] [0117] for showing the image sensor may be a camera.) segmenting the image data (see at least Benemann, Figure 1, scan line of image frame 116, See also [0016-0017] “This disclosure is directed to techniques for adding time data to portions of an image at capture (e.g., scan lines of an image frame during a rolling shutter image capture)…[0017] In some examples, an image sensor of a vehicle (e.g., an autonomous vehicle) may capture an image of a scene within a first field of view of the image sensor. 
In some examples, the image capture may be performed by scanning across the scene (e.g., vertically or horizontally) to capture a portion of the scene (e.g., a first scan line) before capturing another portion of the scene (e.g., a second scan line), and so on, such as in a rolling shutter image capture. The rolling shutter image capture may produce scan lines of an image frame. A scan line may include scan line data (e.g., pixel data) and end of line data. The end of line data may indicate an end of the scan line.” See also [0024] and [0046-0054] for more in depth discussion. See also [0107] “In some instances, the perception component 1122 can include functionality to perform object detection, segmentation, and/or classification. In some examples, the perception component 1122 can provide processed sensor data that indicates a presence of an entity that is proximate to the vehicle 1102 and/or a classification of the entity as an entity type (e.g., car, pedestrian, cyclist, animal, building, tree, road surface, curb, sidewalk, unknown, etc.)…”) fusing the LiDAR data and the image data based at least partially on the relative angular value (see at least Benemann [0019] “Furthermore, the computing system may associate (e.g., fuse) image data with LIDAR data based at least in part on the association of the portion of the LIDAR data with the portion of the scan line, producing associated data. The associated data may be used as an input for controlling movement of the vehicle in some examples.” [0024] The techniques discussed herein can improve a functioning of a computing device in a number of ways. For example, the techniques discussed herein may include adding time data to individual scan lines of an image frame, which may allow a computing device to accurately align (or otherwise associate) pixels of the image frame with LIDAR points, e.g., to achieve accurate multi-modal sensor fusion, sensor calibration, 3D reconstruction, multi-modal calibration, and the like. [0047] As a non-limiting example, the sensor association component(s) 418 may associate, based at least in part on the time data of the scan line, at least a portion of the pixel data of the scan line with one or more LIDAR points 416 of the LIDAR data. In some cases, the association may indicate that the portion of the pixel data and the LIDAR point(s) were captured substantially contemporaneously (e.g., during a time period that satisfies a time threshold). In some examples, the sensor association component(s) 418 may associate (e.g., fuse) a portion of the image data with a portion of the LIDAR data to produce associated data 420. For example, the association may be based at least in part on the association between the portion of the pixel data and the LIDAR point(s). In some examples, the associated data 420 may be used to control movement of a vehicle (e.g., the vehicle 102 described with reference to FIG. 1). The examiner notes that the time threshold of Benemann is based on the relative angular value as discussed in [0046-0054].); creating a fused instance including depth information (see at least Benemann [0019] and [0047] as cited above and [0071] “According to some examples, the image processing component(s) 602 may receive, as input, distance data 606 associated with one or more scan lines of the rolling shutter image capture. As a non-limiting example, the distance data 606 may be associated with LIDAR data captured by a LIDAR sensor. 
The LIDAR sensor may capture LIDAR data associated with the object, and the LIDAR data may include distance data 606 representing a distance between the LIDAR sensor and the object at a given time. The distance data 606 may be used to determine a distance between the image sensor and the object at a given time. The image processing component(s) 602 may modify, based at least in part on the distance data 606 and/or time data added to one or more of the scan lines of the image frame associated with the distorted image 604, the distorted image 604 to correct and/or compensate for the distortion effect(s) of the distorted image 604. The image processing component(s) 602 may output a corrected image 608 (e.g., image data associated with a corrected image). In some examples, the corrected image 608 may be distorted less than the distorted image 604, as the image processing component(s) 602 have corrected and/or compensated for the distortion effect(s) of the distorted image 604.”); and providing at least the fused instance to a vehicle controller (see at least Benemann [0019] Furthermore, the computing system may associate (e.g., fuse) image data with LIDAR data based at least in part on the association of the portion of the LIDAR data with the portion of the scan line, producing associated data. The associated data may be used as an input for controlling movement of the vehicle in some examples.” See also [0047] “In some examples, the sensor association component(s) 418 may associate (e.g., fuse) a portion of the image data with a portion of the LIDAR data to produce associated data 420. For example, the association may be based at least in part on the association between the portion of the pixel data and the LIDAR point(s). In some examples, the associated data 420 may be used to control movement of a vehicle (e.g., the vehicle 102 described with reference to FIG. 1).” See also [0085] “At 818, the process 800 may include controlling movement of a vehicle (e.g., an autonomous vehicle). For example, the movement of the vehicle may be controlled based at least in part on the associated data. As a non-limiting example, the associated data may indicate a location of an obstacle in the environment of the vehicle. A trajectory and/or a route that avoids a collision between the vehicle and the obstacle may be determined based at least in part on the associated data. The vehicle may be controlled to move along the trajectory and/or route.” The examiner notes that the term associated data is the fused data as discloses in [0019] and [0047].). Regarding claim 17, Benemann discloses the method of claim 16, further comprising collecting a plurality of portions of image data and compiling the image data before fusing the LiDAR data with the image data (see at least Benemann [0016] “This disclosure is directed to techniques for adding time data to portions of an image at capture (e.g., scan lines of an image frame during a rolling shutter image capture).” [0084] “At 816, the process 800 may include associating (e.g., fusing) at least part of the first data with at least part of the second data to produce associated data. For example, associating the data may be based at least in part on the association between the measurement and the scan line. 
In some examples, at least part of the first data may be associated with at least part of the second data based at least in part on a determination that a time difference between the first time data and the second time data satisfies a threshold time difference.” The examiner notes “that at least one part” implies a plurality of portions can be used. See also [0071] “According to some examples, the image processing component(s) 602 may receive, as input, distance data 606 associated with one or more scan lines of the rolling shutter image capture. As a non-limiting example, the distance data 606 may be associated with LIDAR data captured by a LIDAR sensor. The LIDAR sensor may capture LIDAR data associated with the object, and the LIDAR data may include distance data 606 representing a distance between the LIDAR sensor and the object at a given time. The distance data 606 may be used to determine a distance between the image sensor and the object at a given time. The image processing component(s) 602 may modify, based at least in part on the distance data 606 and/or time data added to one or more of the scan lines of the image frame associated with the distorted image 604, the distorted image 604 to correct and/or compensate for the distortion effect(s) of the distorted image 604. The image processing component(s) 602 may output a corrected image 608 (e.g., image data associated with a corrected image). In some examples, the corrected image 608 may be distorted less than the distorted image 604, as the image processing component(s) 602 have corrected and/or compensated for the distortion effect(s) of the distorted image 604.” See also [0070] ). Regarding claim 18, Benemann discloses the method of claim 17, wherein the plurality of portions of image data are selected from a plurality of frames of image data (see at least Benemann [0070] “In some examples, the image processing component(s) 602 may be configured to perform image processing on images. In example 600, the image processing component(s) 602 receives, as input, a distorted image 604 (e.g., image data associated with a distorted image). In some examples, the distorted image 604 may be associated with an image frame having time data added with individual scan lines, e.g., by the timestamp component(s) 110, which may also be useful to perform image rectification, sensor calibration, cross-modal calibration, 3D reconstruction, tracking, feature extraction, multi-modal sensor fusion, and the like. In various examples, the distorted image 604 may include one or more distortion effects, e.g., distortion effects caused by motion associated with a rolling shutter image capture. As non-limiting examples, the distortion effect(s) may include wobble, skew, spatial aliasing, and/or temporal aliasing. The distorted image 604 may include the distortion effect(s) because the image data may be obtained via a rolling shutter image capture of an object, and the rolling shutter image capture may be performed by a moving image sensor. That is, each of the scan lines may be associated with a respective portion of the object being imaged at one or more respective distances (e.g., in view of relative motion between the image sensor and the object). See also [0024] “The computing system of the vehicle, for example, may generate a more accurate representation of the environment of the vehicle by making such associations using time data associated with individual scan lines as compared to using time data associated with an image frame as a whole. 
Consequently, the computing system of the vehicle may be able to improve its detection of objects (e.g., obstacles) and its trajectory and/or route planning, e.g., to control movement of the vehicle to avoid colliding with obstacles.” See also [0016], [0071] and [0084] as cited above with respect to claim 17.) Regarding claim 19, Benemann discloses a device for controlling an agricultural vehicle (see at least Benemann [0029] “In some examples, the vehicle 102 may be an automobile having four wheels and respective tires for each of the wheels. Other types and configurations of vehicles are contemplated, such as, for example, vans, sport utility vehicles, cross-over vehicles, trucks, buses, agricultural vehicles, and construction vehicles. The vehicle 102 may be powered by one or more internal combustion engines, one or more electric motors, hydrogen power, or any combination thereof.”), the device comprising: a camera sensor having a camera field of view (CFOV), the camera sensor configured to be operably coupled to an agricultural vehicle (see at least Benemann Figure 1, sensors 106 including camera. See also Figure 5A, first field of view 508 from image sensor 502. See also [0105] “In the illustrated example, the vehicle 1102 is an autonomous vehicle; however, the vehicle 1102 could be any other type of vehicle, or any other system having at least an image capture device (e.g., a camera enabled smartphone). [0117] “As another example, the camera sensors can include multiple cameras disposed at various locations about the exterior and/or interior of the vehicle 1102.” See also [0049] “In some examples, the example 500a may include an image sensor 502, a LIDAR sensor 504, and/or a computing system 506 (e.g., the vehicle computing system 108 described with reference to FIG. 1). The image sensor 502 may have a first field of view 508. The LIDAR sensor 504 may have a second field of view 510. In various examples, a field of view may be associated with the portion of the environment sensed by a sensor at a given time. In some examples, the computing system 506 may include sensor synchronization component(s) 512, synchronization condition(s) 514, timestamp component(s) 110, and/or time data 112.”See also [0051-0052]) a light detection and ranging (LiDAR) sensor having a LiDAR field of view (LFOV), the LiDAR sensor configured to be operably coupled to the agricultural vehicle (see at least Benemann Figure 1, sensors 106 including LIDAR. See also Figure 5A, second field of view 508 from LIDAR sensor 504. [0117] “For instance, the LIDAR sensors can include individual LIDAR sensors located at the corners, front, back, sides, and/or top of the vehicle 1102.” See also [0049] “In some examples, the example 500a may include an image sensor 502, a LIDAR sensor 504, and/or a computing system 506 (e.g., the vehicle computing system 108 described with reference to FIG. 1). The image sensor 502 may have a first field of view 508. The LIDAR sensor 504 may have a second field of view 510. In various examples, a field of view may be associated with the portion of the environment sensed by a sensor at a given time. In some examples, the computing system 506 may include sensor synchronization component(s) 512, synchronization condition(s) 514, timestamp component(s) 110, and/or time data 112.”See also [0051-0052]); and an imaging controller in data communication with the camera sensor and the LiDAR sensor(see at least Figure 5A, computing system 506. 
See at least [0049] “In some examples, the computing system 506 may include sensor synchronization component(s) 512, synchronization condition(s) 514, timestamp component(s) 110, and/or time data 112.” See also “[0069] According to some examples, the example 600 may include one or more image processing components 602. In some examples, the image processing component(s) 602 may reside in the vehicle computing system 108 (described herein with reference to FIG. 1), the computing system 506 (described herein with reference to FIG. 5A), the vehicle computing device 1104 (described herein with reference to FIG. 11), and/or the computing devices 1140 (described herein with reference to FIG. 11).” See also [0110] “ In at least one example, the vehicle computing device 1104 can include one or more system controllers 1126, which can be configured to control steering, propulsion, braking, safety, emitters, communication, and other systems of the vehicle 1102. These system controller(s) 1126 can communicate with and/or control corresponding systems of the drive module(s) 1114 and/or other components of the vehicle 1102.”), the imaging controller comprising: at least one processor (see at least [0126-0127] “The computing device(s) 1140 can include processor(s) 1144 and a memory 1146 storing a maps(s) component 1148, the timestamp component(s) 110, the sensor association component(s) 418, the sensor synchronization component(s) 512, and/or the image processing component(s) 602…[0127] The processor(s) 1116 of the vehicle 1102 and the processor(s) 1144 of the computing device(s) 1140 can be any suitable processor capable of executing instructions to process data and perform operations as described herein.”); and at least one non-transitory computer-readable storage medium having instructions store thereon that, when executed by the at least one processor cause the imaging controller to (see at least [0127-0128] “0127] The processor(s) 1116 of the vehicle 1102 and the processor(s) 1144 of the computing device(s) 1140 can be any suitable processor capable of executing instructions to process data and perform operations as described herein…t 0128] Memory 1118 and 1146 are examples of non-transitory computer-readable media. The memory 1118 and 1146 can store an operating system and one or more software applications, instructions, programs, and/or data to implement the methods described herein and the functions attributed to the various systems.): receive LiDAR data from the LiDAR sensor, the LiDAR data including an angular value within the LFOV (see at least Benemann [0019], “In some examples, a LIDAR sensor of the vehicle may capture LIDAR data (e.g., LIDAR points) within a second field of view of the LIDAR sensor which may at least partially overlap with the first field of view of the image sensor.” See also [0023] “Furthermore, the computing system may determine a second orientation of the second field of view and/or a second pose associated with the LIDAR sensor. In some examples, the respective orientations and/or the respective poses associated with the image sensor and/or the LIDAR sensor may be tracked. According to some examples, the orientations and/or the poses may be tracked relative to one another. 
The computing system may use field of view orientation information and/or pose information as an input for causing the image sensor to initiate the rolling shutter image capture of the first field of view, e.g., such that at least a first portion of the first field of view (associated with the image sensor) overlaps at least a second portion of the second field of view (associated with the LIDAR sensor) in accordance with the synchronization condition(s). …See also [0050-0054] for discussion regarding angular value of LFOV “[0050] According to some examples, the second field of view 510 of the LIDAR sensor 504 may move relative to the first field of view 508 of the image sensor 502. In some examples, the second field of view 510 may be rotatable (e.g., about one or more axes). In some non-limiting examples, the second field of view 510 may be rotatable 360 degrees. In some examples, the amount of rotation may be less than 360 degrees.”. See also [0054] “In some examples, the sensor synchronization component(s) 512 may determine a first pose of the image sensor 502. The first pose may include a first position and/or a first orientation (e.g., an x-, y-, z-position, roll, pitch, and/or yaw) associated with the image sensor 502. Furthermore, the sensor synchronization component(s) 512 may determine a second pose of the LIDAR sensor 504. The second pose may include a second position and/or a second orientation (e.g., an x-, y-, z-position, roll, pitch, and/or yaw) associated with the LIDAR sensor 504. In some examples, the respective poses of the image sensor 502 and/or the LIDAR sensor 504 may be tracked. According to some examples, the first pose and the second pose may be tracked relative to one another. The sensor synchronization component(s) 512 may use the pose information (e.g., the first pose, the second pose, etc.) as an input for causing the image sensor 502 to initiate the rolling shutter image capture 518 of the first field of view 508, e.g., such that at least a first portion of the first field of view 508 overlaps at least a second portion of the second field of view 510 in accordance with the synchronization condition(s) 514.”) determine a relative angular value of the LiDAR data with respect to the CFOV (see at least Benemann [0050-0054], [0054] “In some examples, the sensor synchronization component(s) 512 may determine a first pose of the image sensor 502. The first pose may include a first position and/or a first orientation (e.g., an x-, y-, z-position, roll, pitch, and/or yaw) associated with the image sensor 502. Furthermore, the sensor synchronization component(s) 512 may determine a second pose of the LIDAR sensor 504. The second pose may include a second position and/or a second orientation (e.g., an x-, y-, z-position, roll, pitch, and/or yaw) associated with the LIDAR sensor 504. In some examples, the respective poses of the image sensor 502 and/or the LIDAR sensor 504 may be tracked. According to some examples, the first pose and the second pose may be tracked relative to one another. The sensor synchronization component(s) 512 may use the pose information (e.g., the first pose, the second pose, etc.) 
as an input for causing the image sensor 502 to initiate the rolling shutter image capture 518 of the first field of view 508, e.g., such that at least a first portion of the first field of view 508 overlaps at least a second portion of the second field of view 510 in accordance with the synchronization condition(s) 514.”); receive image data from the camera sensor (see at least Benemann [0017], “[0017] In some examples, an image sensor of a vehicle (e.g., an autonomous vehicle) may capture an image of a scene within a first field of view of the image sensor. In some examples, the image capture may be performed by scanning across the scene (e.g., vertically or horizontally) to capture a portion of the scene (e.g., a first scan line) before capturing another portion of the scene (e.g., a second scan line), and so on, such as in a rolling shutter image capture.” See also Figure 1, sensors 106 including camera. See also [0105] [0117] for showing the image sensor may be a camera.) segment the image data (see at least Benemann, Figure 1, scan line of image frame 116, See also [0016-0017] “This disclosure is directed to techniques for adding time data to portions of an image at capture (e.g., scan lines of an image frame during a rolling shutter image capture)…[0017] In some examples, an image sensor of a vehicle (e.g., an autonomous vehicle) may capture an image of a scene within a first field of view of the image sensor. In some examples, the image capture may be performed by scanning across the scene (e.g., vertically or horizontally) to capture a portion of the scene (e.g., a first scan line) before capturing another portion of the scene (e.g., a second scan line), and so on, such as in a rolling shutter image capture. The rolling shutter image capture may produce scan lines of an image frame. A scan line may include scan line data (e.g., pixel data) and end of line data. The end of line data may indicate an end of the scan line.” See also [0024] and [0046-0054] for more in depth discussion. See also [0107] “In some instances, the perception component 1122 can include functionality to perform object detection, segmentation, and/or classification. In some examples, the perception component 1122 can provide processed sensor data that indicates a presence of an entity that is proximate to the vehicle 1102 and/or a classification of the entity as an entity type (e.g., car, pedestrian, cyclist, animal, building, tree, road surface, curb, sidewalk, unknown, etc.)…”) fuse the LiDAR data and the image data based at least partially on the relative angular value (see at least Benemann [0019] “Furthermore, the computing system may associate (e.g., fuse) image data with LIDAR data based at least in part on the association of the portion of the LIDAR data with the portion of the scan line, producing associated data. The associated data may be used as an input for controlling movement of the vehicle in some examples.” [0024] The techniques discussed herein can improve a functioning of a computing device in a number of ways. For example, the techniques discussed herein may include adding time data to individual scan lines of an image frame, which may allow a computing device to accurately align (or otherwise associate) pixels of the image frame with LIDAR points, e.g., to achieve accurate multi-modal sensor fusion, sensor calibration, 3D reconstruction, multi-modal calibration, and the like. 
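For illustration only, the "relative angular value ... with respect to the CFOV" and the pixel-to-LiDAR-point alignment discussed in the passages quoted here might be sketched as follows. This is a minimal sketch under assumed values; the function names, the 90-degree CFOV width, and the 1920-pixel image width are hypothetical and not taken from Benemann or the claims.

def relative_angle_deg(lidar_azimuth_deg, cfov_center_deg):
    # Angular value of the LiDAR beam expressed relative to the camera FOV
    # centerline, wrapped to the range [-180, 180).
    return (lidar_azimuth_deg - cfov_center_deg + 180.0) % 360.0 - 180.0

def column_for_angle(rel_angle_deg, cfov_width_deg=90.0, image_width_px=1920):
    # Maps the relative angular value to an image column so LiDAR points can be
    # aligned with pixels; returns None when the beam falls outside the CFOV.
    if abs(rel_angle_deg) > cfov_width_deg / 2.0:
        return None
    fraction = (rel_angle_deg + cfov_width_deg / 2.0) / cfov_width_deg
    return int(round(fraction * (image_width_px - 1)))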
[0047] As a non-limiting example, the sensor association component(s) 418 may associate, based at least in part on the time data of the scan line, at least a portion of the pixel data of the scan line with one or more LIDAR points 416 of the LIDAR data. In some cases, the association may indicate that the portion of the pixel data and the LIDAR point(s) were captured substantially contemporaneously (e.g., during a time period that satisfies a time threshold). In some examples, the sensor association component(s) 418 may associate (e.g., fuse) a portion of the image data with a portion of the LIDAR data to produce associated data 420. For example, the association may be based at least in part on the association between the portion of the pixel data and the LIDAR point(s). In some examples, the associated data 420 may be used to control movement of a vehicle (e.g., the vehicle 102 described with reference to FIG. 1). The examiner notes that the time threshold of Benemann is based on the relative angular value as discussed in [0046-0054].); create a fused instance including depth information (see at least Benemann [0019] and [0047] as cited above and [0071] “According to some examples, the image processing component(s) 602 may receive, as input, distance data 606 associated with one or more scan lines of the rolling shutter image capture. As a non-limiting example, the distance data 606 may be associated with LIDAR data captured by a LIDAR sensor. The LIDAR sensor may capture LIDAR data associated with the object, and the LIDAR data may include distance data 606 representing a distance between the LIDAR sensor and the object at a given time. The distance data 606 may be used to determine a distance between the image sensor and the object at a given time. The image processing component(s) 602 may modify, based at least in part on the distance data 606 and/or time data added to one or more of the scan lines of the image frame associated with the distorted image 604, the distorted image 604 to correct and/or compensate for the distortion effect(s) of the distorted image 604. The image processing component(s) 602 may output a corrected image 608 (e.g., image data associated with a corrected image). In some examples, the corrected image 608 may be distorted less than the distorted image 604, as the image processing component(s) 602 have corrected and/or compensated for the distortion effect(s) of the distorted image 604.”); and provide at least the fused instance to the vehicle controller (see at least Benemann [0019] Furthermore, the computing system may associate (e.g., fuse) image data with LIDAR data based at least in part on the association of the portion of the LIDAR data with the portion of the scan line, producing associated data. The associated data may be used as an input for controlling movement of the vehicle in some examples.” See also [0047] “In some examples, the sensor association component(s) 418 may associate (e.g., fuse) a portion of the image data with a portion of the LIDAR data to produce associated data 420. For example, the association may be based at least in part on the association between the portion of the pixel data and the LIDAR point(s). In some examples, the associated data 420 may be used to control movement of a vehicle (e.g., the vehicle 102 described with reference to FIG. 1).” See also [0085] “At 818, the process 800 may include controlling movement of a vehicle (e.g., an autonomous vehicle). 
For example, the movement of the vehicle may be controlled based at least in part on the associated data. As a non-limiting example, the associated data may indicate a location of an obstacle in the environment of the vehicle. A trajectory and/or a route that avoids a collision between the vehicle and the obstacle may be determined based at least in part on the associated data. The vehicle may be controlled to move along the trajectory and/or route.” The examiner notes that the term associated data is the fused data as discloses in [0019] and [0047].) Regarding claim 20, Benemann discloses the device of claim 19 wherein the imaging controller further comprises a communication interface configured to transmit the fused instance to a vehicle controller (see at least Figure 5A, computing system 506. See at least [0049] “In some examples, the computing system 506 may include sensor synchronization component(s) 512, synchronization condition(s) 514, timestamp component(s) 110, and/or time data 112.” See also “[0069] According to some examples, the example 600 may include one or more image processing components 602. In some examples, the image processing component(s) 602 may reside in the vehicle computing system 108 (described herein with reference to FIG. 1), the computing system 506 (described herein with reference to FIG. 5A), the vehicle computing device 1104 (described herein with reference to FIG. 11), and/or the computing devices 1140 (described herein with reference to FIG. 11).” See also [0110] “ In at least one example, the vehicle computing device 1104 can include one or more system controllers 1126, which can be configured to control steering, propulsion, braking, safety, emitters, communication, and other systems of the vehicle 1102. These system controller(s) 1126 can communicate with and/or control corresponding systems of the drive module(s) 1114 and/or other components of the vehicle 1102.”). Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. This application currently names joint inventors. 
In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention. Claim(s) 2 is/are rejected under 35 U.S.C. 103 as being unpatentable over Benemann in view of Christiansen et al. (GB2606740A, hereinafter “Christiansen”). Regarding claim 2, Benemann discloses the agricultural vehicle of claim 1, further comprising controlling the agricultural vehicle based at least partially on the fused instance (See at least Benemann [0029] “In some examples, the vehicle 102 may be an automobile having four wheels and respective tires for each of the wheels. Other types and configurations of vehicles are contemplated, such as, for example, vans, sport utility vehicles, cross-over vehicles, trucks, buses, agricultural vehicles, and construction vehicles.” [0024] “Consequently, the computing system of the vehicle may be able to improve its detection of objects (e.g., obstacles) and its trajectory and/or route planning, e.g., to control movement of the vehicle to avoid colliding with obstacles.”). However, Benemann does not explicitly disclose that the agricultural vehicle includes at least one agricultural implement configured to manipulate an agricultural product, and wherein the vehicle controller is further configured to provide an implement command to the agricultural implement based at least partially on the fused instance. Christiansen teaches the agricultural vehicle of claim 1, further comprising at least one agricultural implement configured to manipulate an agricultural product, and wherein the vehicle controller is further configured to provide an implement command to the agricultural implement based at least partially on the fused instance (see at least Christiansen pages 11-12 “The sensor data (image data) received from the camera 30b is analysed by the processor 104 to identify residue material pieces within the image data and using the position information obtained through analysis of the sensor data from LIDAR unit 30a, a characteristic of each residue material piece is determined. Here, the characteristic determined by the processor 104 comprises a length of residue material piece(s) identified in the sensor data, determined in the manner described hereinbelow. Based on the determined characteristic, one or more control signals are output from the processor 104 for controlling operation of one or more systems of the agricultural machine in dependence on the determined characteristic….Output 108 of the controller 102 is operatively coupled to the chopper assembly 29, and is used to output control signals 113 generated by processor 104 for controlling operation of the chopper assembly 29 in dependence on the determined characteristic(s) of the residue material piece(s).
As discussed below, this can involve controlling an operating speed and/or frequency of the chopper assembly 29 to control the length(s) to which the residue material is cut when passing through the chopper assembly 29….In an extension of the control system 100 and combine 10 embodying the control system 100, the processor 104 may be configured to generate and output one or more control signals for controlling one or more further operating parameters of the combine 10 or one or more systems thereof. For example, this may include controlling a forward speed of the combine 10, or controlling an operating speed/parameter of one or more sub-systems of the combine, including the header 12, conveyors 14, crop processing / cleaning apparatus, etc. Each of these controls may affect the speed and/or density of flow of residue material supplied to the chopper assembly 29 and hence a cutting length obtained by the chopper assembly 29.”). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Benemann with the teaching of Christiansen to use the fused instance of the object to control an implement of an agricultural vehicle with a reasonable expectation of success, because, as Christiansen teaches, the fused sensor information allows the system to control the chopper to cut to a desired length and to better aid in uniformity of the spread material (see at least Christiansen page 5 “The processor 104 may determine from the multiple residue material pieces, a maximum observed length of residue material pieces. If this length is greater than expected or desired, one or more control measures may be taken, e.g. by controlling a feed rate of material into the chopper assembly 29 (here slowing / reducing the feed rate) to ensure the material is engaged by the chopper assembly 29 for longer to provide greater cutting.” See also page 1.) Claim(s) 6 is/are rejected under 35 U.S.C. 103 as being unpatentable over Benemann in view of Yin et al. (US-20230388481-A1, hereinafter “Yin”). Regarding claim 6, Benemann discloses the agricultural vehicle of claim 1, wherein the CFOV has a lateral width and the LFOV has a lateral width, and further discloses maximizing the overlap of the respective fields of view (see at least Benemann [0019] “In some examples, a LIDAR sensor of the vehicle may capture LIDAR data (e.g., LIDAR points) within a second field of view of the LIDAR sensor which may at least partially overlap with the first field of view of the image sensor.” See also [0022-0023] “As non-limiting examples, the synchronization condition(s) may include an amount of overlap between the first field of view (associated with the image sensor) and the second field of view (associated with the LIDAR sensor), and/or an overlap between a first particular portion of the first field of view and a second particular portion of the second field of view, etc. As a non-limiting example, capturing of image data may be triggered such that the center of the image is captured substantially simultaneously with a spinning LIDAR sensor aligning with a field of view of the image sensor….
The computing system may use field of view orientation information and/or pose information as an input for causing the image sensor to initiate the rolling shutter image capture of the first field of view, e.g., such that at least a first portion of the first field of view (associated with the image sensor) overlaps at least a second portion of the second field of view (associated with the LIDAR sensor) in accordance with the synchronization condition(s). In at least some examples, such initialization may be timed so as to optimize (e.g., maximize) an overlap of the respective fields of view.” See also [0051] “As non-limiting examples, the synchronization condition(s) 514 may include an amount of overlap between the first field of view 508 and the second field of view 510, and/or an overlap between a first particular portion of the first field of view 508 and a second particular portion of the second field of view 510, etc. In at least some examples, the synchronization may be such that a capturing the center scan line in an image corresponds with LIDAR data capture substantially directed to a field of view of the image sensor. In such an example, the amount of LIDAR data which is associated with the same region of an environment associated with the image data is optimized (e.g., maximized).” The examiner notes that maximizing this region would include the lateral widths of the fields of view being equal.) However, Benemann does not explicitly disclose wherein a lateral width of the CFOV and a lateral width of the LFOV are equal. Yin teaches wherein a lateral width of the CFOV and a lateral width of the LFOV are equal (see at least Yin Figure 5B and [0102] “The LiDAR field of view 512 may correspond to a sensing area of the LiDAR sensor 202b at the phase angle 510. In certain cases, the sensing area can be a maximum sensing area of the LiDAR sensor 202b. Depending on the camera and LiDAR systems used, the camera field of view 508 may be the same or different than the LiDAR field of view 512. Depending on the arrangement of the camera and LiDAR systems used, the camera center line 506 may be aligned with the phase angle 510 or angled at an offset.”). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Benemann with the teaching of Yin to use the same FOV for the camera and the LiDAR with a reasonable expectation of success, because, as Yin teaches, the camera field of view and the LiDAR field of view can be the same as each other or different, and thus there are a finite number of identified, predictable potential solutions. That is, it is obvious to try either the same or a different FOV, as taught by Yin, to provide the maximum overlap as desired by Benemann. One of ordinary skill in the art could have pursued the known potential options (a FOV being the same or different) with a reasonable expectation of success. If this leads to anticipated success, it is likely the product not of innovation but of ordinary skill and common sense. Claim(s) 8 is/are rejected under 35 U.S.C. 103 as being unpatentable over Benemann in view of Li et al. (US-20190180467-A1, hereinafter “Li”).
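For illustration only, the frame-rate-based synchronization that the claim 8 rejection below attributes to Li (nearest-in-time matching of frames from sensors running at different rates) might be sketched as follows. This is a minimal sketch; the function names and the assumption that the camera is the slowest sensor are illustrative only and are not taken from Li, Benemann, or the claims.

def nearest(frames, t_s):
    # frames: list of (timestamp_s, data); returns the frame closest in time to t_s.
    return min(frames, key=lambda frame: abs(frame[0] - t_s))

def synchronize(camera_frames, lidar_frames):
    # The slowest sensor drives the pairing (the camera, in Li's example): each
    # camera frame is grouped with the LiDAR frame nearest to it in time.
    return [(cam, nearest(lidar_frames, cam[0])) for cam in camera_frames]

Matching against the slowest sensor's timestamps follows the idea in Li's FIG. 16 discussion, where frames captured at different rates drift out of alignment after the first simultaneous capture.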
Regarding claim 8, Benemann discloses the agricultural vehicle of claim 1, wherein the instructions, when executed by the at least one processor, further cause the imaging controller to cause the camera sensor to collect the image data based at least partially on a frame [rate] of the LiDAR sensor (See at least Benemann [0048] FIG. 5A is a schematic diagram illustrating an example 500a of synchronizing rolling shutter image capture of an image sensor field of view with LIDAR data capture of a LIDAR sensor field of view, in accordance with examples of the disclosure. In some examples, the example 500a may include one or multiple features, components, and/or functionality of examples described herein with reference to FIGS. 1-4 and 5B-11.” See also [0051] “ In various examples, the sensor synchronization component(s) 512 may facilitate the synchronization of image data capture (e.g., by the image sensor 502) with LIDAR data capture (e.g., by the LIDAR sensor 504). For example, the sensor synchronization component(s) 512 may trigger the image sensor 502 to perform a rolling shutter image capture 518 of a scene during a time period in which the LIDAR sensor 504 is capturing LIDAR data of at least a portion of the scene. By synchronizing sensor data capture in this manner, the computing system 506 may be able to accurately associate (e.g., temporally and/or spatially) image data with LIDAR data…”) However, Benemann does not explicitly state that the synchronization is based on the frame rate of the LIDAR sensor. Li teaches that the synchronization is based on the frame rate of the LIDAR sensor (see at least Figure 16 and [0091] “In some embodiments, the LiDAR image and the radar image may be fused to generate a compensated image. Detailed methods regarding the fusion of the LiDAR image and the radar image may be found elsewhere in present disclosure (See, e.g., FIG. 15 and the descriptions thereof). In some embodiments, the camera 410, the LiDAR device 420 and the radar device 430 may work concurrently or individually. In a case that they are working individually at different time frame rates, a synchronization method may be employed. Detailed method regarding the synchronization of the frames of the camera 410, the LiDAR device 420 and/or the radar device 430 may be found elsewhere in the present disclosure (See e.g., FIG. 16 and the descriptions thereof).” and [0153] FIG. 16 is a schematic diagram of a synchronization between camera, LiDAR device, and/or radar device according to some embodiments of the present disclosure. As shown in FIG. 16, the frame rates of a camera (e.g., camera 410), a LiDAR device (e.g., LiDAR device 420) and a radar device (e.g., radar device 430) are different. Assuming that the camera, the LiDAR device and the radar device start to work simultaneously at a first time frame T1, a camera image, a LiDAR point cloud image, and a radar image may be generated roughly at the same time (e.g., synchronized). However, the subsequent images are not synchronized due to the different frame rates. In some embodiments, a device with slowest frame rate among the camera, the LiDAR device, and the radar device may be determined (In the example of FIG. 16, it's the camera). The control unit 150 may record each of the time frames of the camera images that camera captured and may search for other LiDAR images and radar images that are close to the time of each of the time frames of the camera images. 
For each of the time frames of the camera images, a corresponding LiDAR image and a corresponding radar image may be obtained. For example, a camera image 1610 is obtained at T2, the control unit 150 may search for a LiDAR image and a radar image that are closest to T2 (e.g., the LiDAR image 1620 and radar image 1630). The camera image and the corresponding LiDAR image and radar image are extracted as a set. The three images in a set is assumed to be obtained at the same time and synchronized.”). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Benemann with the teaching of Li to cause the camera to collect the image data based on the frame rate of the LIDAR sensor, with a reasonable expectation of success, because, as Li teaches, this allows for synchronization of the images for fusion, resulting in more accurate information for navigation (see [0003] and [0152-0153]). (A short timestamp-matching sketch illustrating this frame-rate synchronization appears after the office action text below.)

Claim(s) 9-10 is/are rejected under 35 U.S.C. 103 as being unpatentable over Benemann in view of Zink et al. (US-20180173992-A1, hereinafter “Zink”). Regarding claims 9 and 10, Benemann discloses the agricultural vehicle of claim 1, wherein the camera sensor includes [a photoreceptor array], and the image data is collected from a portion of the [photoreceptor array] that is less than an entire area of [the photoreceptor array], wherein the portion is based at least partially on an angular position of the LiDAR sensor during exposure of the camera sensor (see at least Benemann [0017], “[0017] In some examples, an image sensor of a vehicle (e.g., an autonomous vehicle) may capture an image of a scene within a first field of view of the image sensor. In some examples, the image capture may be performed by scanning across the scene (e.g., vertically or horizontally) to capture a portion of the scene (e.g., a first scan line) before capturing another portion of the scene (e.g., a second scan line), and so on, such as in a rolling shutter image capture.” See also Figure 1, sensors 106 including camera. See also [0105] [0117] for showing the image sensor may be a camera. See also [0050-0054] “[0052] In some examples, the sensor synchronization component(s) 512 may determine a first orientation of the first field of view 508 of the image sensor 502. Furthermore, the sensor synchronization component(s) 512 may determine a second orientation of the second field of view 510 of the LIDAR sensor 504. In some examples, the respective orientations of the image sensor 502 and/or the LIDAR sensor 504 may be tracked. According to some examples, the first orientation and the second orientation may be tracked relative to one another. The sensor synchronization component(s) 512 may use field of view orientation information (e.g., the first orientation, the second orientation, etc.) as an input for causing the image sensor 502 to initiate the rolling shutter image capture 518 of the first field of view 508, e.g., such that at least a first portion of the first field of view 508 overlaps at least a second portion of the second field of view 510 in accordance with the synchronization condition(s) 514. In various examples, the second field of view 510 may move relative to the first field of view 508 during the rolling shutter image capture 518…[0053] According to some implementations, the sensor synchronization component(s) 512 may time the rolling shutter image capture 518 such that at least a portion of the scan lines (e.g., scan line 1, scan line 2,
. . . scan line n) of an image frame associated with a scene are captured while the LIDAR sensor 504 is capturing LIDAR data associated with at least a portion of the scene (e.g., at least a portion of the scene is within the second field of view 510 of the LIDAR sensor 504). In some examples, the sensor synchronization component(s) 512 may time the rolling shutter image capture 518 such that a majority of the scan lines of the image frame are captured during a time period associated with the LIDAR sensor 504 capturing the second field of view 510 of at least a portion of the scene, e.g., as indicated in FIG. 5A…” See at least [0046-0054] for the angular value.). Benemann does not explicitly state that there is a photoreceptor array; however, Zink teaches that a photoreceptor array is conventional in a camera (see at least Zink Figure 3 and [0069] “FIG. 3 further illustrates the limitations of conventional video camera 320 and is provided to explain its operation and limitations. As shown in FIG. 3, a video camera 320 senses rays of light 305, 310 reflected from an object 300 in an environment. A lens 315 focuses the rays of light onto a photoreceptor array 325 and the camera 320 may include filters for filtering the light as well. The photoreceptor array 325 converts the light energy into a spatial digital image. As a result of camera operation, a temporal sequence of images 330 representing changes in the environment is output. This is the conventional operation of video cameras in general which results in a temporal sequence of spatial images with no mechanism to differentiate subsequent analysis of image data included in that sequence of frames.”) Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Benemann with the camera of Zink having photoreceptors, because, as Zink teaches, the camera is a conventional camera for object detection (see Zink abstract and Figure 6). (A short readout-window sketch illustrating this portion-based capture appears after the office action text below.)

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Christiansen US-20230113645-A1 and Christiansen US-20230114174-A1 are cited for showing claim limitations and would be available under 102(a)(1) unless an exception applied. Ichida US-20240140450-A1 is cited for showing the CFOV and the LFOV being equal. Martin US-20250123397-A1, Gu US-12360247-B1, Omar US-20240312058-A1 and Banerjee US-20200174130-A1 are all cited for showing lidar and camera fusion, including teaching determining the relative angle of the lidar and segmentation, and are relevant to at least the independent claims.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JENNIFER M. ANDA, whose telephone number is (571)272-5042. The examiner can normally be reached Monday-Friday 8:30 am-5pm MST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aniss Chad, can be reached on (571)270-3832. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JENNIFER M ANDA/
Examiner, Art Unit 3662
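The Benemann/Yin combination above turns on synchronizing a rolling-shutter camera exposure with a sweeping LiDAR field of view so that the overlap of the two fields of view is maximized. The following is a minimal sketch of that trigger-timing idea, assuming a spinning LiDAR with a constant angular rate and a rolling-shutter camera with a known readout time; the function names and the simplified geometry are illustrative assumptions, not taken from Benemann, Yin, or the application.

# Illustrative sketch only; not from the cited references or the application.
def camera_trigger_delay(lidar_azimuth_deg: float,
                         lidar_rate_deg_per_s: float,
                         camera_center_azimuth_deg: float,
                         readout_time_s: float) -> float:
    """Seconds to wait before starting a rolling-shutter capture so that the
    center scan line is exposed as the LiDAR sweep crosses the camera's
    optical axis (one way to satisfy an overlap-style synchronization
    condition)."""
    sweep_deg = (camera_center_azimuth_deg - lidar_azimuth_deg) % 360.0
    time_to_center_s = sweep_deg / lidar_rate_deg_per_s
    # Start half a readout early so the middle row, not the first row, lines up.
    return max(0.0, time_to_center_s - readout_time_s / 2.0)

def lateral_overlap_deg(camera_fov_deg: float, lidar_fov_deg: float) -> float:
    """Overlap of two laterally centered fields of view; it cannot exceed the
    narrower width, so equal lateral widths waste neither sensor's coverage."""
    return min(camera_fov_deg, lidar_fov_deg)

# Example: LiDAR spinning at 10 Hz (3600 deg/s), currently at 30 deg azimuth;
# camera axis at 90 deg azimuth, 20 ms full-frame readout.
delay_s = camera_trigger_delay(30.0, 3600.0, 90.0, 0.020)  # about 6.7 ms
overlap_deg = lateral_overlap_deg(90.0, 90.0)              # 90.0 deg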
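The claim 8 rejection relies on Li's frame-rate-based synchronization (Li, FIG. 16): take the slowest sensor's frames and pair each one with the temporally closest frames from the other sensors. Below is a minimal sketch of that nearest-timestamp association, reduced to a camera and a LiDAR; the helper names and the two-sensor simplification are assumptions for illustration only.

# Illustrative sketch only; a generic nearest-timestamp pairing in the spirit
# of Li FIG. 16. None of these names come from the cited references.
from bisect import bisect_left
from typing import Sequence

def nearest_index(timestamps: Sequence[float], t: float) -> int:
    """Index of the timestamp closest to t (timestamps must be sorted)."""
    i = bisect_left(timestamps, t)
    if i == 0:
        return 0
    if i == len(timestamps):
        return len(timestamps) - 1
    return i if timestamps[i] - t < t - timestamps[i - 1] else i - 1

def associate_frames(camera_ts: Sequence[float],
                     lidar_ts: Sequence[float]) -> list[tuple[float, float]]:
    """For each frame of the slower sensor (assumed here to be the camera),
    pick the LiDAR frame closest in time and treat the pair as synchronized."""
    return [(t, lidar_ts[nearest_index(lidar_ts, t)]) for t in camera_ts]

# Example: a 10 Hz camera matched against a 20 Hz LiDAR.
pairs = associate_frames([0.0, 0.1, 0.2],
                         [0.00, 0.05, 0.10, 0.15, 0.20, 0.25])
# pairs == [(0.0, 0.0), (0.1, 0.1), (0.2, 0.2)]

Using the slowest sensor as the reference keeps every reference frame matched to a fresher frame from the faster sensors, which is the ordering Li describes.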
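For claims 9 and 10, the cited combination reads image data from only a portion of the photoreceptor array, with the portion selected from the LiDAR's angular position during exposure. The sketch below shows one plausible way such a readout window could be chosen, assuming a simple linear angle-to-column mapping; none of the names or the mapping come from Benemann, Zink, or the application.

# Illustrative sketch only; maps a LiDAR azimuth, expressed relative to the
# camera's optical axis, to a column window of the photoreceptor array so that
# only the portion the LiDAR is currently sweeping is read out. The linear
# mapping and all names are assumptions, not from the references.
def readout_window(rel_azimuth_deg: float,
                   camera_hfov_deg: float,
                   sensor_width_px: int,
                   window_px: int) -> tuple[int, int]:
    """Return (first_col, last_col) of a readout window centered on the pixel
    column onto which the LiDAR beam's azimuth projects."""
    if abs(rel_azimuth_deg) > camera_hfov_deg / 2.0:
        raise ValueError("LiDAR azimuth is outside the camera field of view")
    # Linear angle-to-column mapping across the horizontal field of view.
    center_col = (rel_azimuth_deg / camera_hfov_deg + 0.5) * (sensor_width_px - 1)
    first = max(0, int(round(center_col)) - window_px // 2)
    last = min(sensor_width_px - 1, first + window_px - 1)
    return first, last

# Example: 90-degree horizontal FOV, 1920-column array, LiDAR 10 degrees to the
# right of the optical axis, 256-column window.
cols = readout_window(10.0, 90.0, 1920, 256)  # (1045, 1300)

In practice the window would also account for lens distortion and which rolling-shutter rows are being exposed; the linear mapping here only makes the portion-versus-full-array distinction concrete.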

Prosecution Timeline

Oct 21, 2024
Application Filed
Jan 09, 2026
Non-Final Rejection — §101, §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602956
MONITOR PERFORMANCE OF ELECTRIC VEHICLE COMPONENTS USING AUDIO ANALYSIS
2y 5m to grant Granted Apr 14, 2026
Patent 12600182
SELF PROPELLED TRAILER SYSTEMS
2y 5m to grant Granted Apr 14, 2026
Patent 12600179
METHOD OF DETERMINING A LEFT-OR-RIGHT SIDE INSTALLATION POSITION OF A TRAILER WHEEL
2y 5m to grant Granted Apr 14, 2026
Patent 12602992
DYNAMIC SPEED LIMIT FOR VEHICLES AND AUTONOMOUS VEHICLES
2y 5m to grant Granted Apr 14, 2026
Patent 12602060
INTELLIGENT OBSTACLE DETECTION SYSTEM FOR UNMANNED MINE VEHICLE
2y 5m to grant Granted Apr 14, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.
Powered by AI — typically takes 5-10 seconds

Prosecution Projections

1-2
Expected OA Rounds
71%
Grant Probability
99%
With Interview (+29.3%)
3y 3m
Median Time to Grant
Low
PTA Risk
Based on 134 resolved cases by this examiner. Grant probability derived from career allow rate.
