Prosecution Insights
Last updated: April 19, 2026
Application No. 18/092,866

SYSTEMS AND METHODS FOR SCANNING A REGION OF INTEREST USING A LIGHT DETECTION AND RANGING SCANNER

Non-Final OA: §102, §103, §112
Filed: Jan 03, 2023
Examiner: NAPIER, JAMES WILBURN
Art Unit: 3645
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Innovusion, Inc.
OA Round: 1 (Non-Final)
Grant Probability: Favorable
OA Rounds: 1-2
To Grant: 2y 11m

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 0 resolved; -52.0% vs TC avg)
Interview Lift: +0.0% (minimal; based on resolved cases with interview)
Avg Prosecution (typical timeline): 2y 11m
Total Applications (career history): 13 across all art units; 13 currently pending

Statute-Specific Performance

§103: 55.0% (+15.0% vs TC avg)
§102: 20.0% (-20.0% vs TC avg)
§112: 17.5% (-22.5% vs TC avg)
Tech Center averages are estimates; figures based on career data from 0 resolved cases.

Office Action

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Status of the Claims

1. This action is in response to the applicant's filing on January 3, 2023. Claims 1-28 are pending.

Drawings

2. The drawings are objected to as failing to comply with 37 CFR 1.84(p)(4) because reference character 713 has been used to designate two different ROIs in Fig. 7A. Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either "Replacement Sheet" or "New Sheet" pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Claim Rejections – 35 USC § 102

3. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

4. Claims 1-2, 4-5, 11, 13, 15, 17, 20-24, & 28 are rejected under 35 U.S.C. 102 as being anticipated by Agrawal et al. (US 20210099643 A1), hereinafter Agrawal.

5. Regarding Claim 1: Agrawal teaches, A LiDAR scanning system having reconfigurable regions-of-interest (ROIs), comprising: a LiDAR scanner configured to scan a current set of regions-of-interest (ROIs) within a field-of-view (FOV), ([0027]: The sensor suite 150 preferably includes localization and driving sensors; e.g., photodetectors, cameras, RADAR, SONAR, LIDAR). Agrawal further teaches, ([0001]: The present disclosure relates generally to autonomous vehicles (AVs) and, more specifically, to devices and methods for intent-based dynamic change of resolution, region of interest (ROI)). Agrawal teaches, a LiDAR perception sub-system coupled to the LiDAR scanner, the LiDAR perception sub-system including one or more processors, a memory device, and processor-executable instructions stored in the memory device, ([Abstract]: The present disclosure provides perception system for a vehicle that includes a plurality of imaging devices). Agrawal further teaches, ([0039]: "Any suitable processors (inclusive of digital signal processors, microprocessors, supporting chipsets, etc.), computer-readable non-transitory memory elements, etc. can be suitably coupled to the board based on particular configuration needs". "The functionalities described herein may be implemented in emulation form as software or firmware running within one or more configurable (e.g., programmable) elements arranged in a structure that supports these functions. The software or firmware providing the emulation may be provided on non-transitory computer-readable storage medium comprising instructions to allow a processor to carry out those functionalities"). Agrawal teaches, instructions for obtaining sensor data provided at least by the LiDAR scanner; obtaining one or more predefined perception policies; determining whether an ROI reconfiguration request is provided, the ROI reconfiguration request being provided based on a vehicle perception decision, ([Abstract]: "A perception filter for receiving the images produced by the imaging devices". "The perception filter determines compute resource priority instructions"). Agrawal further teaches, ([0019]: This resource allocation can be changed at real-time depending upon the relative importance of the resource data as dictated by the ROI(s) based on autonomous vehicle state and intent. For example, if the autonomous vehicle is changing lanes at highway speed, the ROI will be in the direction of the intended lane change (e.g., left or right) at a substantial distance. In accordance with features of embodiments described herein, the perception system may allocate greater compute resources to the camera and/or other sensor(s) directed to the identified ROI). Agrawal teaches, in accordance with an ROI reconfiguration request being provided, determining a next set of ROIs for the LiDAR scanner to scan based on a current set of ROIs, and the one or more predefined perception policies, and the ROI reconfiguration request, ([0031]: Additionally and/or alternatively, in step 320, the perception system (e.g., the perception filter) determines compute resource priority based on ROI and/or current state and intent of the autonomous vehicle and provides compute resource priority instructions to a compute module of the perception system. In step 325, the compute module implements the compute resource priority instructions by allocating resources to the imagers and sensors comprising the sensor suite in accordance with the priority instructions).

6. Regarding Claim 2: Agrawal teaches, the one or more predefined perception policies comprise one or more predefined perceptions and one or more policies associated with the one or more predefined perceptions, ([0034]: Examples 1-4 listed in Table 1 are illustrated in and will be described in greater detail with reference to FIGS. 4A-4D. Referring to FIG. 4A, which illustrates example 1 from Table 1, a vehicle 400 is driving straight on a highway (relatively high speed) without planning to make a lane change. The image/sensor information provided by the perception filter to the perception module may include a high-resolution sensor crop from a region of the image frame that corresponds to the road section for long-range detection. Additionally, scaled information from other sensors may also be provided for low-range detection in other regions. The compute module may be instructed to prioritize compute resources for forward sensors to keep system latency low. Moreover, LIDAR and RADAR devices are instructed to densely scan the small field of view (FOV) in front of the vehicle corresponding to the region of interest (ROI) and to sparsely scan FOV outside the ROI).
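Read together, the mappings above reduce claim 1 to a short decision loop: obtain sensor data, check whether a perception-driven ROI reconfiguration request exists, and, if so, derive the next ROI set from the current set, the predefined policies, and the request. The following Python sketch is purely illustrative; every name in it is hypothetical and nothing is drawn from Agrawal's or the applicant's actual software:

# Illustrative sketch only: hypothetical names, no relation to either
# specification's implementation.
def determine_next_rois(current_rois, policies, request):
    # Claimed step: next ROIs depend on the current set, the predefined
    # perception policies, and the reconfiguration request.
    candidates = [roi for roi in request["candidates"] if roi in policies["allowed"]]
    return candidates or current_rois

def scan_cycle(current_rois, policies, request):
    # "in accordance with an ROI reconfiguration request being provided..."
    if request is None:
        return current_rois  # no request: keep the current ROIs (cf. claim 19)
    return determine_next_rois(current_rois, policies, request)

# Example: a lane-change decision requests a left-side ROI (cf. Agrawal [0019]).
policies = {"allowed": ["front_far", "left_far", "right_far"]}
print(scan_cycle(["front_far"], policies, {"candidates": ["left_far"]}))  # ['left_far']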
7. Regarding Claim 4: Agrawal teaches, determining the next set of ROIs comprises determining one or more policy-based ROI candidates based on the sensor data and the one or more predefined perception policies by: deriving one or more current perceptions based on the sensor data; correlating the one or more current perceptions with the one or more predefined perceptions of the one or more predefined perception policies; and determining one or more policy-based ROI candidates based on the one or more policies associated with the one or more predefined perceptions, ([0010]: Embodiments of the present disclosure also provide an autonomous vehicle ("AV") including an onboard computer; a sensor suite comprising a plurality of imaging devices; and a perception system. The perception system includes a plurality of imaging devices for producing images of an environment of the AV; a perception filter for receiving the images produced by the imaging devices, wherein the perception filter determines compute resource priority instructions based on an intent of the AV and a current state of the AV; and a compute module for receiving the compute resource priority instructions from the perception filter and allocating compute resources among the imaging devices in accordance with the compute resource priority instructions). Agrawal further teaches, ([0036]: Referring to FIG. 4C, which illustrates Example 3 from Table 1, a vehicle 420 is planning to make a right turn at a residential (relatively low speed) intersection. The image/sensor information provided by the perception filter to the perception module may include high-resolution sensor information from the front and left of the vehicle for long range detection in those regions).

8. Regarding Claim 5: Agrawal teaches, each of the one or more policy-based ROI candidates comprises one or more of: a position of each of the policy-based ROI candidate; a policy priority associated with each of the policy-based ROI candidate; and one or more scan parameters associated with each of the policy-based ROI candidate, ([0031]: Additionally and/or alternatively, in step 320, the perception system (e.g., the perception filter) determines compute resource priority based on ROI and/or current state and intent of the autonomous vehicle and provides compute resource priority instructions to a compute module of the perception system). ([0029]: Cameras 210 may be implemented using high-resolution imagers with fixed mounting and field of view. LIDARs 215 may be implemented using scanning LIDARs with dynamically configurable field of view that provides a point-cloud of the region intended to scan).

9. Regarding Claim 11: Agrawal teaches, the ROI reconfiguration request comprises: one or more ROI candidates for the next set of ROIs of the LiDAR scanner; and at least one of priority data or vehicle perception data associated with the one or more ROI candidates, ([0036]: Referring to FIG. 4C, which illustrates Example 3 from Table 1, a vehicle 420 is planning to make a right turn at a residential (relatively low speed) intersection. The image/sensor information provided by the perception filter to the perception module may include high-resolution sensor information from the front and left of the vehicle for long range detection in those regions). Agrawal further teaches, ([0031]: Additionally and/or alternatively, in step 320, the perception system (e.g., the perception filter) determines compute resource priority based on ROI and/or current state and intent of the autonomous vehicle and provides compute resource priority instructions to a compute module of the perception system. In step 325, the compute module implements the compute resource priority instructions by allocating resources to the imagers and sensors comprising the sensor suite in accordance with the priority instructions).

10. Regarding Claim 13: Agrawal teaches, the LiDAR scanning system is integrated in or mounted to a vehicle, ([0027]: The sensor suite 150 preferably includes localization and driving sensors; e.g., photodetectors, cameras, RADAR, SONAR, LIDAR, GPS, inertial measurement units (IMUs), accelerometers, microphones, strain gauges, pressure monitors, barometers, thermometers, altimeters, etc.). Fig. 1 shows sensor suite 150 mounted to a vehicle.

11. Regarding Claim 15: Agrawal teaches, the vehicle perception decision is rendered based on at least one of geographical location data associated with the vehicle or vehicle posture data, ([0028]: Referring now to FIG. 2, illustrated therein is a perception system 200 for an autonomous vehicle, such as the autonomous vehicle 110. Part or all of the perception system 200 may be implemented as a sensor suite, such as the sensor suite 150, and/or an onboard computer, such as onboard computer 145. As shown in FIG. 2, the perception system includes a perception filter 205, which comprises hardware and/or software for processing information and data from a variety of sources, including but not limited to cameras 210, LIDARS 215, RADARs 220, vehicle state 225, vehicle intent 230 (which may be based on/derived from the planned route), and/or world map information 235). Agrawal further teaches, ([0029]: Vehicle state 225 includes the current position, velocity, and other state(s) of the vehicle. Vehicle intent 230 includes the intent of the vehicle, such as lane change, turning, etc. World map 235 is a high-definition map of the world, which includes semantics and height information).

12. Regarding Claim 17: Agrawal teaches, the vehicle perception decision is rendered based on sensor data indicating at least one of current weather conditions or future weather conditions, ([0025]: Driving behavior may include any information relating to how an autonomous vehicle drives (e.g., actuates brakes, accelerator, steering) given a set of instructions (e.g., a route or plan). Driving behavior may include a description of a controlled operation and movement of an autonomous vehicle and the manner in which the autonomous vehicle applies traffic rules during one or more driving sessions. Driving behavior may additionally or alternatively include any information about how an autonomous vehicle calculates routes (e.g., prioritizing fastest time vs. shortest distance), other autonomous vehicle actuation behavior (e.g., actuation of lights, windshield wipers, traction control settings, etc.) and/or how an autonomous vehicle responds to environmental stimulus (e.g., how an autonomous vehicle behaves if it is raining, or if an animal jumps in front of the vehicle)).
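Claims 5 and 11 recite candidates that carry a position, a priority, and scan parameters. As one concrete reading of that data shape, a hypothetical structure follows; the field names, encoding, and values are invented for illustration and are not taken from either specification:

from dataclasses import dataclass, field

@dataclass
class RoiCandidate:
    # (az_min, az_max, el_min, el_max) in degrees -- a hypothetical encoding
    position: tuple
    priority: int          # policy priority or requested priority
    scan_params: dict = field(default_factory=dict)  # e.g., line density, pulse rate

# A candidate for a right turn at an intersection (cf. Agrawal [0036]):
front_left = RoiCandidate(position=(-60.0, 10.0, -5.0, 5.0),
                          priority=1,
                          scan_params={"density": "high", "range": "long"})
print(front_left.priority)  # 1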
13. Regarding Claim 20: Agrawal teaches, a system configured for providing regions-of-interest (ROIs) reconfiguration, ([0027]: The sensor suite 150 preferably includes localization and driving sensors; e.g., photodetectors, cameras, RADAR, SONAR, LIDAR, GPS, inertial measurement units (IMUs), accelerometers, microphones, strain gauges, pressure monitors, barometers, thermometers, altimeters, etc.). ([0001]: The present disclosure relates generally to autonomous vehicles (AVs) and, more specifically, to devices and methods for intent-based dynamic change of resolution, region of interest (ROI)). Agrawal teaches, a vehicle perception and planning system including one or more processors, a memory device, and processor-executable instructions stored in the memory device, ([0013]: As will be appreciated by one skilled in the art, aspects of the present disclosure, in particular aspects of a perception system for an autonomous vehicle, described herein, may be embodied in various manners (e.g., as a method, a system, a computer program product, or a computer-readable storage medium)). Agrawal further teaches, ([0039]: More specifically, the board can provide the electrical connections by which the other components of the system can communicate electrically. Any suitable processors (inclusive of digital signal processors, microprocessors, supporting chipsets, etc.), computer-readable non-transitory memory elements, etc. can be suitably coupled to the board based on particular configuration needs). Agrawal teaches, instructions for: obtaining a current set of ROIs used by a LiDAR scanner; obtaining one or more predefined perception policies; determining whether an ROI reconfiguration request is provided, the ROI reconfiguration request being provided based on a vehicle perception decision; in accordance with an ROI reconfiguration request being provided, determining a next set of ROIs for the LiDAR scanner to scan based on the one or more predefined perception policies and the ROI reconfiguration request, ([0031]: Additionally and/or alternatively, in step 320, the perception system (e.g., the perception filter) determines compute resource priority based on ROI and/or current state and intent of the autonomous vehicle and provides compute resource priority instructions to a compute module of the perception system. In step 325, the compute module implements the compute resource priority instructions by allocating resources to the imagers and sensors comprising the sensor suite in accordance with the priority instructions). Agrawal further teaches, ([0019]: This resource allocation can be changed at real-time depending upon the relative importance of the resource data as dictated by the ROI(s) based on autonomous vehicle state and intent. For example, if the autonomous vehicle is changing lanes at highway speed, the ROI will be in the direction of the intended lane change (e.g., left or right) at a substantial distance. In accordance with features of embodiments described herein, the perception system may allocate greater compute resources to the camera and/or other sensor(s) directed to the identified ROI).

14. Regarding Claim 21: Agrawal teaches, A method for reconfiguring one or more regions-of-interest (ROIs) of a light detection and ranging (LiDAR) scanner, the method being performed by one or more processors and memory, ([0039]: Any suitable processors (inclusive of digital signal processors, microprocessors, supporting chipsets, etc.), computer-readable non-transitory memory elements, etc. can be suitably coupled to the board based on particular configuration needs, processing demands, computer designs, etc. Other components such as external storage, additional sensors, controllers for audio/video display, and peripheral devices may be attached to the board as plug-in cards). Agrawal teaches, obtaining sensor data provided at least by the LiDAR scanner; obtaining one or more predefined perception policies; determining whether an ROI reconfiguration request is provided, the ROI reconfiguration request being provided based on a vehicle perception decision, ([Abstract]: The perception filter determines compute resource priority instructions). Agrawal further teaches, ([0019]: This resource allocation can be changed at real-time depending upon the relative importance of the resource data as dictated by the ROI(s) based on autonomous vehicle state and intent. For example, if the autonomous vehicle is changing lanes at highway speed, the ROI will be in the direction of the intended lane change (e.g., left or right) at a substantial distance. In accordance with features of embodiments described herein, the perception system may allocate greater compute resources to the camera and/or other sensor(s) directed to the identified ROI). Agrawal teaches, in accordance with an ROI reconfiguration request being provided, determining a next set of ROIs for the LiDAR scanner to scan based on a current set of ROIs, and the one or more predefined perception policies, and the ROI reconfiguration request, ([0031]: Additionally and/or alternatively, in step 320, the perception system (e.g., the perception filter) determines compute resource priority based on ROI and/or current state and intent of the autonomous vehicle and provides compute resource priority instructions to a compute module of the perception system. In step 325, the compute module implements the compute resource priority instructions by allocating resources to the imagers and sensors comprising the sensor suite in accordance with the priority instructions). Agrawal further teaches, ([0019]: This resource allocation can be changed at real-time depending upon the relative importance of the resource data as dictated by the ROI(s) based on autonomous vehicle state and intent).

15. Regarding Claim 22: Agrawal teaches, the one or more predefined perception policies comprise one or more predefined perceptions and one or more policies associated with the one or more predefined perceptions, ([0034]: Examples 1-4 listed in Table 1 are illustrated in and will be described in greater detail with reference to FIGS. 4A-4D. Referring to FIG. 4A, which illustrates example 1 from Table 1, a vehicle 400 is driving straight on a highway (relatively high speed) without planning to make a lane change. The image/sensor information provided by the perception filter to the perception module may include a high-resolution sensor crop from a region of the image frame that corresponds to the road section for long-range detection. Additionally, scaled information from other sensors may also be provided for low-range detection in other regions. The compute module may be instructed to prioritize compute resources for forward sensors to keep system latency low. Moreover, LIDAR and RADAR devices are instructed to densely scan the small field of view (FOV) in front of the vehicle corresponding to the region of interest (ROI) and to sparsely scan FOV outside the ROI).
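The Agrawal passage at [0034] that the examiner reads onto the "predefined perception policies" of claims 2 and 22 amounts to a lookup from a recognized driving situation to a scan behavior: dense scanning inside the ROI, sparse scanning outside it. A minimal sketch of such a policy table, with all entries invented for illustration:

# Hypothetical policy table: a predefined perception mapped to a scan policy.
POLICIES = {
    "highway_straight":       {"roi": "front_narrow_far"},
    "lane_change_left":       {"roi": "left_far"},
    "residential_right_turn": {"roi": "front_and_left"},
}

def scan_density(perception, region_in_roi):
    # "densely scan the small field of view ... corresponding to the ROI and
    #  ... sparsely scan FOV outside the ROI" (Agrawal [0034])
    assert perception in POLICIES
    return "dense" if region_in_roi else "sparse"

print(scan_density("highway_straight", region_in_roi=True))   # dense
print(scan_density("highway_straight", region_in_roi=False))  # sparse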
16. Regarding Claim 23: Agrawal teaches, determining the next set of ROIs comprises determining one or more policy-based ROI candidates based on the sensor data and the one or more predefined perception policies by: deriving one or more current perceptions based on the sensor data, correlating the one or more current perceptions with the one or more predefined perceptions of the one or more predefined perception policies; and determining one or more policy-based ROI candidates based on the one or more policies associated with the one or more predefined perceptions, ([0036]: Referring to FIG. 4C, which illustrates Example 3 from Table 1, a vehicle 420 is planning to make a right turn at a residential (relatively low speed) intersection. The image/sensor information provided by the perception filter to the perception module may include high-resolution sensor information from the front and left of the vehicle for long range detection in those regions).

17. Regarding Claim 24: Agrawal teaches, each of the one or more policy-based ROI candidates comprises one or more of: a position of each of the policy-based ROI candidate; a policy priority associated with each of the policy-based ROI candidate; and one or more scan parameters associated with each of the policy-based ROI candidate, ([0009]: Embodiments of the present disclosure provide a perception system for a vehicle. The perception may include a plurality of imaging devices for producing images of an environment of the vehicle; a perception filter for receiving the images produced by the imaging devices, wherein the perception filter determines compute resource priority instructions based on an intent of the vehicle and a current state of the vehicle; and a compute module for receiving the compute resource priority instructions from the perception filter and allocating compute resources among the imaging devices in accordance with the compute resource priority instructions). Agrawal further teaches, ([0029]: LIDARs 215 may be implemented using scanning LIDARs with dynamically configurable field of view that provides a point-cloud of the region intended to scan).

18. Regarding Claim 28: Agrawal teaches, A non-transitory computer readable medium storing one or more programs, ([0013]: As will be appreciated by one skilled in the art, aspects of the present disclosure, in particular aspects of a perception system for an autonomous vehicle, described herein, may be embodied in various manners (e.g., as a method, a system, a computer program product, or a computer-readable storage medium)). Agrawal teaches, one or more programs comprising instructions, which when executed by one or more processors of an electronic device, cause the electronic device to: obtain sensor data provided at least by the LiDAR scanner, ([0027]: The sensor suite 150 preferably includes localization and driving sensors; e.g., photodetectors, cameras, RADAR, SONAR, LIDAR, GPS, inertial measurement units (IMUs), accelerometers, microphones, strain gauges, pressure monitors, barometers, thermometers, altimeters, etc.). Agrawal further teaches, ([0039]: Any suitable processors (inclusive of digital signal processors, microprocessors, supporting chipsets, etc.), computer-readable non-transitory memory elements, etc. can be suitably coupled to the board based on particular configuration needs, processing demands, computer designs, etc. Other components such as external storage, additional sensors, controllers for audio/video display, and peripheral devices may be attached to the board as plug-in cards). Agrawal teaches, obtain one or more predefined perception policies; determine whether an ROI reconfiguration request is provided, the ROI reconfiguration request being provided based on a vehicle perception decision; in accordance with an ROI reconfiguration request being provided, determine a next set of ROIs for the LiDAR scanner to scan based on a current set of ROIs, and the one or more predefined policies, and the ROI reconfiguration request, ([0031]: Additionally and/or alternatively, in step 320, the perception system (e.g., the perception filter) determines compute resource priority based on ROI and/or current state and intent of the autonomous vehicle and provides compute resource priority instructions to a compute module of the perception system. In step 325, the compute module implements the compute resource priority instructions by allocating resources to the imagers and sensors comprising the sensor suite in accordance with the priority instructions). Agrawal further teaches, ([0019]: This resource allocation can be changed at real-time depending upon the relative importance of the resource data as dictated by the ROI(s) based on autonomous vehicle state and intent).

Claim Rejections – 35 USC § 103

19. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

20. Claims 6-8 & 25-27 are rejected under 35 U.S.C. 103 as being unpatentable over Agrawal et al. (US 20210099643 A1), hereinafter Agrawal, in view of Cardei et al. (US 11284021 B1), hereinafter Cardei.

21. Regarding Claims 6 & 25: Agrawal does not teach, determining the next set of ROIs comprises determining one or more request-based ROI candidates based on the ROI reconfiguration request, each of the one or more request-based ROI candidates comprises one or more of: a position of each of the request-based ROI candidate; a requested priority associated with each of the request-based ROI candidate; and one or more scan parameters associated with each of the request-based ROI candidate. However, Cardei teaches, ([Col. 3, Lines 41-46]: Sensors may be provided on an autonomous vehicle to assist with perception of and navigation through various environments. These sensors may include image sensors, light detection and ranging (LIDAR) devices, and/or radio detection and ranging (RADAR) devices, among others). Cardei further teaches, ([Abstract]: A system includes an image sensor having a plurality of pixels that form a plurality of regions of interest (ROIs), image processing resources, and a scheduler configured to perform operations including determining a priority level for a particular ROI of the plurality of ROIs based on a feature detected by one or more image processing resources of the image processing resources within initial image data associated with the particular ROI. The operations also include selecting, based on the feature detected within the initial image data, a particular image processing resource of the image processing resources by which subsequent image data generated by the particular ROI is to be processed. The operations further include inserting, based on the priority level, the subsequent image data into a processing queue of the particular image processing resource to schedule the subsequent image data for processing by the particular image processing resource). Cardei also teaches, ([Col. 3, Lines 57-67]: Accordingly, sensing components and corresponding circuitry may be provided that divide a field of view of the sensor into a plurality of regions of interest (ROIs) and allow for selective readout of sensor data from individual ROIs). Cardei continues to teach, ([Col. 21, Lines 9-25]: With respect to any or all of the message flow diagrams, scenarios, and flow charts in the figures and as discussed herein, each step, block, and/or communication can represent a processing of information and/or a transmission of information in accordance with example embodiments. Alternative embodiments are included within the scope of these example embodiments). It would have been obvious for one of ordinary skill in the art at the time of filing to modify Agrawal with Cardei since it is the same field of endeavor and results would be predictable. One of ordinary skill in the art at the time of filing would have been motivated to modify Agrawal with Cardei (Cardei: [Col. 1, Lines 21-22]: To avoid oversubscription of some image processing resources).
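Cardei's scheduler, as quoted above, inserts ROI image data into a processing queue according to a priority level. The mechanics are essentially those of a priority queue; a minimal sketch using Python's heapq follows, with the queue contents and priority numbers invented for illustration:

import heapq

queue = []  # min-heap: a lower number models a higher priority level
heapq.heappush(queue, (1, "roi_pedestrian_crossing"))  # high-priority detected feature
heapq.heappush(queue, (3, "roi_static_background"))
heapq.heappush(queue, (2, "roi_adjacent_lane"))

while queue:
    priority, roi = heapq.heappop(queue)
    print(priority, roi)
# 1 roi_pedestrian_crossing
# 2 roi_adjacent_lane
# 3 roi_static_background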
22. Regarding Claims 7 & 26: Agrawal does not teach, determining the next set of ROIs for the LiDAR scanner to scan based on the current set of ROIs, and one or both of the one or more policy-based ROI candidates and the one or more request-based ROI candidates comprises: determining one or more ROI scan candidates based on one or both of the one or more policy-based ROI candidates and the one or more request-based ROI candidates, wherein each of the one or more ROI scan candidates comprises one or more of: a position of each of the one or more ROI scan candidates; a scan priority associated with each of the one or more ROI scan candidates; and one or more scan parameters associated with each of the one or more ROI scan candidates; updating the one or more ROI scan candidates into an ROI scan list, the ROI scan list comprising the one or more ROI scan candidates; configuring the LiDAR scanner to scan the next set of ROIs, the next set of ROIs being the one or more ROI scan candidates in the ROI scan list having a highest scan priority. However, Cardei teaches a LiDAR system, ([Col. 13, Lines 45-58]: FIG. 3B illustrates example processing queues of a plurality of image processing resources. Scheduler 302 may be configured to schedule ROI image data generated by a particular ROI for processing by an image processing resource by adding the ROI image data to a processing queue of the image processing resource. Accordingly, FIG. 3B illustrates processing queue 362 of pixel-level processing circuitry 320, processing queue 364 of pixel-level processing circuitry 322, and processing queue 364 of machine learning circuitry 350. Each of pixel-level processing circuitries 324-330, machine learning circuitries 340-344, and/or control system 360 may also be associated with corresponding queues, which are not shown, but are indicated by the ellipses in FIG. 3B). Cardei further teaches, ([Col. 14, Lines 11-29]: The processing queue in which scheduler 302 places the image data acquired from a particular ROI, and the position of this image data within the queue, may depend on the current and/or expected contents of the ROI image data. Thus, detected feature queue 370 may include, for a particular detected feature, data identifying (i) the ROI in which the feature was detected, (ii) the feature (e.g., the classification, type, or other indicator thereof) that was detected within the ROI, (iii) attributes, properties, and/or characteristics of the detected feature (e.g., distance, speed, etc.), and/or (iv) a time at which initial image data, within which the feature was detected, was captured, among other information. Accordingly, based on this data, scheduler 302 may be configured to determine a corresponding ROI in which the detected feature is expected to be observed at a future time, assign a priority level to the detected feature and/or the corresponding ROI, and select an image processing resource by way of which subsequent image data acquired from the corresponding ROI at the future time is to be processed). (See Claims 6 & 25).

23. Regarding Claims 8 & 27: Agrawal does not teach, determining one or more ROI scan candidates based on one or both of the one or more policy-based ROI candidates and the one or more request-based ROI candidates comprises determining the scan priority of each of the one or more ROI scan candidates based on one or more of: one or more scan priorities associated with the current set of ROIs, the policy priority associated with each of the one or more policy-based ROI candidates, the requested priority associated with each of the one or more request-based ROI candidates, and priority determination rules. However, Cardei teaches, (see Claims 7 & 26).
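Claims 8 and 27 recite deriving a scan priority from several sources: priorities of the current ROI set, policy priorities, requested priorities, and priority determination rules. One plausible rule is "most urgent source wins"; this rule is an assumption for illustration, not something either reference discloses, and all values below are invented:

def resolve_scan_priority(current=None, policy=None, requested=None):
    # Assumption: lower number = more urgent, and the most urgent of the
    # available sources wins. Neither reference states this rule.
    sources = [p for p in (current, policy, requested) if p is not None]
    return min(sources)

scan_list = {
    "front_far": resolve_scan_priority(current=2, policy=1),
    "left_far":  resolve_scan_priority(current=3, requested=1),
}
best = min(scan_list.values())
print([roi for roi, p in scan_list.items() if p == best])  # ['front_far', 'left_far']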
24. Claims 9-10, 14, 16, & 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Agrawal et al. (US 20210099643 A1), hereinafter Agrawal, in view of Halder et al. (US 20190310636 A1), hereinafter Halder.

25. Regarding Claim 9: Agrawal does not teach, the processor-executable instructions comprise further instructions for obtaining additional sensor data from at least one of: one or more additional vehicle onboard sensors of the vehicle; one or more additional vehicles; and one or more transportation infrastructure systems. However, Halder teaches, ([0024]: FIG. 2B illustrates software modules (e.g., program, code, or instructions executable by one or more processors of an autonomous vehicle) that may be used to implement the various subsystems of an autonomous vehicle management system according to certain embodiments). Halder further teaches, ([0057]: "Sensors 110 may be located on or in autonomous vehicle 120 ("onboard sensors") or may even be located remotely ("remote sensors") from autonomous vehicle 120". "FIG. 3 illustrates an example set of sensors 110 of an autonomous vehicle, including, without limitation, LIDAR (Light Detection and Ranging) sensors 302, radar 304, cameras 306 (different kinds of cameras with different sensing capabilities may be used), Global Positioning System (GPS) and Inertial Measurement Unit (IMU) sensors 308, Vehicle-to-everything (V2X) sensors 308, audio sensors, and the like"). Halder also teaches, ([0058]: For example, autonomous vehicle 120 may use a V2X sensor for passing and/or receiving information from a vehicle to another entity around or near the autonomous vehicle. A V2X communication sensor/system may incorporate other more specific types of communication infrastructures such as V2I (Vehicle-to-Infrastructure), V2V (Vehicle-to-vehicle), V2P (Vehicle-to-Pedestrian), V2D (Vehicle-to-device), V2G (Vehicle-to-grid), and the like). It would have been obvious for one of ordinary skill in the art at the time of filing to modify Agrawal with Halder since it is the same field of endeavor and results would be predictable. One of ordinary skill in the art at the time of filing would have been motivated to modify Agrawal with Halder since (WHAT TO KNOW ABOUT V2V AND V2I TECHNOLOGIES, [P. 1]: The U.S. Department of Transportation estimates that V2V technology could reduce unimpaired driver accidents by as much as 80 percent and that V2I adds an additional 12 percent reduction in unimpaired driver accidents).

26. Regarding Claim 10: Agrawal does not teach, the vehicle perception decision is rendered based on one or more of: the sensor data provided at least by the LiDAR scanner; and the additional sensor data. However, Halder teaches a LiDAR system, ([0061]: Autonomous vehicle management system 122 receives sensor data from sensors 110 on a periodic or on-demand basis. Autonomous vehicle management system 122 uses the sensor data received from sensors 110 to perceive the autonomous vehicle's surroundings and environment. Autonomous vehicle management system 122 uses the sensor data received from sensors 110 to generate and keep updated a digital model that encapsulates information about the state of autonomous vehicle and of the space and environment surrounding autonomous vehicle 120. This digital model may be referred to as the internal map, which encapsulates the current state of autonomous vehicle 120 and its environment. The internal map along with other information is then used by autonomous vehicle management system 122 to make decisions regarding actions (e.g., navigation, braking, acceleration) to be performed by autonomous vehicle 120. Autonomous vehicle management system 122 may send instructions or commands to vehicle systems 112 to cause the actions to be performed by the systems of vehicle systems 112). (See Claim 9).
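Claim 9's "additional sensor data" limitation, as the examiner reads it onto Halder's V2X disclosure, amounts to aggregating frames from the vehicle's own sensors with feeds received from other vehicles (V2V) and from infrastructure (V2I). A minimal sketch, with all feed names invented:

def gather_sensor_data(onboard, v2v_feeds=(), v2i_feeds=()):
    # Onboard frames first, then any remote frames reachable over V2X links.
    frames = list(onboard)
    for feed in (*v2v_feeds, *v2i_feeds):
        frames.extend(feed)
    return frames

data = gather_sensor_data(
    onboard=[{"src": "lidar"}],
    v2v_feeds=[[{"src": "lead_vehicle_radar"}]],
    v2i_feeds=[[{"src": "intersection_camera"}]],
)
print([f["src"] for f in data])  # ['lidar', 'lead_vehicle_radar', 'intersection_camera']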
27. Regarding Claim 14: Agrawal does not teach, the vehicle perception decision is rendered based on sensor data of one or more objects provided by at least one of a camera, an ultrasonic sensor, or a radar; and wherein the ROI reconfiguration request is provided based on one or more of: a confidence parameter associated with the vehicle perception decision; a level of importance threshold value associated with the vehicle perception decision; a completeness parameter associated with the vehicle perception decision; and a level of urgency threshold value associated with the vehicle perception decision. However, Halder teaches, ([0057]: "sensors 110 of an autonomous vehicle, including, without limitation, LIDAR (Light Detection and Ranging) sensors 302, radar 304, cameras 306". "Other sensors may include proximity sensors, SONAR sensors, and other sensors"). Halder further teaches, ([0197]: As depicted in FIG. 9 and described above, sensor fusion subsystem 910 receives model prediction 912 from trained AI model 906 and also receives the confidence score generated for the AI model from EUC subsystem 904. Sensor fusion subsystem 910 may then determine, based upon the confidence score, whether or not the prediction 912 is to be used for downstream decision-making. In some instances, if the score is below a certain threshold, thereby indicating that the real time sensor data input is different from the training data used to train the AI model, sensor fusion subsystem 910 may determine that the model prediction 912 is not to be used). Halder also teaches, ([0193]: During the runtime processing, as shown in FIG. 9, real time sensor data from sensors 902 may be communicated to trained AI model 906 and also to epistemic uncertainty checker (EUC) subsystem 904. The trained AI model 906 may make a prediction 912 (e.g., identification of an object in the ego vehicle's environment) based upon the received real time sensory data inputs. The model prediction may be communicated to a sensor fusion subsystem 910). Halder continues to teach, ([0157]: A neural network comprises multiple nodes arranged in layers. Each node receives an input from some other nodes, or from an external source, and computes an output. Each input to a node has an associated weight that is assigned based upon the relative importance of that input to other inputs). Halder goes on to teach, ([0209]: Further, the safety considerations themselves may be prioritized such that some considerations are considered more (or less) important than other considerations). It would have been obvious for one of ordinary skill in the art at the time of filing to modify Agrawal with Halder since it is the same field of endeavor and results would be predictable. One of ordinary skill in the art at the time of filing would have been motivated to modify Agrawal with Halder since (Halder: [0009]: An infrastructure is provided that improves the safety of autonomous systems such as autonomous vehicles, autonomous machines, and the like).

28. Regarding Claim 16: Agrawal does not teach, the vehicle perception decision is rendered based on sensor data provided by at least one of roadside sensors, parking structure sensors, road intersection devices, or sensors from one or more additional vehicles. However, Halder teaches, (see Claim 9).

29. Regarding Claim 18: Agrawal does not teach, the ROI reconfiguration request is provided based on a user-requested task. However, Halder teaches, ([0245]: "As mentioned earlier, various types of information may be output to a user of an autonomous vehicle including, in certain embodiments, information about future planned actions to be performed by the autonomous vehicle. In particular, the actions indicated by the information output to the user may correspond to actions included in a plan of action generated by one or more components (e.g., planning subsystem 206) within an autonomous vehicle management system". "The future action indicated by a user interface is an action planned several seconds ahead of time. As explained in connection with the example process of FIG. 19, the planned action may not necessarily be an action that the autonomous vehicle management system has committed to performing at the time the user interface is presented to the user. Instead, the planned action may, in certain embodiments, be confirmed or canceled through further processing by the autonomous vehicle management system or through user intervention"). It would have been obvious for one of ordinary skill in the art at the time of filing to modify Agrawal with Halder since it is the same field of endeavor and results would be predictable. One of ordinary skill in the art at the time of filing would have been motivated to modify Agrawal with Halder since, (Halder: [0049]: It also allows the user to take manual actions (e.g., emergency actions), where appropriate, to override the planned actions. The information may also be output to a person or object or system in the autonomous vehicle's environment (e.g., to a remote user monitoring the operations of the autonomous vehicle)). This directly impacts the safety of everyone in and around the AV, allowing the AV operator to override certain decisions made by the AV. For instance, the AV operator may be familiar with a particular stretch of road which includes a shortcut between a school and an adjoining neighborhood. The AV operator may be aware that in this area small children are often crossing the road outside of a crosswalk, without adult supervision. The driver may instruct the vehicle to slow down in this area and pay close attention to pedestrians on the side of the road who may cross unexpectedly. This allows the driver to impart their own knowledge and experience to the AV, improving overall safety.

30. Regarding Claim 19: Agrawal does not teach, the processor-executable instructions comprise further instructions for: in accordance with a ROI reconfiguration request not being received, determining the next set of ROIs of the LiDAR scanner to be the current set of ROIs. However, Halder teaches a LiDAR system, ([0093]: Based upon the one or more inputs, planning subsystem 206 generates a plan of action for autonomous vehicle 120. Planning subsystem 206 may update the plan on a periodic basis as the environment of autonomous vehicle 120 changes, as the goals to be performed by autonomous vehicle 120 change, or in general, responsive to changes in any of the inputs to planning subsystem 206). Halder further teaches, ([0122]: For a sensor receiving such an instruction from autonomous vehicle management system 122, the behavior of the sensor is changed as a result of the instruction. The behavior of the sensor is changed such that the behavior of the sensor after receiving the instruction is different from the behavior of the sensor just prior to receiving the instruction). Thus it is clear that, absent a specific request or instruction, the ROI configuration will not change. It would have been obvious for one of ordinary skill in the art at the time of filing to modify Agrawal with Halder since it is the same field of endeavor and results would be predictable. One of ordinary skill in the art at the time of filing would have been motivated to modify Agrawal with Halder since changing the ROI configuration without specific instructions from the perception subsystem or vehicle management system, based on a change in vehicle goal, intent, environment, or any other influencing factor, could have a significant impact on safety. In the worst case this would be tantamount to a human driver wildly shifting focus from one field of view to another with no correlation to the task at hand.
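The claim 19 limitation the examiner reads onto Halder is a simple fallback: absent a reconfiguration request, the next ROI set is the current set. Sketched for clarity, with hypothetical names:

def next_rois(current_rois, request=None):
    if request is None:
        return current_rois  # no instruction received: scanner behavior unchanged
    return request["rois"]

print(next_rois(["front_far"]))                          # ['front_far']
print(next_rois(["front_far"], {"rois": ["left_far"]}))  # ['left_far']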
31. Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Agrawal et al. (US 20210099643 A1), hereinafter Agrawal, in view of Halder et al. (US 20190310636 A1), hereinafter Halder, as evidenced by Williams et al. (Remote Sensing), hereinafter Williams.

32. Regarding Claim 12: Agrawal teaches, one or both of the policy-based ROI candidates and the one or more request-based ROI candidates are provided by at least one of: the LiDAR perception sub-system, ([Abstract]: The present disclosure provides perception system for a vehicle that includes a plurality of imaging devices). Agrawal further teaches, ([0027]: The sensor suite 150 preferably includes localization and driving sensors; e.g., photodetectors, cameras, RADAR, SONAR, LIDAR, GPS, inertial measurement units (IMUs), accelerometers, microphones, strain gauges, pressure monitors, barometers, thermometers, altimeters, etc.). Agrawal also teaches, ([0031]: Additionally and/or alternatively, in step 320, the perception system (e.g., the perception filter) determines compute resource priority based on ROI and/or current state and intent of the autonomous vehicle and provides compute resource priority instructions to a compute module of the perception system. In step 325, the compute module implements the compute resource priority instructions by allocating resources to the imagers and sensors comprising the sensor suite in accordance with the priority instructions). Agrawal teaches, a vehicle perception and planning system, wherein the vehicle perception and planning system comprises at least one of a vehicle-embedded system, ([0001]: The present disclosure relates generally to autonomous vehicles (AVs) and, more specifically, to devices and methods for intent-based dynamic change of resolution, region of interest (ROI)). Agrawal does not teach, or a distributed system including one or more computing devices external to a vehicle including elements of the vehicle perception and planning system. However, Halder teaches, ([0057]: Sensors 110 may be located on or in autonomous vehicle 120 ("onboard sensors") or may even be located remotely ("remote sensors") from autonomous vehicle 120. Autonomous vehicle management system 122 may be communicatively coupled with remote sensors via wireless links using a wireless communication protocol. Sensors 110 can obtain environmental information for autonomous vehicle 120. This sensor data can then be fed to autonomous vehicle management system 122. FIG. 3 illustrates an example set of sensors 110 of an autonomous vehicle, including, without limitation, LIDAR (Light Detection and Ranging) sensors 302, radar 304, cameras 306 (different kinds of cameras with different sensing capabilities may be used), Global Positioning System (GPS) and Inertial Measurement Unit (IMU) sensors 308, Vehicle-to-everything (V2X) sensors 308, audio sensors, and the like. Sensors 110 can obtain (e.g., sense, capture) environmental information for autonomous vehicle 120 and communicate the sensed or captured sensor data to autonomous vehicle management system 122 for processing). Halder further teaches, ([0058]: For example, autonomous vehicle 120 may use a V2X sensor for passing and/or receiving information from a vehicle to another entity around or near the autonomous vehicle. A V2X communication sensor/system may incorporate other more specific types of communication infrastructures such as V2I (Vehicle-to-Infrastructure), V2V (Vehicle-to-vehicle), V2P (Vehicle-to-Pedestrian), V2D (Vehicle-to-device), V2G (Vehicle-to-grid), and the like). Such V2X-enabled devices will include computing devices; for instance, Mobile LiDAR Systems (MLS) may employ V2V or V2I communication. These devices invariably include computing devices (Williams: [P. 4654-4655]: though there are many MLS mapping systems, most systems consist of five distinct components: (1) The mobile platform; (2) Positioning hardware (e.g., GNSS, IMU); (3) 3D laser scanner(s); (4) Photographic/video recording; and (5) Computer and data storage). (See Claim 9).

33. Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Agrawal et al. (US 20210099643 A1), hereinafter Agrawal, in view of Campbell et al. (US 20190107623 A1), hereinafter Campbell.

34. Regarding Claim 3: Agrawal teaches, the one or more predefined perceptions comprise one or more of: a predefined vehicle-turning perception, ([0036]: a vehicle 420 is planning to make a right turn at a residential (relatively low speed) intersection. The image/sensor information provided by the perception filter to the perception module may include high-resolution sensor information from the front and left of the vehicle for long range detection in those regions). Agrawal teaches, a predefined road horizon perception, ([0018]: For example, if the autonomous vehicle is traveling on a highway at high speed, straight ahead at a long range to the horizon is likely the most important area on which to focus, or region of interest ("ROI")). Agrawal does not teach, a predefined vehicle uphill or downhill perception; a predefined moving object perception; and a predefined possible obstacle perception. However, Campbell teaches a predefined vehicle uphill or downhill perception, ([0151]: "Scan profile 500 may include a grade of a road on which the vehicle 520 is operating or a change in the grade of a road on which the vehicle 520 is operating. The grade of a road represents the slope of a road and may be referred to as a gradient, incline, pitch, rise, or slope". "As another example, if the road ahead begins to slope downward (e.g., there is a downhill section ahead), then the lidar system 100 may apply a downward angular offset to shift the scan pattern downward"). Campbell teaches a predefined moving object perception and a predefined possible obstacle perception, ([0050]: In particular embodiments, target 130 may include all or part of an object that is moving or stationary relative to lidar system 100. As an example, target 130 may include all or a portion of a person, vehicle, motorcycle, truck, train, bicycle, wheelchair, pedestrian, animal, road sign, traffic light, lane marking, road-surface marking, parking space, pylon, guard rail, traffic barrier, pothole, railroad crossing, obstacle in or near a road, curb, stopped vehicle on or beside a road, utility pole, house, building, trash can, mailbox, tree, any other suitable object, or any suitable combination of all or part of two or more objects). It would have been obvious for one of ordinary skill in the art at the time of filing to modify Agrawal with Campbell since it is the same field of endeavor and results would be predictable. One of ordinary skill in the art at the time of filing would have been motivated to modify Agrawal with Campbell since, (Agrawal: [0002]: Accurately and quickly perceiving an autonomous vehicle's environment and surroundings is of the utmost importance for the vehicle). In addition, predefining as many of the possible perceptions an AV may encounter as practicable will reduce uncertainty as well as onboard processing requirements.

Claim Rejections – 35 USC § 112(b)

35. The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

36. Claim 14 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

37. Regarding Claim 14: The terms "level of importance threshold," "completeness parameter," and "level of urgency threshold" in the claim are not defined in the immediate specification. The terms in question could impart multiple meanings or interpretations due to the generic nature of the terms. Thus, one of ordinary skill in the art cannot properly identify the specific limitations imparted. For purposes of examination, importance threshold will be interpreted as a weight assigned to input nodes or sensors defining relative importance. For purposes of examination, completeness parameter will be interpreted as a value to determine whether or not to use an input in downstream decision-making. For purposes of examination, level of urgency threshold will be interpreted as a weight or priority indicating importance relative to safety.
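For applicants weighing a response, the examiner's working interpretations of claim 14's contested terms can be pictured as numeric gates on whether a perception decision triggers a reconfiguration request. The sketch below is only one possible reading of those interpretations; every field name and threshold value is invented:

def should_request_roi(decision):
    # Gates loosely mirror the examiner's stated interpretations; all values invented.
    return (decision["confidence"] < 0.6        # confidence parameter
            or decision["importance"] >= 0.8    # level-of-importance threshold
            or decision["completeness"] < 0.5   # completeness parameter
            or decision["urgency"] >= 0.9)      # level-of-urgency threshold

print(should_request_roi({"confidence": 0.9, "importance": 0.85,
                          "completeness": 0.7, "urgency": 0.2}))  # True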
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

WO 2019095013 A1: Discloses systems and methods to improve performance, reliability, learning and safety and thereby enhance autonomy of vehicles.
US 5793491 A: Discloses an Intelligent Vehicle Highway System (IVHS) sensor that provides accurate information on real-time traffic conditions that can be used for incident detection, motorist advisories, and traffic management via signals, ramp meters, and the like.
US 20170365105 A1: Discloses a processor configured to detect an operating condition of a first vehicle based on at least one sensor of a second vehicle in communication with the processor. The processor is also configured to wirelessly broadcast the operating condition or associated alert, including any vehicle identifying traits of the first vehicle as detected by the second-vehicle sensor or other detection systems of the second vehicle.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAMES W NAPIER, whose telephone number is (571) 272-7451. The examiner can normally be reached Monday - Friday, 8:00 am - 4:00 pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Robert Hodge, can be reached at (571) 272-2097. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/J.W.N./
Examiner, Art Unit 3645

/ROBERT W HODGE/
Supervisory Patent Examiner, Art Unit 3645

Prosecution Timeline

Jan 03, 2023
Application Filed
Jan 15, 2026
Non-Final Rejection — §102, §103, §112
Apr 15, 2026
Examiner Interview Summary


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: Favorable
Median Time to Grant: 2y 11m
PTA Risk: Low
Based on 0 resolved cases by this examiner; grant probability derived from career allow rate.
