Prosecution Insights
Last updated: April 19, 2026
Application No. 18/530,660

PROCESSING FOR MACHINE LEARNING BASED OBJECT DETECTION USING SENSOR DATA

Non-Final OA (§102, §103)
Filed: Dec 06, 2023
Examiner: VAUGHN, ALEXANDER JOSEPH
Art Unit: 2675
Tech Center: 2600 — Communications
Assignee: Qualcomm Incorporated
OA Round: 1 (Non-Final)

Grant Probability: 73% (Favorable)
Predicted OA Rounds: 1-2
Predicted Time to Grant: 2y 10m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 73% (above average; 11 granted / 15 resolved; +11.3% vs TC avg)
Interview Lift: +28.6% among resolved cases with interview (a strong lift)
Typical Timeline: 2y 10m average prosecution; 20 applications currently pending
Career History: 35 total applications across all art units

Statute-Specific Performance

§101: 6.3% (-33.7% vs TC avg)
§103: 52.5% (+12.5% vs TC avg)
§102: 30.0% (-10.0% vs TC avg)
§112: 11.3% (-28.7% vs TC avg)

TC averages are estimates • Based on career data from 15 resolved cases

Office Action

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-4, 11-12, 14-15, 20-23, 26-28 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Senthil et al. (US 20230116538 A1), hereinafter Senthil.

Regarding claim 1, Senthil teaches A device, comprising: one or more memories; and one or more processors, coupled to the one or more memories, configured to: (Para. 60 see "Smart sensor methods and systems are described herein… device includes a sensor, a memory, a network connection, and two processing units." Para. 65 see "The smart sensor system 200 includes an image sensor 202, storage memory 206, one or more compute units 204 and an interface device 208. All elements 202-208 may be in communication with each other such as via data bus 210. Storage memory 206 may comprise separate memories 206a, 206b that may be physical memories, or may be logically separate portions of memory 206. Compute unit(s) 204 may each perform discrete processing operations.").

obtain sensor data associated with identifying measured properties of at least one object in an environment; (Para. 61 see "a sensor device 102 may sense an area 120, process the sensed data... The sensor device 102 may include a camera for capturing images of a scene such as area 120." Para. 63 see "an object recognition system for recognizing objects in area 120." Para. 71 see "sensor data may be annotated with the results of the processing of sensor data. For example, processing of image sensor data may produce an inference from the captured images about the scene observed by the sensor. Such an inference may include as identification of a region in an image that contains a moving object, or the inference may identify of the type of object that is moving.").

detect a trigger event associated with at least one of the environment or the device; (Para. 67 see "The first compute unit may compare current image data in memory 206a to reference image data previous captured by the sensor stored in memory 206b to generate a control signal for controlling other compute units. For example, if the current data is within a threshold difference of the reference data, a second compute unit may be disabled and may not processes the current data. Alternately, if the comparison is above the threshold, the second compute unit may be enabled and may processes the current data to generate current sensor output data.").

modify, based at least in part on detecting the trigger event, at least one of: one or more pre-processing operations associated with the sensor data for input to a neural network, or one or more post-processing operations associated with an object detection output of the neural network; (Para. 94 see "the control signals for controlling processing of image data (such as by box 609 of FIG. 6A) may include priorities for discrete portions of data instead of a simple enable/disable control per portion. The priorities may then be used to control subsequent processing." Para. 96 see "per-portion priorities may control other aspects of the processing. Preprocessing to change image resolution may be controlled by the priorities, such that higher priority portions of sensor data are kept at a higher resolution, while lower priority portions are reduced to a lower resolution. This may allow subsequent processing, such as by a neural network.").

perform the one or more pre-processing operations associated with the sensor data to generate pre-processed sensor data; (Para. 68 see "the preprocessor may convert the raw sensor data from a raw image format to a different format that is appropriate for consumption by the analysis unit. Non-limiting examples of preprocessing for raw image sensor data might include changing the bit depth of image pixel values, converting the color space of the raw image sensor data, and/or analyzing data to determine heuristics." Para. 89 see "the processing of data immediately prior to analysis by a neural network, as in boxes 409, 411 may include preprocessing for the purpose of consumption by the neural network. Such preprocessing may include converting a data format, adjusting the amount of data by reducing a resolution or subsampling techniques, performing statistical analysis, or heuristic analysis of the raw data.").

generate the object detection output for the at least one object based at least in part on detecting the at least one object using the pre-processed sensor data as the input to the neural network; (Para. 66 see "a control unit to generate control signals that control the sensor and the various other compute units; a preprocessor for preprocessing of raw sensor data prior to processing by a subsequent compute unit; and a neural network processor for drawing inferences from either the raw sensor data or preprocessed sensor data." Para. 87 see "The output from the neural network processing may include an inference (311).").

and perform the one or more post-processing operations using the object detection output. (Para. 71 see "sensor data may be annotated with the results of the processing of sensor data. For example, processing of image sensor data may produce an inference from the captured images about the scene observed by the sensor… An annotation combines sensor data with an inference. For example, for image sensor data, an image captured by the sensor may be annotated with an inference by modifying pixel data. The modified pixel data may be a highlight or brightened pixels in regions that are higher priority, while deprioritized regions may be dimmed, and excluded regions may be black.").

Regarding claim 2, Senthil teaches The device of claim 1, wherein the sensor data includes at least one sensor image associated with a first pixel size, and wherein the one or more processors, to modify the one or more pre-processing operations, are configured to: cause the one or more pre-processing operations to include mapping points from the at least one sensor image to a grid having a second pixel size. (Para. 96 see "Preprocessing to change image resolution may be controlled by the priorities, such that higher priority portions of sensor data are kept at a higher resolution, while lower priority portions are reduced to a lower resolution." Para. 121 see "selection of which regions of sensor data are reduced in resolution may be based on prior processing that determined which regions would benefit from processing at a higher resolutions... Higher priory regions may be processed at a higher resolution, while lower priority regions are processed at lower resolutions.").

Regarding claim 3, Senthil teaches The device of claim 2, wherein the second pixel size is greater than the first pixel size. (Para. 89 see "Such preprocessing may include converting a data format, adjusting the amount of data by reducing a resolution or subsampling techniques, performing statistical analysis, or heuristic analysis of the raw data." (Examiner note: Reducing the resolution is effectively increasing pixel size.)).

Regarding claim 4, Senthil teaches The device of claim 2, wherein the one or more processors, to perform the one or more pre-processing operations, are configured to: map the points from the at least one sensor image to the grid having the second pixel size; and provide the grid as the input to the neural network. (Para. 96 see "Preprocessing to change image resolution may be controlled by the priorities, such that higher priority portions of sensor data are kept at a higher resolution, while lower priority portions are reduced to a lower resolution. This may allow subsequent processing, such as by a neural network, to operate on different amounts of data." Para. 121 see "selection of which regions of sensor data are reduced in resolution may be based on prior processing that determined which regions would benefit from processing at a higher resolutions... Higher priory regions may be processed at a higher resolution, while lower priority regions are processed at lower resolutions." Para. 89 see "Such preprocessing may include converting a data format, adjusting the amount of data by reducing a resolution or subsampling techniques, performing statistical analysis, or heuristic analysis of the raw data." (Examiner note: Reducing the resolution is effectively increasing pixel size. A digital image is a representation of a continuous spatial signal; subsampling is by definition a mapping function.)).

Regarding claim 11, Senthil teaches A device, comprising: one or more memories; and one or more processors, coupled to the one or more memories, configured to: (Para. 60 see "Smart sensor methods and systems are described herein… device includes a sensor, a memory, a network connection, and two processing units." Para. 65 see "The smart sensor system 200 includes an image sensor 202, storage memory 206, one or more compute units 204 and an interface device 208. All elements 202-208 may be in communication with each other such as via data bus 210. Storage memory 206 may comprise separate memories 206a, 206b that may be physical memories, or may be logically separate portions of memory 206. Compute unit(s) 204 may each perform discrete processing operations.").

obtain sensor data associated with identifying measured properties of at least one object in an environment, (Para. 61 see "a sensor device 102 may sense an area 120, process the sensed data... The sensor device 102 may include a camera for capturing images of a scene such as area 120." Para. 63 see "an object recognition system for recognizing objects in area 120." Para. 71 see "sensor data may be annotated with the results of the processing of sensor data. For example, processing of image sensor data may produce an inference from the captured images about the scene observed by the sensor. Such an inference may include as identification of a region in an image that contains a moving object, or the inference may identify of the type of object that is moving.").

wherein the sensor data is associated with a sensor image having a first pixel size; map data points indicated by the sensor data to a grid having a second pixel size; (Para. 96 see "Preprocessing to change image resolution may be controlled by the priorities, such that higher priority portions of sensor data are kept at a higher resolution, while lower priority portions are reduced to a lower resolution." Para. 121 see "selection of which regions of sensor data are reduced in resolution may be based on prior processing that determined which regions would benefit from processing at a higher resolutions... Higher priory regions may be processed at a higher resolution, while lower priority regions are processed at lower resolutions.").

and generate an object detection output for the at least one object based at least in part on detecting the at least one object using the grid as input to a neural network. (Para. 68 see "the preprocessor may convert the raw sensor data from a raw image format to a different format that is appropriate for consumption by the analysis unit. Non-limiting examples of preprocessing for raw image sensor data might include changing the bit depth of image pixel values, converting the color space of the raw image sensor data, and/or analyzing data to determine heuristics." Para. 89 see "the processing of data immediately prior to analysis by a neural network, as in boxes 409, 411 may include preprocessing for the purpose of consumption by the neural network. Such preprocessing may include converting a data format, adjusting the amount of data by reducing a resolution or subsampling techniques, performing statistical analysis, or heuristic analysis of the raw data." Para. 66 see "a control unit to generate control signals that control the sensor and the various other compute units; a preprocessor for preprocessing of raw sensor data prior to processing by a subsequent compute unit; and a neural network processor for drawing inferences from either the raw sensor data or preprocessed sensor data." Para. 87 see "The output from the neural network processing may include an inference (311).").

Regarding claim 12, Senthil teaches The device of claim 11, wherein the one or more processors are further configured to: detect a trigger event associated with at least one of the environment or the device, (Para. 67 see "The first compute unit may compare current image data in memory 206a to reference image data previous captured by the sensor stored in memory 206b to generate a control signal for controlling other compute units. For example, if the current data is within a threshold difference of the reference data, a second compute unit may be disabled and may not processes the current data. Alternately, if the comparison is above the threshold, the second compute unit may be enabled and may processes the current data to generate current sensor output data.").
wherein mapping the data points indicated by the sensor data to the grid having the second pixel size is based at least in part on detecting the trigger event. (Para. 96 see "Preprocessing to change image resolution may be controlled by the priorities, such that higher priority portions of sensor data are kept at a higher resolution, while lower priority portions are reduced to a lower resolution. This may allow subsequent processing, such as by a neural network, to operate on different amounts of data." Para. 121 see "selection of which regions of sensor data are reduced in resolution may be based on prior processing that determined which regions would benefit from processing at a higher resolutions... Higher priory regions may be processed at a higher resolution, while lower priority regions are processed at lower resolutions." Para. 89 see "Such preprocessing may include converting a data format, adjusting the amount of data by reducing a resolution or subsampling techniques, performing statistical analysis, or heuristic analysis of the raw data." (Examiner note: Reducing the resolution is effectively increasing pixel size. A digital image is a representation of a continuous spatial signal; subsampling is by definition a mapping function.)).

Regarding claim 14, Senthil teaches The device of claim 11, wherein the second pixel size is greater than the first pixel size. (Para. 89 see "Such preprocessing may include converting a data format, adjusting the amount of data by reducing a resolution or subsampling techniques, performing statistical analysis, or heuristic analysis of the raw data." (Examiner note: Reducing the resolution is effectively increasing pixel size.)).

Regarding claim 15, Senthil teaches The device of claim 11, wherein the one or more processors are further configured to: perform one or more post-processing operations using the object detection output. (Para. 67 see "The first compute unit may compare current image data in memory 206a to reference image data previous captured by the sensor stored in memory 206b to generate a control signal for controlling other compute units. For example, if the current data is within a threshold difference of the reference data, a second compute unit may be disabled and may not processes the current data. Alternately, if the comparison is above the threshold, the second compute unit may be enabled and may processes the current data to generate current sensor output data." Para. 94 see "the control signals for controlling processing of image data (such as by box 609 of FIG. 6A) may include priorities for discrete portions of data instead of a simple enable/disable control per portion. The priorities may then be used to control subsequent processing." Para. 96 see "per-portion priorities may control other aspects of the processing.").

Claim 20 is rejected under the same analysis as claim 1 above.
Claim 21 is rejected under the same analysis as claim 2 above.
Claim 22 is rejected under the same analysis as claim 3 above.
Claim 23 is rejected under the same analysis as claim 4 above.
Claim 26 is rejected under the same analysis as claim 11 above.
Claim 27 is rejected under the same analysis as claim 12 above.
Claim 28 is rejected under the same analysis as claim 15 above.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

Claims 5-7, 16-18, 24-25, 29 are rejected under 35 U.S.C. 103 as being unpatentable over Senthil et al. (US 20230116538 A1), hereinafter Senthil, in view of Wang et al. (US 10732261 B1), hereinafter Wang.

Regarding claim 5, Senthil teaches The device of claim 1, wherein the object detection output of the neural network includes a bounding region that identifies a location of the at least one object, (Para. 71 see "sensor data may be annotated with the results of the processing of sensor data. For example, processing of image sensor data may produce an inference from the captured images about the scene observed by the sensor. Such an inference may include as identification of a region in an image that contains a moving object, or the inference may identify of the type of object that is moving.").

and wherein the one or more processors, to perform the one or more post-processing operations, are configured to: determine, based at least in part on modifying the one or more post-processing operations, (Para. 71 see "sensor data may be annotated with the results of the processing of sensor data.").

Senthil does not teach wherein the location of the at least one object is not associated with an object indication as indicated by the sensor data, one or more property values associated with the at least one object based at least in part on at least one of property values of point cloud data associated with the location as indicated by the sensor data or property values of one or more other objects indicated by the sensor data.

However, Wang teaches wherein the location of the at least one object is not associated with an object indication as indicated by the sensor data, (Col. 15, Ln. 64 - Col. 16, Ln. 2 see "the feature extractor 610 may generate or form input data of the machine learning model which has been learned or trained. In some implementations, radar points that are not associated with tracks from the track manager 512 (“unassociated points”) can be used to generate proposals 636, which feed to the track manager 512." Col. 21, Ln. 22-26 see "The track manager 512 may add new track data to the memory when receiving new track data of a newly detected object from various sources. For example, the track manager 512 can receive new track data from the radar tracker 517 (e.g., via the proposal 636)." Col. 21, Ln. 33-34 see "the position information of a track of the target object may include position information." Col. 27, Ln. 37-35 see "the detector 550 may detect a new object based on unassociated detections or in response to the radar spawning new tracks. The radar tracker 517 (see FIG. 5 and FIG. 6) may propose and send a track of a new object (e.g., proposal 636 in FIG. 6) to the track combiner 606 via the track updater 604 (see FIG. 6). The track combiner 606 may combine track data of an existing track (e.g., track data) with new track data of the newly detected object." (Examiner note: The location of the object is not indicated by the sensor data, but the neural network can be more sensitive and create a detection and add it to the track manager, indicating an object's position.)).

and one or more property values associated with the at least one object based at least in part on at least one of property values of point cloud data associated with the location as indicated by the sensor data or property values of one or more other objects indicated by the sensor data. (Col. 3, Ln. 62-63 see "the radar measurement data associated with an object may be a radar point cloud." Col. 18, Ln. 5-11 see "the associator 634 can determine particular radar points from the radar points 631 based on the predicted state data 659 of the predicted track of the object. In this example, the associator 634 can selectively extract associated radar points from the radar points 631 based on the predicted state data 659 of the predicted track of the object." Col. 18, Ln. 12-16 see "the associator 634 may generate, as portion of the associated data 635(t.sub.2), updated track data of the object at time t.sub.2, by updating the track 655(t.sub.1) based on the radar points 631(t.sub.2) and the predicted state data 659(t.sub.2) of the predicted track of the object.").

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Senthil to incorporate the teachings of Wang to identify a location of an object not associated with an object indication and to determine one or more property values associated with objects using point cloud data. Doing so would predictably increase detection rates and reliability of the system by allowing another detection source, such as a neural network, to detect objects that were not detected by the sensor and to join detection data from multiple sources. Additionally, this would increase the safety of autonomous vehicles by having a more reliable detection system.

Regarding claim 6, Senthil in view of Wang teaches The device of claim 5. Senthil does not teach wherein the one or more property values include at least one of: an absolute velocity, or an acceleration.

However, Wang teaches wherein the one or more property values include at least one of: an absolute velocity, or an acceleration. (Col. 18, Ln. 12-16 see "the associator 634 may generate, as portion of the associated data 635(t.sub.2), updated track data of the object at time t.sub.2, by updating the track 655(t.sub.1) based on the radar points 631(t.sub.2) and the predicted state data 659(t.sub.2) of the predicted track of the object." Col. 14, Ln. 47-49 see "the radar tracker 517 determines tracks of different objects (e.g., present position and velocity of different objects)." (Examiner note: the track data contains the velocity of the object.)).

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Senthil to incorporate the teachings of Wang to determine an absolute velocity of a detected object in post-processing. Doing so would predictably increase the safety of the system by knowing the absolute velocity, which can be used in predictive calculations. Examples might be calculating the amount of time until collision or predicting the location of the object in a given amount of time (basic physics calculations). In a case where this device is part of an autonomous vehicle, the vehicle may steer away from areas that would be unsafe to travel.

Regarding claim 7, Senthil in view of Wang teaches The device of claim 5. Senthil does not teach wherein the one or more processors, to determine the one or more property values associated with the at least one object, are configured to: determine an absolute velocity of the at least one object based at least in part on a relative velocity of the point cloud data associated with the location and an ego velocity.

However, Wang teaches wherein the one or more processors, to determine the one or more property values associated with the at least one object, are configured to: determine an absolute velocity of the at least one object based at least in part on a relative velocity of the point cloud data associated with the location and an ego velocity. (Col. 4, Ln. 50 see "models such as motion models or dynamic models can be used to obtain motion estimates of a tracked object, thereby allowing the radar observation model to better interpret the information in input radar points." Col. 18, Ln. 51-54 see "in addition to an output of a single point representing the new track, the track data 639 may include additional information associated with the single point, e.g., range and range rate of the radar at the single point." (Examiner note: motion estimation gives the absolute velocity. Range rate is the relative velocity of the point cloud, or more specifically, the rate of change of distance (velocity) between the sensor and a target (the location).)).

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Senthil to incorporate the teachings of Wang to determine an absolute velocity of an object based on relative velocity, location, and ego velocity in post-processing. Doing so would predictably increase the safety of the system by knowing the absolute velocity, which can be used in predictive calculations. Examples might be calculating the amount of time until collision or predicting the location of the object in a given amount of time (basic physics calculations). In a case where this device is part of an autonomous vehicle, the vehicle may steer away from areas that would be unsafe to travel.

Regarding claim 16, Senthil teaches The device of claim 15, wherein the object detection output of the neural network includes a bounding region that identifies a location of the at least one object, (Para. 71 see "sensor data may be annotated with the results of the processing of sensor data. For example, processing of image sensor data may produce an inference from the captured images about the scene observed by the sensor. Such an inference may include as identification of a region in an image that contains a moving object, or the inference may identify of the type of object that is moving.").

and wherein the one or more processors, to perform the one or more post-processing operations, are configured to: (Para. 71 see "sensor data may be annotated with the results of the processing of sensor data.").

Senthil does not teach wherein the location of the at least one object is not associated with an object indication as indicated by the sensor data, determine one or more property values associated with the at least one object based at least in part on at least one of property values of point cloud data associated with the location as indicated by the sensor data or property values of one or more other objects indicated by the sensor data.

However, Wang teaches wherein the location of the at least one object is not associated with an object indication as indicated by the sensor data, (Col. 15, Ln. 64 - Col. 16, Ln. 2 see "the feature extractor 610 may generate or form input data of the machine learning model which has been learned or trained. In some implementations, radar points that are not associated with tracks from the track manager 512 (“unassociated points”) can be used to generate proposals 636, which feed to the track manager 512." Col. 21, Ln. 22-26 see "The track manager 512 may add new track data to the memory when receiving new track data of a newly detected object from various sources. For example, the track manager 512 can receive new track data from the radar tracker 517 (e.g., via the proposal 636)." Col. 21, Ln. 33-34 see "the position information of a track of the target object may include position information." Col. 27, Ln. 37-35 see "the detector 550 may detect a new object based on unassociated detections or in response to the radar spawning new tracks. The radar tracker 517 (see FIG. 5 and FIG. 6) may propose and send a track of a new object (e.g., proposal 636 in FIG. 6) to the track combiner 606 via the track updater 604 (see FIG. 6). The track combiner 606 may combine track data of an existing track (e.g., track data) with new track data of the newly detected object." (Examiner note: The location of the object is not indicated by the sensor data, but the neural network can be more sensitive and create a detection and add it to the track manager, indicating an object's position.)).

and determine one or more property values associated with the at least one object based at least in part on at least one of property values of point cloud data associated with the location as indicated by the sensor data or property values of one or more other objects indicated by the sensor data. (Col. 3, Ln. 62-63 see "the radar measurement data associated with an object may be a radar point cloud." Col. 18, Ln. 5-11 see "the associator 634 can determine particular radar points from the radar points 631 based on the predicted state data 659 of the predicted track of the object. In this example, the associator 634 can selectively extract associated radar points from the radar points 631 based on the predicted state data 659 of the predicted track of the object." Col. 18, Ln. 12-16 see "the associator 634 may generate, as portion of the associated data 635(t.sub.2), updated track data of the object at time t.sub.2, by updating the track 655(t.sub.1) based on the radar points 631(t.sub.2) and the predicted state data 659(t.sub.2) of the predicted track of the object.").

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Senthil to incorporate the teachings of Wang to identify a location of an object not associated with an object indication and to determine one or more property values associated with objects using point cloud data. Doing so would predictably increase detection rates and reliability of the system by allowing another detection source, such as a neural network, to detect objects that were not detected by the sensor and to join detection data from multiple sources.
Additionally, this would increase safety of autonomous vehicles by having a more reliable detection system. Regarding claim 17, Senthil in view of Wang teaches The device of claim 16. Senthil does not teach wherein the one or more property values include at least one of: an absolute velocity, or an acceleration. However, Wang teaches wherein the one or more property values include at least one of: an absolute velocity, or an acceleration. (Col. 18, Ln. 12-16 see "the associator 634 may generate, as portion of the associated data 635(t.sub.2), updated track data of the object at time t.sub.2, by updating the track 655(t.sub.1) based on the radar points 631(t.sub.2) and the predicted state data 659(t.sub.2) of the predicted track of the object." Col. 14, Ln. 47-49 see "the radar tracker 517 determines tracks of different objects (e.g., present position and velocity of different objects)." (Examiner note: the track data contains the velocity of the object.)). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Senthil to incorporate the teachings of Wang to determine an absolute velocity of a detected object in post-processing. Doing so would predictably increase the safety of the system by knowing the absolute velocity which can be used in predictive calculations. Examples might be calculating the amount of time until collision or predicting the location of the object in a given amount of time (basic physics calculations). In a case where this device is part of an autonomous vehicle, the vehicle may steer away from areas that would be unsafe to travel. Regarding claim 18, Senthil in view of Wang teaches The device of claim 16. 
Senthil does not teach wherein the one or more processors, to determine the one or more property values associated with the at least one object, are configured to: determine an absolute velocity of the at least one object based at least in part on a relative velocity of the point cloud data associated with the location and an ego velocity. However, Wang teaches wherein the one or more processors, to determine the one or more property values associated with the at least one object, are configured to: determine an absolute velocity of the at least one object based at least in part on a relative velocity of the point cloud data associated with the location and an ego velocity. (Col. 4, Ln. 50 see "models such as motion models or dynamic models can be used to obtain motion estimates of a tracked object, thereby allowing the radar observation model to better interpret the information in input radar points." Col. 18, Ln. 51-54 see "in addition to an output of a single point representing the new track, the track data 639 may include additional information associated with the single point, e.g., range and range rate of the radar at the single point." (Examiner note: motion estimation gives the absolute velocity. Range rate is the relative velocity of the point cloud, or more specifically, the rate of change of distance (velocity) between the sensor and a target (the location)).). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Senthil to incorporate the teachings of Wang to determine an absolute velocity of an object based on relative velocity, location, and ego velocity in post-processing. Doing so would predictably increase the safety of the system by knowing the absolute velocity which can be used in predictive calculations. 
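The velocity determination recited here reduces to simple vector arithmetic, and the predictive calculations the motivation invokes are basic kinematics. A minimal sketch (all function names are illustrative, and the constant-closing-speed assumption is ours, not Senthil's or Wang's):

```python
import math

def absolute_velocity(ego_velocity, relative_velocity):
    """Recover an object's absolute velocity from the relative
    (range-rate) velocity observed in its point cloud and the ego
    vehicle's own velocity. Vectors are (vx, vy) tuples in m/s."""
    return (ego_velocity[0] + relative_velocity[0],
            ego_velocity[1] + relative_velocity[1])

def time_to_collision(range_m, closing_speed_mps):
    """Time until collision assuming a constant closing speed;
    returns infinity when the object is not closing."""
    if closing_speed_mps <= 0:
        return math.inf
    return range_m / closing_speed_mps
```

For example, an ego vehicle traveling at 10 m/s whose radar reports a point with a relative velocity of -4 m/s would assign the object an absolute velocity of 6 m/s, and a target 30 m ahead closing at 10 m/s yields a 3-second time to collision.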
Examples might be calculating the amount of time until collision or predicting the location of the object in a given amount of time (basic physics calculations). In a case where this device is part of an autonomous vehicle, the vehicle may steer away from areas that would be unsafe to travel. Claim 24 is rejected under the same analysis as claim 5 above. Claim 25 is rejected under the same analysis as claim 7 above. Claim 29 is rejected under the same analysis as claim 16 above. Claims 8, 19, 30 are rejected under 35 U.S.C. 103 as being unpatentable over Senthil et al. (US 20230116538 A1), hereinafter Senthil, in view of Wang et al. (US 10732261 B1), hereinafter Wang, and Vora et al. (US 20220383640 A1), hereinafter Vora. Regarding claim 8, Senthil teaches The device of claim 1. wherein the object detection output of the neural network includes a bounding region that identifies a location of the at least one object, (Para. 71 see "sensor data may be annotated with the results of the processing of sensor data. For example, processing of image sensor data may produce an inference from the captured images about the scene observed by the sensor. Such an inference may include as identification of a region in an image that contains a moving object, or the inference may identify of the type of object that is moving."). and wherein the one or more processors, to perform the one or more post-processing operations, are configured to: modify, based at least in part on modifying the one or more post-processing operations, (Para. 71 see "sensor data may be annotated with the results of the processing of sensor data."). Senthil does not teach wherein the location of the at least one object is not associated with an object indication as indicated by the sensor data, a classification confidence score of the object detection output based at least in part on the location of the at least one object not being associated with the object indication. 
However, Wang teaches wherein the location of the at least one object is not associated with an object indication as indicated by the sensor data, (Col. 15, Ln. 64 - Col. 16, Ln. 2 see "the feature extractor 610 may generate or form input data of the machine learning model which has been learned or trained. In some implementations, radar points that are not associated with tracks from the track manager 512 (“unassociated points”) can be used to generate proposals 636, which feed to the track manager 512." Col. 21, Ln. 22-26 see "The track manager 512 may add new track data to the memory when receiving new track data of a newly detected object from various sources. For example, the track manager 512 can receive new track data from the radar tracker 517 (e.g., via the proposal 636)." Col. 21 Ln. 33-34 see "the position information of a track of the target object may include position information." (Examiner note: The location of the object is not indicated by the sensor data but the neural network can be more sensitive and create a detection and add it to the track manager, indicating an object's position.)). based at least in part on the location of the at least one object not being associated with the object indication. (Col. 15, Ln. 64 - Col. 16, Ln. 2 see "the feature extractor 610 may generate or form input data of the machine learning model which has been learned or trained. In some implementations, radar points that are not associated with tracks from the track manager 512 (“unassociated points”) can be used to generate proposals 636, which feed to the track manager 512." Col. 21, Ln. 22-26 see "The track manager 512 may add new track data to the memory when receiving new track data of a newly detected object from various sources. For example, the track manager 512 can receive new track data from the radar tracker 517 (e.g., via the proposal 636)." Col. 21 Ln. 33-34 see "the position information of a track of the target object may include position information." 
(Examiner note: The location of the object is not indicated by the sensor data but the neural network can be more sensitive and create a detection and add it to the track manager, indicating an object's position.)). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Senthil to incorporate the teachings of Wang to identify a location of an object not associated with an object indication. Doing so would predictably increase detection rates and reliability of the system by allowing another detection source, such as a neural network, to detect objects that were not detected by the sensor and join detection data by multiple sources. Additionally, this would increase safety of autonomous vehicles by having a more reliable detection system. Furthermore, Vora teaches a classification confidence score of the object detection output (Para. 138 see "the object detection neural network 722 is a feed-forward convolutional neural network that, given the output 720 from the backbone neural network 718, generates a set of bounding boxes for potential objects in the 3D space and classification scores for the presence of object class instances (e.g., cars, pedestrians, or bikes) in these bounding boxes. The higher the classification score, the more likely the corresponding object class instance is present in a box."). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Senthil with Wang to incorporate the teachings of Vora to use the location of the object to calculate a classification confidence score and modify or combine the data. Doing so would predictably increase the accuracy of detection and classification of objects by allowing the post-processing detection to modify or update the existing detection data. 
This is because during post-processing, the point cloud data may be changed to be more accurate and therefore a more accurate classification score can be calculated. Regarding claim 19, Senthil in view of Wang teaches The device of claim 16. In addition, Senthil teaches wherein the one or more processors, to perform the one or more post-processing operations, are configured to: (Para. 71 see "sensor data may be annotated with the results of the processing of sensor data."). Senthil does not teach modify a classification confidence score of the object detection output based at least in part on the location of the at least one object not being associated with the object indication. However, Wang teaches based at least in part on the location of the at least one object not being associated with the object indication. (Col. 15, Ln. 64 - Col. 16, Ln. 2 see "the feature extractor 610 may generate or form input data of the machine learning model which has been learned or trained. In some implementations, radar points that are not associated with tracks from the track manager 512 (“unassociated points”) can be used to generate proposals 636, which feed to the track manager 512." Col. 21, Ln. 22-26 see "The track manager 512 may add new track data to the memory when receiving new track data of a newly detected object from various sources. For example, the track manager 512 can receive new track data from the radar tracker 517 (e.g., via the proposal 636)." Col. 21 Ln. 33-34 see "the position information of a track of the target object may include position information." (Examiner note: The location of the object is not indicated by the sensor data but the neural network can be more sensitive and create a detection and add it to the track manager, indicating an object's position.)). 
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Senthil to incorporate the teachings of Wang to identify a location of an object not associated with an object indication. Doing so would predictably increase detection rates and reliability of the system by allowing another detection source, such as a neural network, to detect objects that were not detected by the sensor and join detection data by multiple sources. Additionally, this would increase safety of autonomous vehicles by having a more reliable detection system. Furthermore, Vora teaches modify a classification confidence score of the object detection output (Para. 138 see "the object detection neural network 722 is a feed-forward convolutional neural network that, given the output 720 from the backbone neural network 718, generates a set of bounding boxes for potential objects in the 3D space and classification scores for the presence of object class instances (e.g., cars, pedestrians, or bikes) in these bounding boxes. The higher the classification score, the more likely the corresponding object class instance is present in a box."). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Senthil with Wang to incorporate the teachings of Vora to use the location of the object to calculate a classification confidence score and modify or combine the data. Doing so would predictably increase the accuracy of detection and classification of objects by allowing the post-processing detection to modify or update the existing detection data. This is because during post-processing, the point cloud data may be changed to be more accurate and therefore a more accurate classification score can be calculated. Claim 30 is rejected under the same analysis as claim 19 above. Claims 9, 13 are rejected under 35 U.S.C. 
103 as being unpatentable over Senthil et al. (US 20230116538 A1), hereinafter Senthil, in view of Vora et al. (US 20220383640 A1), hereinafter Vora. Regarding claim 9, Senthil teaches The device of claim 1. Senthil does not teach wherein the trigger event is based at least in part on at least one of: a velocity associated with the device, a sensor type or sensor configuration associated with the sensor data, a vehicle type associated with the device, or a quantity of objects detected in the environment. However, Vora teaches wherein the trigger event is based at least in part on at least one of: a velocity associated with the device, a sensor type or sensor configuration associated with the sensor data, a vehicle type associated with the device, or a quantity of objects detected in the environment. (Para. 120 see "In some embodiments, the first threshold value P and the second threshold value N are adaptive values. In particular, based on a density of the objects in the 3D space, the pillar creating component 706 can adjust P and/or N such that there are more pillars and/or more data points allowed in each pillar in the region of high object density, less pillars and/or less data points in each pillar in the region of low object density, and no pillars in the region of no objects."). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Senthil to incorporate the teachings of Vora to base a trigger event on the quantity of objects detected in the environment. Doing so would predictably increase flexibility of the system, thereby enabling increased performance based on various scenarios encountered by the system. Processing data based on the number of objects detected nearby may allow an autonomous vehicle to take more safety measures in environments where there are many objects nearby, thereby increasing safety of passengers. 
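The adaptive thresholds Vora describes (more pillars and per-pillar points where object density is high, none where it is zero) can be sketched as a simple density-driven heuristic; the linear scaling rule, the clamp, and every name below are illustrative assumptions rather than Vora's actual implementation:

```python
def adaptive_pillar_limits(object_density, base_pillars=100, base_points=32):
    """Scale the pillar budget (P) and per-pillar point budget (N)
    with local object density (objects per unit area): more pillars
    and points in dense regions, none where there are no objects."""
    if object_density <= 0:
        return 0, 0  # no objects -> no pillars at all
    # Linear boost, clamped so budgets never exceed twice the base.
    scale = 1.0 + min(object_density, 10.0) / 10.0
    return int(base_pillars * scale), int(base_points * scale)
```

Under these assumed defaults, an empty region gets no pillars, a moderately dense region (5 objects per unit area) gets a 1.5x budget, and any density of 10 or more saturates at the 2x cap.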
Regarding claim 13, Senthil teaches The device of claim 12. Senthil does not teach wherein the trigger event is based at least in part on at least one of: a velocity associated with the device, a sensor type or sensor configuration associated with the sensor data, a vehicle type associated with the device, or a quantity of objects detected in the environment. However, Vora teaches wherein the trigger event is based at least in part on at least one of: a velocity associated with the device, a sensor type or sensor configuration associated with the sensor data, a vehicle type associated with the device, or a quantity of objects detected in the environment. (Para. 120 see "In some embodiments, the first threshold value P and the second threshold value N are adaptive values. In particular, based on a density of the objects in the 3D space, the pillar creating component 706 can adjust P and/or N such that there are more pillars and/or more data points allowed in each pillar in the region of high object density, less pillars and/or less data points in each pillar in the region of low object density, and no pillars in the region of no objects."). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Senthil to incorporate the teachings of Vora to base a trigger event on the quantity of objects detected in the environment. Doing so would predictably increase flexibility of the system, thereby enabling increased performance based on various scenarios encountered by the system. Processing data based on the number of objects detected nearby may allow an autonomous vehicle to take more safety measures in environments where there are many objects nearby, thereby increasing safety of passengers. Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Senthil et al. (US 20230116538 A1), hereinafter Senthil, in view of Deng et al. (US 20210241026 A1), hereinafter Deng. 
Regarding claim 10, Senthil teaches The device of claim 1. Senthil does not teach wherein the one or more processors, to perform the one or more pre-processing operations, are configured to: determine a lateral velocity associated with the at least one object; and provide, as a feature of the input to the neural network, the lateral velocity or a combination of the lateral velocity and a longitudinal velocity associated with the at least one object. However, Deng teaches wherein the one or more processors, to perform the one or more pre-processing operations, are configured to: determine a lateral velocity associated with the at least one object; (Para. 117 see "As for the imaging RADAR point cloud 808, a detected point of an object has location (x,y) and velocity (v) at a certain heading. Here the term “heading” is defined as the angle between the direction the is pointing and the horizontal x-axis on a bird's eye view plane. The Doppler velocity of this point is calculated by Equation 1 as follows:" (Examiner note: this describes calculating the lateral and longitudinal velocities).). and provide, as a feature of the input to the neural network, the lateral velocity or a combination of the lateral velocity and a longitudinal velocity associated with the at least one object. (Para. 121 see "The image feature maps 932, the LiDAR feature maps 936, and the imaging RADAR feature maps 940 are feature vectors which are each input to the deep latent ensemble layer 944 for further processing. According to embodiments of the present disclosure, the feature vectors of the image feature maps 932, the LiDAR feature maps 936, and the imaging RADAR feature maps 940 have the same dimension. For example, the output of the last layer of the feature extractors for each of the feature extractors 916, 924, and 928 should result in a 3D feature tensor (e.g., matrix) which has the same dimension W×L×(the number of channels). 
The width (W) and the length (L) is the dimension from the top view of the detected 3D space and the number of channels is the information of the position, intensity and learnt global semantic features (for RADAR it also includes range rate information)." (Examiner note: the range rate includes the velocity calculated prior to inputting the radar channel into the neural network.)). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Senthil to incorporate the teachings of Deng to calculate the lateral velocity of a detected object and input it into a neural network as a feature. Doing so would predictably increase the accuracy of the neural network's output by including the velocity of the object so that the neural network weighs the movement of the object in its calculations which may be a feature of a type of object. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Akbarzadeh et al. (US 20210063199 A1) discloses a system that uses sensor fusion for autonomous vehicles and processes data with a neural network to detect objects in an environment. Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALEXANDER J VAUGHN whose telephone number is (571) 272-5253. The examiner can normally be reached M-F 8:30-5. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ANDREW MOYER can be reached on (571) 272-9523. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. 
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /ALEXANDER JOSEPH VAUGHN/Examiner, Art Unit 2675 /EDWARD PARK/Primary Examiner, Art Unit 2675

Prosecution Timeline

Dec 06, 2023
Application Filed
Dec 31, 2025
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591955
SYSTEMS AND METHODS FOR GENERATING DYNAMIC DARK CURRENT IMAGES
2y 5m to grant Granted Mar 31, 2026
Patent 12579756
GRAPHICAL ASSISTANCE WITH TASKS USING AN AR WEARABLE DEVICE
2y 5m to grant Granted Mar 17, 2026
Patent 12573010
IMAGE PROCESSING APPARATUS, RADIATION IMAGING SYSTEM, IMAGE PROCESSING METHOD, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM
2y 5m to grant Granted Mar 10, 2026
Patent 12567265
VEHICLE, CONTROL METHOD THEREOF AND CAMERA MONITORING APPARATUS
2y 5m to grant Granted Mar 03, 2026
Patent 12521061
Method of Determining the Effectiveness of a Treatment on a Face
2y 5m to grant Granted Jan 13, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
73%
Grant Probability
99%
With Interview (+28.6%)
2y 10m
Median Time to Grant
Low
PTA Risk
Based on 15 resolved cases by this examiner. Grant probability derived from career allow rate.
