Prosecution Insights
Last updated: April 19, 2026
Application No. 18/473,690

DETECTING TARGET VEHICLES USING NON-STANDARD FEATURES

Non-Final OA — §102, §103, §112
Filed: Sep 25, 2023
Examiner: SORRIN, AARON JOSEPH
Art Unit: 2672
Tech Center: 2600 — Communications
Assignee: Panasonic Automotive Systems Company of America, Division of Panasonic Corporation of North America
OA Round: 1 (Non-Final)
Grant Probability: 74% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 5m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 74% — above average (46 granted / 62 resolved; +12.2% vs TC avg)
Interview Lift: +50.6% — strong (allow rate for resolved cases with vs. without an interview)
Typical Timeline: 3y 5m average prosecution; 22 applications currently pending
Career History: 84 total applications across all art units
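The headline figures follow from simple arithmetic on the counts above. As a quick check, a minimal Python sketch (an editor's illustration; the dashboard's actual computation is not published) reproduces them, assuming the stated delta is in percentage points:

```python
# Reproduce the examiner's headline stats from the counts shown above.
granted, resolved = 46, 62

allow_rate = granted / resolved                  # 0.7419... -> displayed as 74%
print(f"career allow rate: {allow_rate:.1%}")    # 74.2%

# The stated "+12.2% vs TC avg" delta implies a Tech Center average near 62%.
implied_tc_avg = allow_rate - 0.122
print(f"implied TC average: {implied_tc_avg:.1%}")  # 62.0%
```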

Statute-Specific Performance

§101: 20.4% (-19.6% vs TC avg)
§103: 35.6% (-4.4% vs TC avg)
§102: 14.1% (-25.9% vs TC avg)
§112: 29.3% (-10.7% vs TC avg)

Deltas are measured against the Tech Center average estimate (40% for each statute) • Based on career data from 62 resolved cases
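The four deltas are mutually consistent: adding back each shortfall recovers the same Tech Center baseline, which is why a single 40% reference value fits all four statutes. A small sketch (editor-supplied, not the vendor's code) verifies this:

```python
# Statute-specific rates and their stated deltas vs the TC average (in percent).
rates  = {"§101": 20.4, "§103": 35.6, "§102": 14.1, "§112": 29.3}
deltas = {"§101": -19.6, "§103": -4.4, "§102": -25.9, "§112": -10.7}

for statute, rate in rates.items():
    implied_avg = rate - deltas[statute]   # rate = TC average + delta
    print(f"{statute}: implied TC average = {implied_avg:.1f}%")  # 40.0% in every case
```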

Office Action

§102 §103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claim 9 is objected to because of the following informalities: Claim 9 recites “comprising sensor comprises”, which should recite “wherein the sensor comprises”. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-10 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 1 recites “a sensor” in two instances. There is insufficient antecedent basis for this limitation in the claim. The latter instance is being interpreted as “the sensor”. Claims 2-10 are rejected as dependent on claim 1.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claim(s) 1, 4, 6, 8, 11, 12, 14, 17, 19, and 20 is/are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Khosla (US 20230306832 A1).

Regarding claim 1, Khosla teaches “A system in a vehicle for detecting target vehicles comprising: a sensor installed on the vehicle to sense light from outside the vehicle; a transceiver installed in the vehicle, the transceiver to receive a detection request and transmit a detection output;” (Khosla, Paragraph 20, “Techniques are discussed herein for monitoring one or more vehicles with one or more entities such as one or more other vehicles and/or one or more devices associated with (e.g., disposed in) the monitored vehicle(s). For example, techniques are discussed for alerting one or more network entities, e.g., associated with government authorities and/or private-service providers, to monitor one or more private and/or public vehicles for which monitoring has been requested (e.g., for which a monitoring alert has been issued).
For example, an ADAS (Advanced Driver Assistance System) unit of a vehicle may receive camera feeds from various cameras, e.g., front-facing, rear-facing, and side-facing cameras. A monitor vehicle (e.g., the ADAS, and/or a separate processor of the vehicle) may analyze one or more images from the camera(s) to decipher text, e.g., on road signs, and/or to determine one or more other identifying characteristics of another vehicle. The monitor vehicle may be configured to analyze one or more images to decipher text (and possibly images) of license plates. The monitor vehicle may be configured to determine the identifying characteristic(s) and to match/correlate the identifying characteristic(s) with a target vehicle in response to receiving a trigger to do so, e.g., a monitor alert issued by a service provider, a government agency, etc. for the target vehicle. The monitor alert may be unicast to the monitor vehicle, unicast to potential monitor vehicles, broadcast, etc. The monitor vehicle may report the presence of the target vehicle, e.g., to a server in the cloud. One or more monitor vehicles may broadcast messages, e.g., C-V2X SDSMs (Cellular Vehicle-to-Everything Sensor Data Sharing Messages) including one or more characteristics of the target vehicle (e.g., location, license plate number, vehicle make, vehicle model, vehicle color, etc.). A server, e.g., disposed in the cloud, may collect messages from one or more monitor vehicles over time to determine movements of the target vehicle, and may provide information to one or more other entities, e.g., an owner of the target vehicle, law enforcement, etc. Other configurations, however, may be used.”; Paragraph 60, “The transceiver 215 may include a wireless transceiver 240 and a wired transceiver 250 configured to communicate with other devices through wireless connections and wired connections, respectively. 
For example, the wireless transceiver 240 may include a wireless transmitter 242 and a wireless receiver 244 coupled to an antenna 246 for transmitting (e.g., on one or more uplink channels and/or one or more sidelink channels) and/or receiving (e.g., on one or more downlink channels and/or one or more sidelink channels) wireless signals 248 and transducing signals from the wireless signals 248 to wired (e.g., electrical and/or optical) signals and from wired (e.g., electrical and/or optical) signals to the wireless signals 248 … The wireless transmitter 242, the wireless receiver 244, and/or the antenna 246 may include multiple transmitters, multiple receivers, and/or multiple antennas, respectively, for sending and/or receiving, respectively, appropriate signals.”) “and a detection system installed in the vehicle, the detection system comprising a processor to: receive the detection request from the transceiver, wherein the detection request comprises non-standard features;” (Khosla, Paragraph 3, “An example monitor vehicle includes: a transceiver; a memory; and a processor, communicatively coupled to the memory and the transceiver, configured to: receive, via the transceiver, a trigger to provide a report relating to a target vehicle, the trigger including identifying information of the target vehicle; obtain first information wirelessly that at least partially identifies the target vehicle; determine that the first information corresponds to the identifying information of the target vehicle; and report, via the transceiver based on receipt of the trigger and determining that the first information corresponds to the identifying information of the target vehicle, second information indicating presence of the target vehicle and a location associated with the target vehicle.”; Paragraph 83, “The monitor-enabling/activation messages 815, 816 include identifying information for the target vehicle 730, and may include a vehicle ID, e.g., a temporary ID assigned to the target vehicle 730, and identifying information including one or more characteristics of the target vehicle 730 to assist in determining that a particular vehicle is the target vehicle. For example, referring also to FIG. 9, an example monitor-enabling/activation message 90 includes a vehicle ID field 910, and an identifying information field 920. The vehicle ID field 910 may be a temporary identity assigned to the target vehicle 730, e.g., randomly assigned and effective until monitoring is no longer desired, or a temporary ID corresponding to a BSM (basic safety message) broadcast by the target vehicle 730 using C-V2X technology, or another ID. The identifying information field 920 may include the license plate number (or partial license plate number) of the target vehicle 730 and/or may include the vehicle make (manufacturer), and possibly the vehicle model and/or color, of the target vehicle 730. Also or alternatively, the identifying information field 920 may include distinguishing markings (e.g., lettering on a side panel (e.g., a door) of the target vehicle 730 (such as a business name, phone number, etc.), damage to the target vehicle, etc.), or a sound associated with the target vehicle 730 (e.g., a sound associated with a particular make and/or model of motorcycle). 
Still other identifying information may also or alternatively be provided in the identifying information field 920.”)

“receive an image from a sensor; execute a search on the image using the non-standard features in response to receiving the detection request;” (Khosla, Paragraph 20, quoted above)

“and send the detection output corresponding to a detected target vehicle to the transceiver.” (Khosla, Paragraph 3, quoted above)

Regarding claim 4, Khosla teaches “The system of claim 1,” “wherein the search comprises a description-based search, wherein the processor is to execute the description-based search in response to detecting a text description in the detection request.” (Khosla, Paragraph 20, quoted in the rejection of claim 1 above; Figure 9 and Paragraph 83, quoted above)

Regarding claim 6, Khosla teaches “The system of claim 1,” “wherein the detection output further comprises support information.” (Khosla, Figure 10 shows the detection output with location and confidence.)

Regarding claim 8, Khosla teaches “The system of claim 1,” “wherein the sensor comprises a camera.” (Khosla, Paragraph 74, “Referring also to FIG. 5, a UE 500 includes a processor 510, a transceiver 520, and a memory 530 communicatively coupled to each other by a bus 540. The UE 500 may include one or more sensor(s) 570 (e.g., one or more cameras and/or one or more microphones and/or one or more ranging devices, and/or ultrasound sensors, etc.) connected to the bus 540. The UE 500 may include the components shown in FIG. 5. The UE 500 is a wireless communication device and is part of a monitor vehicle (e.g., car, truck, motorcycle, etc.) that is capable of monitoring another vehicle (e.g., analyzing one or more images of a target vehicle to determine and report one or more characteristics of the target vehicle, and/or receiving and reporting one or more messages from another monitor vehicle regarding the target vehicle, etc.). The UE 500 may include one or more other components such as any of those shown in FIG. 2 such that the UE 200 may be an example of the UE 500. For example, the processor 510 may include one or more of the components of the processor 210. The transceiver 520 may include one or more of the components of the transceiver 215, e.g., the wireless transmitter 242 and the antenna 246, or the wireless receiver 244 and the antenna 246, or the wireless transmitter 242, the wireless receiver 244, and the antenna 246. Also or alternatively, the transceiver 520 may include the wired transmitter 252 and/or the wired receiver 254. The memory 530 may be configured similarly to the memory 211, e.g., including software with processor-readable instructions configured to cause the processor 510 to perform functions.”)

Regarding claims 11 and 14, these claims recite a method with steps corresponding to the elements of the system recited in Claims 1 and 2. Therefore, the recited steps of these claims are mapped to the analogous elements in the corresponding system claims.

Regarding claim 12, Khosla teaches “The method of claim 11,” “wherein the search is executed locally via a processor of the vehicle.” (Khosla, Paragraph 3, quoted in the rejection of claim 1 above)

Regarding claims 17 and 20, these claims recite a system with elements corresponding to the elements of the system recited in Claims 1 and 2. Therefore, the recited elements of these claims are mapped to the analogous elements in the corresponding system claims.

Regarding claim 19, claim 19 recites a system with elements corresponding to the steps recited in Claim 12. Therefore, the recited elements of this claim are mapped to the analogous steps in the corresponding method claim. Additionally, the processor disclosed in the rejection of claim 12 amounts to the system of claim 19.
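Before turning to the obviousness rejections, it may help to restate the claim 1 architecture that the examiner mapped to Khosla above. The following Python sketch is a hypothetical illustration by the editor; the names and structure are invented for readability and appear in neither the application nor Khosla:

```python
from dataclasses import dataclass

@dataclass
class DetectionRequest:
    # "Non-standard features": distinguishing markings such as body damage,
    # door lettering, or stickers, rather than a standard license plate.
    nonstandard_features: list[str]
    text_description: str | None = None  # a text description triggers claim 4's search
    query_image: bytes | None = None     # an image would trigger claim 3's search

@dataclass
class DetectionOutput:
    matched: bool
    features_found: list[str]
    support_info: dict  # claim 6: supporting details such as location and confidence

def feature_present(image: bytes, feature: str) -> bool:
    """Stand-in for a real detector; claim 7 contemplates a trained neural network."""
    return False

def handle_request(request: DetectionRequest, image: bytes) -> DetectionOutput:
    """Claim 1 flow: search the sensor image for the requested non-standard features."""
    found = [f for f in request.nonstandard_features if feature_present(image, f)]
    confidence = len(found) / max(len(request.nonstandard_features), 1)
    return DetectionOutput(matched=bool(found), features_found=found,
                           support_info={"confidence": confidence})
```

The optional request fields mirror dependent claims 3 and 4, which the § 103 rejections below address.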
Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 2 is/are rejected under 35 U.S.C. 103 as being unpatentable over Khosla in view of Kerecsen (US20200294401A1).

Regarding claim 2, Khosla teaches “The system of claim 1,” Khosla does not disclose “wherein the vehicle is to receive the detection request within a geographical area of interest associated with the detection request.” Kerecsen discloses “wherein the vehicle is to receive the detection request within a geographical area of interest associated with the detection request.” (Kerecsen, Paragraph 181, “A system for documenting an accident that includes a vehicle that includes a transceiver device and a processing circuit is disclosed in U.S. Patent Application Publication No. 2015/0145695 to Hyde et al. entitled: “Systems and methods for automatically documenting an accident”, which is incorporated in its entirety for all purposes as if fully set forth herein. The processing circuit is configured to receive data from a collision detection device of the vehicle, determine, based on the received data, that an accident is impending or occurring involving the vehicle, generate a request for a nearby vehicle, and transmit, via the transceiver device, the request to the nearby vehicle. The request is for the nearby vehicle to illuminate a region associated with the accident, actively acquire data related to the accident, and record actively acquired data related to the accident.”) It would have been obvious to a person having ordinary skill in the art before the time of the effective filing date of the claimed invention of the instant application to use geographical-based detection requests, as taught by Kerecsen, as part of the detection requests of Khosla. The motivation for doing so would have been to avoid sending alerts to vehicles far away from the vehicle target of interest (i.e., notifying someone in Texas that there was just a hit and run in New Jersey). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Khosla with the above teaching of Kerecsen to fully disclose the invention of claim 2.
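Claim 2 adds a geographic gate on top of that flow: only vehicles inside the request's area of interest act on it. A short sketch (hypothetical; the coordinates and radius are made up) captures the examiner's Texas versus New Jersey example:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometers."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def in_area_of_interest(vehicle_pos, request_center, radius_km):
    """Only vehicles inside the request's geographic area run the search."""
    return haversine_km(*vehicle_pos, *request_center) <= radius_km

# A vehicle in Texas ignores a New Jersey hit-and-run alert:
print(in_area_of_interest((30.27, -97.74), (40.06, -74.41), radius_km=50))  # False
```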
Claim(s) 3 is/are rejected under 35 U.S.C. 103 as being unpatentable over Khosla in view of Gage (US20180272992A1).

Regarding claim 3, Khosla teaches “The system of claim 1,” While Khosla teaches an image-based search in response to the detection request (Khosla, Paragraph 20, quoted in the rejection of claim 1 above), Khosla does not expressly disclose that the detection request comprises an image of a target vehicle. Gage discloses transmission of image-based vehicle security alerts (Gage, Paragraphs 4 and 52, “An example of a surveillance system that actively classifies security risks and responds proportionally to the security risks is presented herein. In one embodiment, the surveillance system uses a plurality of sensors to detect surveillance events in a surrounding environment of the vehicle and to assess a threat type and a threat level associated with the surveillance events. Accordingly, the surveillance system can actively monitor inputs from the sensors to detect surveillance events that correlate with risks to the vehicle instead of blindly initiating an alarm whenever a sensor is activated.”; “In general, the responses include, for example, sounding an audible alarm, providing an image or video feed to a remote device (e.g., a smartphone or remote monitoring center), providing a user interface (e.g., web-based interface) to a remote device that is a query for additional inputs, logging data about the surveillance event, transmitting a communication to authorities (e.g., police) to alert the authorities, transmitting a beacon to nearby devices connected via the ad-hoc wireless network about the event, autonomously operating the vehicle 100 to drive away or to a pre-programmed location, and so on.”) It would have been obvious to a person having ordinary skill in the art before the time of the effective filing date of the claimed invention of the instant application to use images, as taught by Gage, as part of the detection requests of Khosla. The motivation for doing so would have been to provide additional information of the target vehicle (i.e., images of the target in addition to the described identifying information). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Khosla with the above teaching of Gage to fully disclose, “wherein the search comprises an imaging-based search, wherein the processor is to execute the imaging-based search in response to detecting an image in the detection request.”

Claim(s) 5 and 7 is/are rejected under 35 U.S.C. 103 as being unpatentable over Khosla in view of Wang (A Review of Vehicle Detection Techniques for Intelligent Vehicles).

Regarding claim 5, Khosla teaches “The system of claim 1,” While Khosla teaches the detection system and a plurality of input signal modalities (Khosla, Paragraph 74, quoted in the rejection of claim 8 above), Khosla does not expressly disclose “wherein the detection system comprises a plurality of pre-trained machine learning models, wherein each of the machine learning models is trained for a particular input signal modality.” Wang discloses “a plurality of pre-trained machine learning models, wherein each of the machine learning models is trained for a particular input signal modality.” (Wang, Section IV.B., Paragraph 1, “With the development and wide application of deep learning technology, researchers have found that the deep learning algorithm is better than the traditional algorithm for lidar vehicle detection. Deep learning can automatically learn vehicle detection features from the point cloud, which can obtain more abundant features and improve vehicle detection accuracy. Besides, the lidar vehicle detection method based on deep learning adopts an end-to-end structure to integrate feature extraction and bounding box regression into one network, which will effectively improve the real-time performance of vehicle detection. According to the principle of algorithms, lidar vehicle detection methods based on deep learning can be divided into four categories, as shown in Fig. S 7 of the supplementary material.”; Section III.B., “With the development of radar technology and the opening of frequency limitations, image-level radar has been gradually applied in the field of intelligent vehicles [90]. The imaging radar converts radar signals into images for output. There are four main types of radar image formats: radar projection maps, range-Doppler-azimuth maps, point cloud maps, and SAR maps, as shown in Fig. S 6 of the supplementary material. The reflection intensity map can be obtained by projecting the reflection intensity of the radar detection target onto the image. If we use the image method to express the radar range, Doppler, and azimuth, it is the radar range-Doppler-azimuth map. Radar detection information is converted into images, and vehicle targets can be detected using the supervised learning method. Common vehicle detection models include CNN [91], [92], FCN networks [93], and LSTM [94]. High-frequency radar can transform radar signals into point clouds, as shown in Fig. S 6(c). Vehicle detection methods based on point clouds can be divided into two categories: machine learning and deep learning.”) It would have been obvious to a person having ordinary skill in the art before the time of the effective filing date of the claimed invention of the instant application to use the multiple pre-trained machine learning algorithms of Wang for the multiple input signal modalities of Khosla. The motivation for doing so would have been to enable automatic interpretation of the plurality of input signals. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Khosla with the above teaching of Wang to fully disclose, “wherein the detection system comprises a plurality of pre-trained machine learning models, wherein each of the machine learning models is trained for a particular input signal modality.”
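The claim 5 arrangement, one pre-trained model per input signal modality, amounts to a dispatch table keyed on modality. A hedged sketch follows; the loader and model names are placeholders, not anything disclosed by Khosla or Wang:

```python
# One pre-trained model per input signal modality, looked up by modality name.
# load_model() is a placeholder; a real system would load trained weights here.
def load_model(name: str):
    def model(signal):
        return []  # stand-in: a trained model would return detected features
    return model

MODELS_BY_MODALITY = {
    "camera": load_model("camera_detector"),
    "lidar":  load_model("lidar_detector"),
    "radar":  load_model("radar_detector"),
    "audio":  load_model("audio_classifier"),
}

def detect(signal, modality: str):
    """Route each incoming signal to the model trained for its modality."""
    return MODELS_BY_MODALITY[modality](signal)
```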
Regarding claim 7, Khosla teaches “The system of claim 1,” While Khosla teaches the detection of a target vehicle based on non-standard features (see claim 1 rejection), Khosla does not disclose using a neural network for the feature detection. Wang discloses neural networks trained for vehicle feature detection (Wang, Section IV.B., Paragraph 1, and Section III.B., quoted in the rejection of claim 5 above). It would have been obvious to a person having ordinary skill in the art before the time of the effective filing date of the claimed invention of the instant application to use the neural network of Wang for the feature detection of Khosla. The motivation for doing so would have been to automate and expedite detection. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Khosla with the above teaching of Wang to fully disclose, “wherein the detection system comprises a neural network trained to detect target vehicles based on non-standard features.”

Claim(s) 9 and 10 is/are rejected under 35 U.S.C. 103 as being unpatentable over Khosla in view of Lund (WO 2020214294 A1).

Regarding claim 9, Khosla teaches “The system of claim 1,” While Khosla describes the use of ranging sensors (Khosla, Paragraph 74, quoted in the rejection of claim 8 above), Khosla does not expressly disclose “comprising sensor comprises a radio detection and ranging (RADAR) sensor.” Lund discloses “comprising sensor comprises a radio detection and ranging (RADAR) sensor.” (Lund, Paragraph 3, “Autonomous driving systems (ADS) may be fully autonomous or partially autonomous. Partially autonomous driving systems include advanced driver-assistance systems (ADAS). ADS based vehicles, which are becoming increasingly prevalent, may use sensors to determine the presence of nearby vehicles. For example, an ego vehicle may use ranging sensors like radar (Radio Detection and Ranging) or lidar (Light Detection and Ranging) input to detect nearby vehicles. Radar refers to the use of radio waves to determine the position and/or velocity of objects. Lidar refers to remote sensing technology that measures distance by illuminating a target (e.g. with a laser or other light) and analyzing the reflected light. Conventionally, ranging sensors coupled to an ADS may detect and/or indicate the presence of proximate vehicles based on sensory input. In conventional systems, the detected vehicles may be displayed as moving rectangles, blobs, or segmented objects, and drivers may find it difficult to correlate displayed information with vehicles seen on the roadway thereby limiting its utility. Therefore, techniques to provide meaningful and actionable vehicle information are desirable.”) It would have been obvious to a person having ordinary skill in the art before the time of the effective filing date of the claimed invention of the instant application to use the RADAR of Lund as the one or more ranging devices of Khosla. The motivation for doing so would have been to increase sensing capabilities. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Khosla with the above teaching of Lund to fully disclose the invention of claim 9.

Regarding claim 10, Khosla teaches “The system of claim 1,” While Khosla describes the use of ranging sensors (Khosla, Paragraph 74, quoted in the rejection of claim 8 above), Khosla does not expressly disclose “wherein the sensor comprises a light detection and ranging (LiDAR) sensor.” Lund discloses “wherein the sensor comprises a light detection and ranging (LiDAR) sensor.” (Lund, Paragraph 3, quoted in the rejection of claim 9 above). It would have been obvious to a person having ordinary skill in the art before the time of the effective filing date of the claimed invention of the instant application to use the LIDAR of Lund as the one or more ranging devices of Khosla. The motivation for doing so would have been to increase sensing capabilities. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Khosla with the above teaching of Lund to fully disclose the invention of claim 10.

Claim(s) 13 and 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Khosla.

Regarding claim 13, Khosla teaches “The method of claim 11,” While a first embodiment of Khosla discloses a vehicle processor for image receiving and searching (see claim 1 rejection), this embodiment does not expressly disclose the use of a cloud processor for the image receiving and searching. However, in a second embodiment, Khosla does disclose the use of a cloud processor for performing monitoring (Khosla, Embodiment 2, Paragraph 97, “Monitoring, e.g., rideshare monitoring, may be initiated at stage 810 by one or more of a variety of entities, e.g., a driver of the rideshare vehicle (at sub-stage 813), autonomously as a service provided by a rideshare company (e.g., by one or more of the monitor-enabling/activation messages 815-818), a rider using a mobile device to capture an image of the rideshare vehicle, including license plate, and uploading the image to the server 400, and/or one or more other actions. For example, a requester of the rideshare service (e.g., a rider or a person arranging the ride for the rider (e.g., a guardian, a parent, a friend, etc.)) may request a rideshare using a rideshare application on a mobile device. A mobile device (here the monitor device 801) of a rider may correlate a dynamic location of the mobile device (e.g., determined using a GNSS) with a dynamic location provided by the rideshare application to determine that the mobile device is in the rideshare vehicle. If the rideshare vehicle location and the mobile device location deviate by more than a threshold distance (e.g., 100 m) before termination of the rideshare, then the mobile device may provide an alert, e.g., to the server 400, to the rideshare service, to other vehicles (e.g., through an SDSM), etc. The mobile device of the rider may provide a vehicle report (e.g., the vehicle report 838) indicating the location of the mobile device and identifying the rideshare vehicle (e.g., vehicle ID, vehicle make/model/color, license plate number, etc.). At sub-stage 813 the target vehicle 730 may provide information to the server 400 (e.g., to the cloud) to initiate the monitoring, e.g., with the vehicle report 837 including a vehicle ID (e.g., vehicle make/model/color, temporary ID from a BSM, and/or license plate number).”) It would have been obvious to a person having ordinary skill in the art before the time of the effective filing date of the claimed invention of the instant application to use the cloud processor of Khosla in embodiment 2 for performing the image receiving and searching of Khosla in embodiment 1. The motivation for doing so would have been to increase scalability and offload processing from local vehicles. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to use the cloud processor of embodiment 2 of Khosla for the image receiving and searching of embodiment 1 of Khosla to fully disclose, “wherein the image is received at a cloud service and the search is executed via the cloud service.”

Regarding claim 18, claim 18 recites a system with elements corresponding to the steps recited in Claim 13. Therefore, the recited elements of this claim are mapped to the analogous steps in the corresponding method claim. The rationale and motivation to combine the two embodiments of Khosla apply to this claim. Additionally, the cloud server disclosed in the rejection of claim 13 amounts to the system/cloud computing processor of claim 18.
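Claims 12, 13, and 18 differ mainly in where the search runs. A minimal sketch of that local versus cloud split (the endpoint URL and function names are illustrative only):

```python
def run_search_local(image: bytes, features: list[str]) -> list[str]:
    """Claims 11-12: the search executes on the vehicle's own processor."""
    return []  # placeholder for the on-board matcher

def run_search_cloud(image: bytes, features: list[str], post) -> list[str]:
    """Claims 13/18: the image is received at a cloud service, which runs the search."""
    return post("https://cloud.example/search", image=image, features=features)

def execute_search(image: bytes, features: list[str], mode: str = "local", post=None):
    # Embodiment 1 (local processor) vs. embodiment 2 (cloud processor) of Khosla,
    # as the examiner combines them for claim 13.
    if mode == "cloud":
        return run_search_cloud(image, features, post)
    return run_search_local(image, features)
```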
Claim(s) 15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Khosla in view of Wang further in view of Gage.

Regarding claim 15, Khosla teaches “The method of claim 11,” While Khosla teaches executing an image-based search in response to a detection request (Khosla, Paragraph 20, quoted in the rejection of claim 1 above), Khosla does not expressly disclose using a machine learning model for the search, or that the detection request comprises an image. Wang discloses machine learning for feature detection in images of vehicles (Wang, Section IV.B., Paragraph 1, and Section III.B., quoted in the rejection of claim 5 above). Gage discloses transmission of an image in a vehicle security alert (Gage, Paragraphs 4 and 52, quoted in the rejection of claim 3 above). It would have been obvious to a person having ordinary skill in the art before the time of the effective filing date of the claimed invention of the instant application to use the machine learning of Wang for the image-based feature search of Khosla, and incorporate an image, as taught by Gage, as part of the detection request of Khosla. The motivation for incorporating the teaching of Wang would have been to automate and improve accuracy. The motivation for incorporating the teaching of Gage would have been to increase the amount of information sent to the vehicle to assist in identifying the vehicle of interest.
Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Khosla …

Prosecution Timeline

Sep 25, 2023
Application Filed
Nov 20, 2025
Non-Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592054
LOW-LIGHT VIDEO PROCESSING METHOD, DEVICE AND STORAGE MEDIUM
2y 5m to grant • Granted Mar 31, 2026
Patent 12586245
ROBUST LIDAR-TO-CAMERA SENSOR ALIGNMENT
2y 5m to grant • Granted Mar 24, 2026
Patent 12566954
SOLVING MULTIPLE TASKS SIMULTANEOUSLY USING CAPSULE NEURAL NETWORKS
2y 5m to grant • Granted Mar 03, 2026
Patent 12555394
IMAGE PROCESSING APPARATUS, METHOD, AND STORAGE MEDIUM FOR GENERATING DATA BASED ON A CAPTURED IMAGE
2y 5m to grant • Granted Feb 17, 2026
Patent 12547658
RETRIEVING DIGITAL IMAGES IN RESPONSE TO SEARCH QUERIES FOR SEARCH-DRIVEN IMAGE EDITING
2y 5m to grant • Granted Feb 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 74%
Grant Probability with Interview: 99% (+50.6%)
Median Time to Grant: 3y 5m
PTA Risk: Low
Based on 62 resolved cases by this examiner. Grant probability derived from career allow rate.
