Prosecution Insights
Last updated: April 19, 2026
Application No. 18/563,942

OBJECT DETECTION APPARATUS AND OBJECT DETECTION METHOD

Status: Non-Final OA (§103)
Filed: Nov 24, 2023
Examiner: ROBERTS, RACHEL L
Art Unit: 2674
Tech Center: 2600 — Communications
Assignee: Mitsubishi Electric Corporation
OA Round: 1 (Non-Final)
Outlook: Favorable
Grant Probability: 90% (99% with interview)
Expected OA Rounds: 1-2
Time to Grant: 2y 10m

Examiner Intelligence

Career Allow Rate: 90% (17 granted / 19 resolved), +27.5% vs Tech Center average (above average)
Interview Lift: +14.3% (moderate), measured across resolved cases with interview
Typical Timeline: 2y 10m average prosecution
Career History: 54 total applications across all art units, 35 currently pending
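
As a rough sanity check, the headline figures above can be reproduced from the raw counts. The sketch below is a back-of-envelope reconstruction in Python; the dashboard's actual methodology is not published on this page, so the combination rule (additive percentage-point lift, capped at 99%) is our assumption, not the tool's documented formula.

```python
# Back-of-envelope reconstruction of the dashboard figures above.
# The combination rule is an assumption; the dashboard's true formula may differ.
granted, resolved = 17, 19        # examiner's career counts (from this page)
allow_rate = granted / resolved   # 0.8947... -> displayed rounded to 90%

interview_lift = 0.143            # +14.3% interview lift (from this page)

# Assumed rule: add the lift as percentage points and cap at 99%.
with_interview = min(allow_rate + interview_lift, 0.99)

print(f"Career allow rate: {allow_rate:.1%}")      # 89.5%
print(f"With interview:    {with_interview:.1%}")  # 99.0%
```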

Statute-Specific Performance

§101: 12.1% (-27.9% vs TC avg)
§103: 65.1% (+25.1% vs TC avg)
§102: 7.9% (-32.1% vs TC avg)
§112: 12.1% (-27.9% vs TC avg)
Tech Center averages are estimates. Based on career data from 19 resolved cases.

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged that this application is a National Stage application of PCT/JP2021/021004. Priority to Japan with a priority date of 06/02/2021 is acknowledged under 35 USC 119(e) and 37 CFR 1.78.

Information Disclosure Statement

The IDSs dated 11/24/2023 and 05/10/2024 have been considered and placed in the application file.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-5 are rejected under 35 U.S.C. 103 as unpatentable over Kazumi et al. (JP 2009186260 A, using the translation from Espacenet, figures translated using Google image translation; hereafter referred to as Kazumi) in view of Sengupta et al. (Sengupta, Arindam, Feng Jin, and Siyang Cao, "A DNN-LSTM based target tracking approach using mmWave radar and camera sensor fusion," 2019 IEEE National Aerospace and Electronics Conference (NAECON), IEEE, 2019; hereafter referred to as Sengupta).
Regarding Claim 1, Kazumi teaches an object detection apparatus (Kazumi ¶0001, ¶0010, Fig 1 100, ¶0085 disclose an object detection apparatus) comprising: a radar (Kazumi ¶0006, ¶0010, ¶0015, Fig 2, 20 disclose a radar) to emit an electromagnetic wave toward an object (Kazumi ¶0015 discloses a radar distance measuring device that measures distance to the target using an electromagnetic wave) and receive a reflected signal from the object (Kazumi ¶0015 discloses the electromagnetic wave being directed toward the target object and the wave being reflected and the signal received and analyzed); a signal processor to compute (Kazumi ¶0010, ¶0018, ¶0019 disclose a control unit consisting of combined operation circuits that perform calculations) radar position data (Kazumi ¶0010, ¶0017, ¶0018, ¶0019 disclose the radar distance measuring device used to calculate relative position) on a basis of the reflected signal (Kazumi ¶0015 discloses the electromagnetic wave being directed toward the target object and the wave being reflected and the signal received and analyzed), the radar position data indicating a position of the object (Kazumi ¶0010, ¶0017, ¶0018, ¶0019 disclose the radar distance measuring device used to calculate relative position); a camera (Kazumi ¶0002, ¶0010, Fig 1, 10 disclose a camera) to obtain object image information (Kazumi ¶0002, ¶0021 disclose an object being extracted based on information of an image output from a camera) by capturing an image of the object (Kazumi ¶0010 discloses the camera capturing the surroundings of the vehicle); an image processor (Kazumi ¶0085, ¶0099 disclose an image processing apparatus and procedure) to compute camera velocity data (Kazumi ¶0088 discloses calculating the speed of the object based on the image) on a basis of the object image information (Kazumi ¶0002, ¶0021 disclose an object being extracted based on information of an image output from a camera), the camera velocity data (Kazumi ¶0087 discloses calculating edge movement information from the image) indicating a velocity of the object (Kazumi ¶0088 discloses calculating the speed of the object based on the image); and to output, to an external device (Kazumi ¶0009, Fig 1 300 discloses an external output device), the radar position data (Kazumi ¶0010, ¶0017, ¶0018, ¶0019 disclose the radar distance measuring device used to calculate relative position) of a first frame (Kazumi ¶0024-¶0026 disclose the frame being identified by the speed and direction of the pixel) as first detected position data and first detected velocity data (Kazumi ¶0026 discloses how the first count for movement and speed is determined and increased upon in subsequent pixels; ¶0010, ¶0017, ¶0018, ¶0019 disclose the radar distance measuring device used to calculate relative position), the first detected position data indicating the position of the object for the first frame (Kazumi ¶0026 discloses how the first count for movement and speed is determined and associated with the first pixel), the first detected velocity data indicating the velocity of the object for the first frame (Kazumi ¶0026 discloses how the first count for movement and speed is determined and associated with the first pixel), wherein … includes a data store to store (Kazumi ¶0024 discloses moving speed and direction are stored in association with the frame identifier) the first detected position data (Kazumi ¶0026 discloses how the first count for movement and speed is determined and increased upon in subsequent pixels; ¶0010, ¶0017, ¶0018, ¶0019 disclose the radar distance measuring device used to calculate relative position), and when the radar position data (Kazumi ¶0010, ¶0017, ¶0018, ¶0019 disclose the radar distance measuring device used to calculate relative position) of a second frame following the first frame (Kazumi ¶0026 discloses the subsequent frames after the first frame being n+1) … for the second frame (Kazumi ¶0026 discloses the subsequent frames after the first frame being n+1) … for the second frame (Kazumi ¶0026 discloses the subsequent frames after the first frame being n+1) on a basis of the first detected position data (Kazumi ¶0026 discloses how the first count for movement and speed is determined and increased upon in subsequent pixels; ¶0010, ¶0017, ¶0018, ¶0019 disclose the radar distance measuring device used to calculate relative position) and the camera velocity data (Kazumi ¶0087 discloses calculating edge movement information from the image) obtained for the second frame (Kazumi ¶0026 discloses the subsequent frames after the first frame being n+1) … to the external device (Kazumi ¶0009, Fig 1 300 discloses an external output device).

Kazumi does not explicitly disclose: and radar velocity data, the radar velocity data indicating a velocity of the object; a fusion processor; and the radar velocity data; detected velocity data; the fusion processor; and the radar velocity data, are lost; the fusion processor generates second detected position data indicating the position of the object and second detected velocity data indicating the velocity of the object and outputs the second detected position data and the second detected velocity data. Sengupta is in the same field of automated object detection for vehicles.
Further, Sengupta teaches radar velocity data (Sengupta Pg 2 Col 1 ¶04-¶05 discloses determining velocity based on the radar signal), the radar velocity data indicating a velocity of the object (Sengupta Pg 2 Col 1 ¶04-¶05 discloses determining velocity based on the radar signal); a fusion processor (Sengupta Fig 2, Pg 3 Col 1 ¶02 discloses the radar and camera data being fed to the box where the data is fused); and the radar velocity data (Sengupta Pg 2 Col 1 ¶04-¶05 discloses determining velocity based on the radar signal), detected velocity data (Sengupta Pg 2 Col 1 ¶04-¶05 discloses determining velocity based on the radar signal); the fusion processor (Sengupta Fig 2, Pg 3 Col 1 ¶02 discloses the radar and camera data being fed to the box where the data is fused); and the radar velocity data (Sengupta Pg 2 Col 1 ¶04-¶05 discloses determining velocity based on the radar signal), are lost (Sengupta Pg 3 Col 1 ¶03 discloses continuing to make detections even after one sensor fails); the fusion processor (Sengupta Fig 2, Pg 3 Col 1 ¶02 discloses the radar and camera data being fed to the box where the data is fused) generates second detected position data (Sengupta Pg 6 Col 1 ¶03 and Pg 2 Col 1 ¶02 disclose generating the target trajectory based on previous frames) indicating the position of the object (Sengupta Pg 6 Col 1 ¶03 and Pg 2 Col 1 ¶02 disclose generating the position of the target trajectory based on previous frames) and second detected velocity data (Sengupta Pg 2 Col 1 ¶04 discloses solving for the velocity of the object) indicating the velocity of the object (Sengupta Pg 2 Col 1 ¶04 discloses solving for the velocity of the object) and outputs the second detected position data (Sengupta Pg 6 Col 1 ¶03 and Pg 2 Col 1 ¶02 disclose generating the position of the target trajectory based on previous frames) and the second detected velocity data (Sengupta Pg 2 Col 1 ¶04 discloses solving for the velocity of the object). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Kazumi by including the use of radar velocity data in the fusion of the radar and camera data to generate second detected position data and second detected velocity data, as taught by Sengupta, to obtain an invention that can predict the movement and speed of the target object based on previously known data from the radar and camera; one of ordinary skill in the art would be motivated to combine the references since, per Sengupta, there has been interest in fusing data from these two sources to obtain an accurate position of a target and subsequently tracking its trajectory (Sengupta Pg 1, Introduction). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
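
For orientation, the fallback behavior at the heart of Claim 1 (store the first frame's detected position, then, when the second frame's radar return is lost, generate a second position from the stored position and the camera-derived velocity) can be sketched in a few lines. This is a hypothetical illustration of the claimed logic, not code from Kazumi, Sengupta, or the application; the names, the constant-velocity model, and the frame interval are our own assumptions.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Detection:
    position: Tuple[float, float]  # (x, y), meters
    velocity: Tuple[float, float]  # (vx, vy), meters/second

def fuse_frame(stored: Detection,
               radar_position: Optional[Tuple[float, float]],
               camera_velocity: Tuple[float, float],
               dt: float) -> Detection:
    """Hypothetical fusion fallback: pass radar data through when present;
    on radar dropout, dead-reckon from the stored position using the
    camera-derived velocity."""
    if radar_position is not None:
        # Normal path: the radar position for this frame is output directly.
        return Detection(position=radar_position, velocity=camera_velocity)
    # Radar return lost: generate the second detected position from the
    # first detected position plus the camera velocity for this frame.
    x = stored.position[0] + camera_velocity[0] * dt
    y = stored.position[1] + camera_velocity[1] * dt
    return Detection(position=(x, y), velocity=camera_velocity)

# Example: the second frame's radar return is lost.
first = Detection(position=(10.0, 2.0), velocity=(-1.0, 0.0))
second = fuse_frame(first, radar_position=None,
                    camera_velocity=(-1.0, 0.1), dt=0.1)
print(second.position)  # (9.9, 2.01)
```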
Regarding Claim 2, Kazumi in view of Sengupta teaches the object detection apparatus (Kazumi ¶0001, ¶0010, Fig 1 100, ¶0085 disclose an object detection apparatus) according to claim 1, wherein the image processor (Kazumi ¶0085, ¶0099 disclose an image processing apparatus and procedure) computes direction data (Kazumi ¶0024-¶0025, ¶0031 disclose the movement information calculation unit calculating the moving speed and moving direction of the object) on the basis of the object image information (Kazumi ¶0002, ¶0021 disclose an object being extracted based on information of an image output from a camera), the direction data indicating a direction where the object is located (Kazumi ¶0024-¶0025, ¶0031 disclose the movement information calculation unit calculating the moving speed and moving direction of the object, and ¶0083 and ¶0100 disclose where the target object is located), the fusion processor (Sengupta Fig 2, Pg 3 Col 1 ¶02 discloses the radar and camera data being fed to the box where the data is fused) includes a sameness ascertainment circuitry (Kazumi ¶0092 discloses a same object determination unit) to perform object sameness determination (Kazumi ¶0092 discloses a same object determination unit to determine if the object detected is the same reference object candidate) of whether the radar (Kazumi ¶0006, ¶0010, ¶0015, Fig 2, 20 disclose a radar) and the camera (Kazumi ¶0002, ¶0021 disclose an object being extracted based on information of an image output from a camera) have detected the same object (Kazumi ¶0105 discloses determining if the reference object is the same), and the sameness ascertainment circuitry performs the object sameness determination (Kazumi ¶0092 discloses a same object determination unit) on the basis of (Kazumi ¶0106 discloses the variables used to determine if the object is the same) the radar position data (Kazumi ¶0010, ¶0017, ¶0018, ¶0019 disclose the radar distance measuring device used to calculate relative position), the radar velocity data (Sengupta Pg 2 Col 1 ¶04-¶05 discloses determining velocity based on the radar signal), the camera velocity data (Kazumi ¶0087 discloses calculating edge movement information from the image), and the direction data (Kazumi ¶0024-¶0025, ¶0031 disclose the movement information calculation unit calculating the moving speed and moving direction of the object). See rationale for Claim 1 (its parent claim).
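
The object sameness determination recited in Claims 2 and 4 is, in essence, a track-association test: do the radar measurements and the camera measurements describe one object? A common realization of such a test is simple threshold gating, sketched below. All thresholds and names here are invented for illustration; this is not the specific scheme of Kazumi ¶0092/¶0106 or of Sengupta.

```python
import math

def same_object(radar_pos, radar_vel, camera_vel, camera_dir,
                dir_gate_rad=0.05, vel_gate_ms=1.0):
    """Hypothetical sameness gate: the radar and the camera are deemed to
    have detected the same object if the bearing implied by the radar
    position matches the camera's direction data, and the two velocity
    estimates agree, each within a fixed threshold."""
    radar_dir = math.atan2(radar_pos[1], radar_pos[0])  # bearing from radar (x, y)
    dir_ok = abs(radar_dir - camera_dir) <= dir_gate_rad
    vel_ok = abs(radar_vel - camera_vel) <= vel_gate_ms
    return dir_ok and vel_ok

# Example: bearings differ by ~0.01 rad, velocities by 0.5 m/s -> same object.
print(same_object(radar_pos=(20.0, 1.0), radar_vel=-3.0,
                  camera_vel=-2.5, camera_dir=math.atan2(1.2, 20.0)))  # True
```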
Regarding Claim 3, Kazumi teaches an object detection apparatus (Kazumi ¶0001, ¶0010, Fig 1 100, ¶0085 disclose an object detection apparatus) comprising: a radar (Kazumi ¶0006, ¶0010, ¶0015, Fig 2, 20 disclose a radar) to emit an electromagnetic wave toward an object (Kazumi ¶0015 discloses a radar distance measuring device that measures distance to the target using an electromagnetic wave) and receive a reflected signal from the object (Kazumi ¶0015 discloses the electromagnetic wave being directed toward the target object and the wave being reflected and the signal received and analyzed); a signal processor to compute (Kazumi ¶0010, ¶0018, ¶0019 disclose a control unit consisting of combined operation circuits that perform calculations) radar position data (Kazumi ¶0010, ¶0017, ¶0018, ¶0019 disclose the radar distance measuring device used to calculate relative position) on a basis of the reflected signal (Kazumi ¶0015 discloses the electromagnetic wave being directed toward the target object and the wave being reflected and the signal received and analyzed), the radar position data indicating a position of the object (Kazumi ¶0010, ¶0017, ¶0018, ¶0019 disclose the radar distance measuring device used to calculate relative position); a camera (Kazumi ¶0002, ¶0010, Fig 1, 10 disclose a camera) to obtain object image information (Kazumi ¶0002, ¶0021 disclose an object being extracted based on information of an image output from a camera) by capturing an image of the object (Kazumi ¶0010 discloses the camera capturing the surroundings of the vehicle); an image processor (Kazumi ¶0085, ¶0099 disclose an image processing apparatus and procedure) to compute camera velocity data (Kazumi ¶0088 discloses calculating the speed of the object based on the image) and camera position data (Kazumi ¶0017-¶0018 disclose the determination of the relative positional relationship between the target object and the reference object based on the captured image of the camera) on a basis of the object image information (Kazumi ¶0002, ¶0021 disclose an object being extracted based on information of an image output from a camera), the camera velocity data (Kazumi ¶0087 discloses calculating edge movement information from the image) indicating a velocity of the object (Kazumi ¶0088 discloses calculating the speed of the object based on the image), the camera position data (Kazumi ¶0017-¶0018 disclose the determination of the relative positional relationship between the target object and the reference object based on the captured image of the camera) indicating a position of the object (Kazumi ¶0018-¶0019 disclose the captured image from the camera being used to determine the position of the target object in relation to the vehicle); to output, to an external device (Kazumi ¶0009, Fig 1 300 discloses an external output device), the radar position data (Kazumi ¶0010, ¶0017, ¶0018, ¶0019 disclose the radar distance measuring device used to calculate relative position) of a first frame (Kazumi ¶0024-¶0026 disclose the frame being identified by the speed and direction of the pixel) as first detected position data and first detected velocity data (Kazumi ¶0026 discloses how the first count for movement and speed is determined and increased upon in subsequent pixels; ¶0010, ¶0017, ¶0018, ¶0019 disclose the radar distance measuring device used to calculate relative position), the first detected position data indicating the position of the object for the first frame (Kazumi ¶0026 discloses how the first count for movement and speed is determined and associated with the first pixel), the first detected velocity data indicating the velocity of the object for the first frame (Kazumi ¶0026 discloses how the first count for movement and speed is determined and associated with the first pixel), wherein when the radar position data (Kazumi ¶0010, ¶0017, ¶0018, ¶0019 disclose the radar distance measuring device used to calculate relative position) … following the first frame (Kazumi ¶0026 discloses the subsequent frames after the first frame being n+1) … outputs, to the external device (Kazumi ¶0009, Fig 1 300 discloses an external output device), the camera velocity data (Kazumi ¶0087 discloses calculating edge movement information from the image) and the camera position data (Kazumi ¶0017-¶0018 disclose the determination of the relative positional relationship between the target object and the reference object based on the captured image of the camera) of the second frame (Kazumi ¶0026 discloses the subsequent frames after the first frame being n+1) … for the second frame (Kazumi ¶0026 discloses the subsequent frames after the first frame being n+1) … for the second frame (Kazumi ¶0026 discloses the subsequent frames after the first frame being n+1).

Kazumi does not explicitly disclose: and radar velocity data, the radar velocity data indicating a velocity of the object; a fusion processor; and the radar velocity data; detected velocity data; and the radar velocity data, are lost; the fusion processor; as second detected position data and second detected velocity data, the second detected position data indicating the position of the object, the second detected velocity data indicating the velocity of the object. Sengupta is in the same field of automated object detection for vehicles.
Further, Sengupta teaches radar velocity data (Sengupta Pg 2 Col 1 ¶04-¶05 discloses determining velocity based on the radar signal), the radar velocity data indicating a velocity of the object (Sengupta Pg 2 Col 1 ¶04-¶05 discloses determining velocity based on the radar signal); a fusion processor (Sengupta Fig 2, Pg 3 Col 1 ¶02 discloses the radar and camera data being fed to the box where the data is fused); and the radar velocity data (Sengupta Pg 2 Col 1 ¶04-¶05 discloses determining velocity based on the radar signal), detected velocity data (Sengupta Pg 2 Col 1 ¶04-¶05 discloses determining velocity based on the radar signal); and the radar velocity data (Sengupta Pg 2 Col 1 ¶04-¶05 discloses determining velocity based on the radar signal), are lost (Sengupta Pg 3 Col 1 ¶03 discloses continuing to make detections even after one sensor fails); the fusion processor (Sengupta Fig 2, Pg 3 Col 1 ¶02 discloses the radar and camera data being fed to the box where the data is fused); as second detected position data (Sengupta Pg 6 Col 1 ¶03 and Pg 2 Col 1 ¶02 disclose generating the position of the target trajectory based on previous frames) and second detected velocity data (Sengupta Pg 2 Col 1 ¶04 discloses solving for the velocity of the object), the second detected position data (Sengupta Pg 6 Col 1 ¶03 and Pg 2 Col 1 ¶02 disclose generating the position of the target trajectory based on previous frames) indicating the position of the object (Sengupta Pg 6 Col 1 ¶03 and Pg 2 Col 1 ¶02 disclose generating the position of the target trajectory based on previous frames), the second detected velocity data (Sengupta Pg 2 Col 1 ¶04 discloses solving for the velocity of the object) indicating the velocity of the object (Sengupta Pg 2 Col 1 ¶04 discloses solving for the velocity of the object). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Kazumi by including the use of radar velocity data in the fusion of the radar and camera data to generate second detected position data and second detected velocity data, as taught by Sengupta, to obtain an invention that can predict the movement and speed of the target object based on known data from the radar and camera; one of ordinary skill in the art would be motivated to combine the references since, per Sengupta, there has been interest in fusing data from these two sources to obtain an accurate position of a target and subsequently tracking its trajectory (Sengupta Pg 1, Introduction). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
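
Claim 3's fallback differs from Claim 1's: instead of extrapolating from a stored first-frame position, the camera's own position and velocity data for the second frame are output directly as the second detected data when the radar return is lost. A minimal sketch under the same illustrative assumptions as above (all names hypothetical):

```python
def fuse_frame_claim3(radar_position, camera_position, camera_velocity):
    """Hypothetical illustration of the Claim 3 variant: on radar dropout
    (radar_position is None), the camera position and camera velocity for
    the current frame pass through as the second detected data."""
    if radar_position is not None:
        return radar_position, camera_velocity  # normal radar-available path
    return camera_position, camera_velocity     # fallback: camera data passes through

print(fuse_frame_claim3(None, (9.8, 2.1), (-1.0, 0.1)))  # ((9.8, 2.1), (-1.0, 0.1))
```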
Regarding Claim 4, Kazumi in view of Sengupta teaches the object detection apparatus (Kazumi ¶0001, ¶0010, Fig 1 100, ¶0085 disclose an object detection apparatus) according to claim 3, wherein the fusion processor (Sengupta Fig 2, Pg 3 Col 1 ¶02 discloses the radar and camera data being fed to the box where the data is fused) includes a sameness ascertainment circuitry (Kazumi ¶0092 discloses a same object determination unit) to perform object sameness determination (Kazumi ¶0092 discloses a same object determination unit to determine if the object detected is the same reference object candidate) of whether the radar (Kazumi ¶0006, ¶0010, ¶0015, Fig 2, 20 disclose a radar) and the camera (Kazumi ¶0002, ¶0021 disclose an object being extracted based on information of an image output from a camera) have detected the same object (Kazumi ¶0105 discloses determining if the reference object is the same), and the sameness ascertainment circuitry performs the object sameness determination (Kazumi ¶0092 discloses a same object determination unit) on the basis of (Kazumi ¶0106 discloses the variables used to determine if the object is the same) the radar position data (Kazumi ¶0010, ¶0017, ¶0018, ¶0019 disclose the radar distance measuring device used to calculate relative position), the radar velocity data (Sengupta Pg 2 Col 1 ¶04-¶05 discloses determining velocity based on the radar signal), the camera velocity data (Kazumi ¶0087 discloses calculating edge movement information from the image), and the camera position data (Kazumi ¶0017-¶0018 disclose the determination of the relative positional relationship between the target object and the reference object based on the captured image of the camera). See rationale for Claim 1.

Regarding Claim 5, Kazumi teaches an object detection method (Kazumi ¶0001, ¶0018, ¶0028 disclose a method for object detection and distance measuring) comprising: computing radar position data (Kazumi ¶0010, ¶0017, ¶0018, ¶0019 disclose the radar distance measuring device used to calculate relative position) on a basis of a reflected signal from an object (Kazumi ¶0015 discloses the electromagnetic wave being directed toward the target object and the wave being reflected and the signal received and analyzed), the radar position data indicating a position of the object (Kazumi ¶0010, ¶0017, ¶0018, ¶0019 disclose the radar distance measuring device used to calculate relative position); computing camera velocity data (Kazumi ¶0087 discloses calculating edge movement information from the image) on a basis of the object image information (Kazumi ¶0002, ¶0021 disclose an object being extracted based on information of an image output from a camera), the camera velocity data (Kazumi ¶0087 discloses calculating edge movement information from the image) indicating a velocity of the object (Kazumi ¶0088 discloses calculating the speed of the object based on the image); and outputting, to an external device (Kazumi ¶0009, Fig 1 300 discloses an external output device), the radar position data (Kazumi ¶0010, ¶0017, ¶0018, ¶0019 disclose the radar distance measuring device used to calculate relative position) of a first frame (Kazumi ¶0024-¶0026 disclose the frame being identified by the speed and direction of the pixel) as first detected position data and first detected velocity data (Kazumi ¶0026 discloses how the first count for movement and speed is determined and increased upon in subsequent pixels; ¶0010, ¶0017, ¶0018, ¶0019 disclose the radar distance measuring device used to calculate relative position), the first detected position data indicating the position of the object for the first frame (Kazumi ¶0026 discloses how the first count for movement and speed is determined and associated with the first pixel), the first detected velocity data indicating the velocity of the object for the first frame (Kazumi ¶0026 discloses how the first count for movement and speed is determined and associated with the first pixel), wherein the method (Kazumi ¶0001, ¶0018, ¶0028 disclose a method for object detection and distance measuring) comprises storing the first detected position data (Kazumi ¶0024 discloses moving speed and direction are stored in association with the frame identifier) … and outputting (Kazumi ¶0009, Fig 1 300 discloses an external output device) the first detected position data (Kazumi ¶0026 discloses how the first count for movement and speed is determined and associated with the first pixel) and the first detected velocity data (Kazumi ¶0026 discloses how the first count for movement and speed is determined and associated with the first pixel) to the external device (Kazumi ¶0009, Fig 1 300 discloses an external output device), and when the radar position data (Kazumi ¶0010, ¶0017, ¶0018, ¶0019 disclose the radar distance measuring device used to calculate relative position) of a second frame following the first frame (Kazumi ¶0026 discloses the subsequent frames after the first frame being n+1) … on a basis of the first detected position data (Kazumi ¶0026 discloses how the first count for movement and speed is determined and associated with the first pixel) and the camera velocity data (Kazumi ¶0087 discloses calculating edge movement information from the image) obtained for the second frame (Kazumi ¶0026 discloses the subsequent frames after the first frame being n+1), … for the second frame (Kazumi ¶0026 discloses the subsequent frames after the first frame being n+1) … for the second frame (Kazumi ¶0026 discloses the subsequent frames after the first frame being n+1) to the external device (Kazumi ¶0009, Fig 1 300 discloses an external output device).

Kazumi does not explicitly disclose: and radar velocity data, the radar velocity data indicating a velocity of the object; and the radar velocity data; detected velocity data; and the radar velocity data, are lost; second detected position data indicating the position of the object, and second detected velocity data indicating the velocity of the object; and outputting the second detected position data and the second detected velocity data. Sengupta is in the same field of automated object detection for vehicles.
Further, Sengupta teaches radar velocity data (Sengupta Pg 2 Col 1 ¶04-¶05 discloses determining velocity based on the radar signal), the radar velocity data indicating a velocity of the object (Sengupta Pg 2 Col 1 ¶04-¶05 discloses determining velocity based on the radar signal); and the radar velocity data (Sengupta Pg 2 Col 1 ¶04-¶05 discloses determining velocity based on the radar signal), detected velocity data (Sengupta Pg 2 Col 1 ¶04-¶05 discloses determining velocity based on the radar signal); and the radar velocity data (Sengupta Pg 2 Col 1 ¶04-¶05 discloses determining velocity based on the radar signal), are lost (Sengupta Pg 3 Col 1 ¶03 discloses continuing to make detections even after one sensor fails); second detected position data (Sengupta Pg 6 Col 1 ¶03 and Pg 2 Col 1 ¶02 disclose generating the target trajectory based on previous frames) indicating the position of the object (Sengupta Pg 6 Col 1 ¶03 and Pg 2 Col 1 ¶02 disclose generating the position of the target trajectory based on previous frames); and outputting the second detected position data (Sengupta Pg 6 Col 1 ¶03 and Pg 2 Col 1 ¶02 disclose generating the position of the target trajectory based on previous frames) and the second detected velocity data (Sengupta Pg 2 Col 1 ¶04 discloses solving for the velocity of the object). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Kazumi by including the use of radar velocity data in the fusion of the radar and camera data to generate second detected position data and second detected velocity data, as taught by Sengupta, to obtain an invention that can predict the movement and speed of the target object based on known data from the radar and camera; one of ordinary skill in the art would be motivated to combine the references since, per Sengupta, there has been interest in fusing data from these two sources to obtain an accurate position of a target and subsequently tracking its trajectory (Sengupta Pg 1, Introduction). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.

References Cited

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. JP Patent Publication JP-2008021069-A to Kenjiro et al. discloses a technique for detecting an object present around a car, such as another vehicle. US Patent Publication US-20210089040-A1 to Ebrahimi Afrouzi et al. discloses obstacle recognition methods for autonomous robots.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to RACHEL LYNN ROBERTS, whose telephone number is (571) 272-6413. The examiner can normally be reached Monday-Friday, 7:30am-5:00pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Oneal Mistry, can be reached at (313) 446-4912. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center.
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/RACHEL L ROBERTS/
Examiner, Art Unit 2674

/ONEAL R MISTRY/
Supervisory Patent Examiner, Art Unit 2674

Prosecution Timeline

Nov 24, 2023: Application Filed
Dec 23, 2025: Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12581132: LARGE-SCALE POINT CLOUD-ORIENTED TWO-DIMENSIONAL REGULARIZED PLANAR PROJECTION AND ENCODING AND DECODING METHOD (granted Mar 17, 2026; 2y 5m to grant)
Patent 12569208: PET APPARATUS, IMAGE PROCESSING METHOD, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM (granted Mar 10, 2026; 2y 5m to grant)
Patent 12564324: IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING SYSTEM FOR ABNORMALITY DETECTION (granted Mar 03, 2026; 2y 5m to grant)
Patent 12561773: METHOD AND APPARATUS FOR PROCESSING IMAGE, ELECTRONIC DEVICE, CHIP AND MEDIUM (granted Feb 24, 2026; 2y 5m to grant)
Patent 12525028: CONTACT OBJECT DETECTION APPARATUS AND NON-TRANSITORY RECORDING MEDIUM (granted Jan 13, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 90% (99% with interview, +14.3% lift)
Median Time to Grant: 2y 10m
PTA Risk: Low
Based on 19 resolved cases by this examiner. Grant probability derived from career allow rate.
