DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The Amendment filed 12/30/2025 has been entered. Claims 1-20 remain pending in the application.
The objections to claims 7, 9, 14, and 16-18 have been overcome by the amendment.
The rejection of claim 18 under 35 U.S.C. 112(b) has been overcome by the amendment.
The interpretation of claim 7 under 35 U.S.C. 112(f) has been withdrawn in view of the amendment.
Response to Arguments
Applicant’s arguments filed 12/30/2025 have been fully considered.
Regarding Applicant’s argument (REMARKS pages 8-9 of 10) about the rejections of claims 1-6, 14-20 under 35 U.S.C. 101, the rejections have been overcome by the amendment.
Regarding Applicant’s argument (REMARKS page 9 of 10) that “Applicant respectfully submits that the cited references, alone or in combination, fail to disclose every element of the claims as currently recited. For example, independent claims 1, 7, and 14 each recite elements related to reconstructing RADAR data based at least on velocity estimated by an ML model”, Examiner disagrees because Wang (‘410) does disclose the claimed language “reconstructing, as expected RADAR data, RADAR data corresponding to” “the object based at least on one or more reverse calculations performed with respect to the estimated velocity” {Fig.6 items 634 (associator), 630 (track predictor), 512 (track manager), 639; col.19 lines 18-24 (the associated data 635 may be generated by the associator 634 based on at least one of (1) up-to-date radar measurement data (e.g., the radar points 631 of up-to-date measurement data) or (2) a predicted state associated with the object based on a prior state of a track (e.g., the predicted state data 659(t2) of the predicted track of the object based on the track 655(t1))); col.22 lines 37-38 (the track predictor 630 of the radar tracker 512 as feedback); col.24 lines 19-20 (track data which may include at least one of position and amplitude of the radar, velocity of the object); col.25 lines 14-17 (track predictor 630 may generate predicted track data of the existing track (e.g., predicted state data 659) as indicated by the predicted bounding box 710 in FIG. 7.); Examiner’s note: “generate predicted track data” in col.25 lines 14-15 for the claimed language “reconstructing, as expected RADAR data, RADAR data”; Fig.6 items 512, 630, and 639 for “based at least on one or more reverse calculations performed with respect to the estimated velocity” (see input of item 639 in Fig.6)}. A person of ordinary skill in the art before the effective filing date of the claimed invention would know that “RADAR data” includes positions and velocities of detected objects, and the claimed language does not exclude positions and velocities of detected objects from “RADAR data”. As discussed during the Interview, the “RADAR data” in the invention appears not to include positions and velocities of detected objects; however, the claimed term “RADAR data” does not distinguish the invention on this basis.
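Examiner’s illustrative note (a minimal sketch in Python with hypothetical names, offered only to clarify the record; it is not code from the application or from Wang (‘410)): the “reverse calculation” discussed above can be modeled as projecting an estimated object velocity back into RADAR measurement space, i.e., onto each detection’s line-of-sight direction, to reconstruct the expected Doppler (radial-velocity) returns:

    import numpy as np

    def reconstruct_expected_doppler(points_xy, v_est):
        # points_xy: (N, 2) detection positions relative to the sensor origin.
        # v_est:     (2,)   velocity (vx, vy) estimated for the object as a whole.
        # The expected Doppler return at each detection is the projection of
        # v_est onto that detection's line-of-sight unit vector -- a "reverse
        # calculation" from estimated velocity back to RADAR measurement space.
        los = points_xy / np.linalg.norm(points_xy, axis=1, keepdims=True)
        return los @ v_est  # (N,) expected radial velocities

    # Example: two detections on one object, estimated velocity (10, 0) m/s.
    pts = np.array([[20.0, 0.0], [20.0, 5.0]])
    print(reconstruct_expected_doppler(pts, np.array([10.0, 0.0])))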
Claim Objections
Claim 3 is objected to because of a typographical error: the word “on” in line 2 appears to be extraneous. Appropriate correction is required.
Claim 9 is objected to because of the following informalities: 1) “measured RADAR data” in line 3: it appears to be the same as “the RADAR data captured” mentioned in claim 7, line 3. 2) “the reconstructing of RADAR data” in line 4: it appears that “the” is missing before “RADAR data”. 3) “velocity” in line 5: it appears that a “;” is missing after it. 4) “the expected RADAR” in line 6: it appears that it should be “the expected RADAR data”. Appropriate corrections are required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 7-13 and 16 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claim 7 recites the limitation “reconstructing RADAR data” in line 7. It is indefinite because it is not clear whether the “RADAR data” in “reconstructing RADAR data” in line 7 is the same as the “RADAR data” mentioned in line 3. Because the claim is indefinite and cannot be properly construed, for purposes of examination, this limitation is being interpreted as “reconstructing the RADAR data”. Appropriate clarification is required.
Claims 8-13 are also rejected by virtue of their dependency on claim 7 because each of dependent claims 8-13 is unclear, at least, in that it depends on unclear independent claim 7.
Claim 9 recites the following limitations: 1) “an estimated velocity” in line 3. It is indefinite because: i) in view of the recitation in claim 7, lines 5-6, “estimating, using a machine learning (ML) model, a velocity corresponding to the object based at least on the RADAR data”, it is not clear whether the “an estimated velocity” in line 3 is the same as the “a velocity” mentioned in claim 7, line 5, and “the velocity as estimated” mentioned in claim 7, line 9; and ii) it is not clear whether the “an estimated velocity” in line 3 corresponds to the “an object” mentioned in claim 7, line 3. Because the claim is indefinite and cannot be properly construed, for purposes of examination, this limitation is being interpreted as “the velocity as estimated” mentioned in claim 7, line 9. 2) “measured RADAR data” in line 3. It is indefinite because, in view of the recitations in claim 7, line 3, “obtaining RADAR data corresponding to an object, the RADAR data captured”, and claim 7, lines 5-6, “estimating, using a machine learning (ML) model, a velocity corresponding to the object based at least on the RADAR data”, it is not clear whether the “measured RADAR data” in line 3 is the same as “the RADAR data captured” mentioned in claim 7, line 3. Because the claim is indefinite and cannot be properly construed, for purposes of examination, this limitation is being interpreted as “the RADAR data”. 3) “the estimated velocity” in line 5. It is indefinite because it is not clear which of “an estimated velocity” in line 3, the “a velocity” recited in “estimating, using a machine learning (ML) model, a velocity corresponding to the object” in claim 7, lines 5-6, and “the velocity as estimated” in claim 7, line 9, the term “the estimated velocity” in line 5 represents. Because the claim is indefinite and cannot be properly construed, for purposes of examination, this limitation is being interpreted as “the velocity as estimated” in claim 7, line 9. Appropriate clarifications are required.
Claims 10-11 are also rejected by virtue of their dependency on claim 9 because each of dependent claims 10-11 is unclear, at least, in that it depends on unclear claim 9.
Claim 16 recites the limitation “the processor” in line 4. It is indefinite because, when more than one processor is operated, it is not clear which of “the one or more processors” the term “the processor” represents. Because the claim is indefinite and cannot be properly construed, for purposes of examination, this limitation is being interpreted as “the one or more processors”. Appropriate clarification is required.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 7, 9, 11, 13 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Wang et al. (US 10,976,410, hereafter Wang).
Regarding claim 7, Wang (‘410) discloses a system {Fig.1} comprising:
one or more processors to perform operations {Fig.1} comprising:
obtaining RADAR data corresponding to an object {Fig.5 items 134 (radar), 530 (objects)}, the RADAR data captured within a time range {Fig.12 item 1204 (Receiving, from one or more radar sensors, radar measurement data associated with the object in a second state at a second time which is later than the first time); col.25 line 10 (radar frame)} and corresponding to a plurality of RADAR sensors {col.17 lines 1-2 (the radars receive measurements from the same target)};
estimating, using a machine learning (ML) model, a velocity corresponding to the object based at least on the RADAR data {Fig.6 items 517 (radar tracker), 602, 632 (machine learning model); Fig.12 items 1204 (Receiving, from one or more radar sensors, radar measurement data associated with the object in a second state at a second time which is later than the first time), 1208 (Applying the first information and the radar measurement data as training input to the machine learning model to generate a first predicted output of the machine learning model); col.14 lines 28-29 (the detector 550 can determine a size, a shape, a velocity, or a moving direction of an object), 53-55 (the radar tracker 517 determines tracks of different objects (e.g., present position and velocity of different objects)); col.20 lines 54-56 (the learned machine learning model 602 may include a plurality of learned machine learning (ML) models 632.); Examiner’s note: “predicted output” for “estimated”} corresponding to the plurality of RADAR sensors {col.17 lines 1-2 (the radars receive measurements from the same target)}, wherein the ML model was trained based at least on reconstructing RADAR data from one or more velocities estimated by the ML model {Fig.6 items 634 (associator), 630 (track predictor), 602 (machine learning model), 512 (track manager), 639; col.19 lines 18-24 (the associated data 635 may be generated by the associator 634 based on at least one of (1) up-to-date radar measurement data (e.g., the radar points 631 of up-to-date measurement data) or (2) a predicted state associated with the object based on a prior state of a track (e.g., the predicted state data 659(t2) of the predicted track of the object based on the track 655(t1))); col.22 lines 37-38 (the track predictor 630 of the radar tracker 512 as feedback); col.24 lines 19-20 (track data which may include at least one of position and amplitude of the radar, velocity of the object); col.25 lines 14-17 (track predictor 630 may generate predicted track data of the existing track (e.g., predicted state data 659) as indicated by the predicted bounding box 710 in FIG. 7.), 30-31 (the machine learning model 602 (see FIG. 6) may be trained by the training engine 214), 33-36 (input training data may be radar measurement data from radar sensors (e.g., the feature data 637 in FIG. 6) and prior knowledge about a target object (e.g., the predicted state data 659 in FIG. 6).), 58-62 (the training engine 214 may use a loss function to evaluate how well the machine learning model 602 fits the given training samples so that the weights of the model can be updated to reduce the loss on the next evaluation.); Examiner’s note: Fig.6 item 659 and “generate predicted track data” in col.25 lines 14-15 for the claimed language “reconstructing RADAR data”; col.24 lines 19-20 and Fig.6 items 512, 630, 639, and 659 for “based at least on reconstructing RADAR data from one or more velocities estimated by the ML model” (see input of item 639 in Fig.6)};
associating the velocity as estimated with the object detected using one or more object tracking techniques {Fig.6 items 634 (associator), 630 (track predictor); Figs.11A-B for “using one or more object tracking techniques”; col.24 lines 19-20 (track data which may include at least one of position and amplitude of the radar, velocity of the object); col.25 lines 14-17 (track predictor 630 may generate predicted track data of the existing track (e.g., predicted state data 659) as indicated by the predicted bounding box 710 in FIG. 7.)}.
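Examiner’s illustrative note (a schematic sketch in Python under an assumed nearest-neighbor gating scheme; hypothetical names, not the actual associator 634 of Wang (‘410)): “associating the velocity as estimated with the object detected” may be illustrated as assigning the estimated velocity to the nearest existing track within a distance gate:

    import numpy as np

    def associate_velocity(track_positions, detection_xy, gate=2.0):
        # track_positions: (M, 2) current positions of existing tracks.
        # detection_xy:    (2,)   position of the detection carrying the
        #                         estimated velocity.
        # Returns the index of the nearest track within the gate, i.e., the
        # object the estimated velocity is associated with, or None.
        d = np.linalg.norm(track_positions - detection_xy, axis=1)
        i = int(np.argmin(d))
        return i if d[i] <= gate else None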
Regarding claim 9, which depends on claim 7, Wang (‘410) discloses that in the system, the ML model was trained {col.25 lines 30-31 (the machine learning model 602 (see FIG. 6) may be trained by the training engine 214)} at least by:
determining an estimated velocity based at least on measured RADAR data {Fig.6 items 134 (radar), 631 (radar points), 639 (track data); col.15 lines 15-17 (The radar tracker 517 receives sensor data from the radar sensor 134. The sensor data may include radar points 631); col.16 lines 1-2 (the machine learning model 602, to generate summary points (e.g., track data 639)); col.24 lines 19-20 (track data which may include at least one of position and amplitude of the radar, velocity of the object)};
determining expected RADAR data {Fig.6 item 659; col.15 line 21 (predicted state data 659)} based at least on the reconstructing of RADAR data using the estimated velocity {Fig.6 items 659 (predicted state data), 630 (track predictor), 639 (track data); col.16 lines 1-2 (the machine learning model 602, to generate summary points (e.g., track data 639)); col.24 lines 19-20 (track data which may include at least one of position and amplitude of the radar, velocity of the object); col.25 lines 14-17 (track predictor 630 may generate predicted track data of the existing track (e.g., predicted state data 659))};
determining an error based at least on a difference between the expected RADAR and the measured RADAR data {Fig.12 item 1210 (Updating one or more weights in the machine learning model by determining relevance between the first predicted output and the label of the radar measurement data); col.25 lines 58-62 (the training engine 214 may use a loss function to evaluate how well the machine learning model 602 fits the given training samples so that the weights of the model can be updated to reduce the loss on the next evaluation.); col.34 lines 42-43 (using a loss function to evaluate the relevance); Examiner’s note: “using a loss function to evaluate the relevance” for “error” and “difference”}; and
updating one or more parameters of the ML model based at least on the error {Fig.12 item 1210 (Updating one or more weights in the machine learning model by determining relevance between the first predicted output and the label of the radar measurement data); col.25 lines 58-62 (the training engine 214 may use a loss function to evaluate how well the machine learning model 602 fits the given training samples so that the weights of the model can be updated to reduce the loss on the next evaluation.); col.34 lines 42-43 (using a loss function to evaluate the relevance)}.
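Examiner’s illustrative note (a minimal PyTorch-style sketch of the training scheme mapped above, with hypothetical names; it is not the training engine 214 of Wang (‘410), and the model is assumed to map a point set to a single object velocity): determine an estimated velocity from measured RADAR data, reconstruct expected RADAR data from it, take the difference as an error, and update the model parameters:

    import torch

    def train_step(model, optimizer, radar_points):
        # radar_points: (N, 3) tensor of (x, y, measured_doppler) detections.
        xy, measured = radar_points[:, :2], radar_points[:, 2]
        v_est = model(radar_points)              # (2,) estimated object velocity
        los = xy / xy.norm(dim=1, keepdim=True)  # line-of-sight unit vectors
        expected = los @ v_est                   # reconstructed expected Doppler
        loss = torch.nn.functional.mse_loss(expected, measured)  # error term
        optimizer.zero_grad()
        loss.backward()   # backpropagate the difference
        optimizer.step()  # update one or more parameters of the ML model
        return loss.item()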
Regarding claim 11, Applicant recites claim limitations of the same or substantially the same scope as that of claim 3. Accordingly, claim 11 is rejected in the same or substantially the same manner as claim 3, shown below.
Regarding claim 13, which depends on claim 7, Wang (‘410) discloses that the system is comprised in at least one of:
a control system for an autonomous or semi-autonomous machine {Fig.1 item 110 (control system); Fig.13 item 1314 (Providing the second track associated with the second time to an autonomous vehicle control system for autonomous control of a vehicle 1314)};
a perception system for an autonomous or semi-autonomous machine {Fig.1 item 110 (control system); Fig.5 item 550 (detector); Fig.13 item 1314 (Providing the second track associated with the second time to an autonomous vehicle control system for autonomous control of a vehicle 1314)};
a system for performing collaborative content creation for 3D assets {col.10 lines 12-13 (3D positioning sensor data)};
a system for performing deep learning operations {col.11 line 60 (be deep learning networks)};
a system for hosting one or more real-time streaming applications {col.21 lines 25-26 (track data can be stored and read in real time.)};
a system for generating synthetic data {Fig.5};
a system incorporating one or more virtual machines (VMs) {col.8 lines 20-21 (cloud-based, or client server computing environment)};
a system implemented at least partially in a data center {col.8 lines 20-21 (cloud-based, or client server computing environment)}; or
a system implemented at least partially using cloud computing resources {col.8 lines 20-21 (cloud-based, or client server computing environment)}.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-6, 12, 14-20 are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. (US 10,976,410, hereafter Wang) in view of Long et al. (Long, Yunfei, Daniel Morris, Xiaoming Liu, Marcos Castro, Punarjay Chakravarty, and Praveen Narayanan. "Full-velocity radar returns by radar-camera fusion." In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 16198-16207. 2021, hereafter Long).
Regarding claim 1, Wang (‘410) discloses a method {col.1 line 34 (a method for training a machine learning (ML))} comprising:
determining, using a machine learning (ML) model, an estimated velocity corresponding to an object as a whole based at least on measured RADAR data {Fig.6 items 517 (radar tracker), 602, 632 (machine learning model); Fig.12 items 1204 (Receiving, from one or more radar sensors, radar measurement data associated with the object in a second state at a second time which is later than the first time), 1208 (Applying the first information and the radar measurement data as training input to the machine learning model to generate a first predicted output of the machine learning model); col.14 lines 28-29 (the detector 550 can determine a size, a shape, a velocity, or a moving direction of an object), 53-55 (the radar tracker 517 determines tracks of different objects (e.g., present position and velocity of different objects)); col.20 lines 54-56 (the learned machine learning model 602 may include a plurality of learned machine learning (ML) models 632.); Examiner’s note: “predicted output” for “estimated”} corresponding to a plurality of RADAR detections associated with {col.17 lines 1-2 (the radars receive measurements from the same target)};
reconstructing, as expected RADAR data, RADAR data corresponding to {Fig.6 items 634 (associator), 630 (track predictor), 512 (track manager), 639; col.19 lines 18-24 (the associated data 635 may be generated by the associator 634 based on at least one of (1) up-to-date radar measurement data (e.g., the radar points 631 of up-to-date measurement data) or (2) a predicted state associated with the object based on a prior state of a track (e.g., the predicted state data 659(t2) of the predicted track of the object based on the track 655(t1))); col.22 lines 37-38 (the track predictor 630 of the radar tracker 512 as feedback); col.24 lines 19-20 (track data which may include at least one of position and amplitude of the radar, velocity of the object); col.25 lines 14-17 (track predictor 630 may generate predicted track data of the existing track (e.g., predicted state data 659) as indicated by the predicted bounding box 710 in FIG. 7.); Examiner’s note: “generate predicted track data” in col.25 lines 14-15 for the claimed language “reconstructing, as expected RADAR data, RADAR data”; col.24 lines 19-20 and Fig.6 items 512, 630, and 639 for the claimed language “based at least on one or more reverse calculations performed with respect to the estimated velocity” (see input of item 639 in Fig.6)}; and
updating one or more parameters of the ML model based at least on a difference between the measured RADAR data and the expected RADAR data {Fig.12 item 1210 (Updating one or more weights in the machine learning model by determining relevance between the first predicted output and the label of the radar measurement data); col.25 lines 58-62 (the training engine 214 may use a loss function to evaluate how well the machine learning model 602 fits the given training samples so that the weights of the model can be updated to reduce the loss on the next evaluation.); col.34 lines 42-43 (using a loss function to evaluate the relevance)},
wherein the ML model is used by a machine to determine a detected velocity of a detected object detected using one or more RADAR sensors of the machine {Fig.1; col.1 lines 33-34 (a system), 38-40 (using a trained or learned ML model that receives as input radar measurement data from radar sensors and a predicted state associated with the object); col.2 lines 47-48 (Fig.1, a system environment for autonomous vehicles); col.14 lines 28-29 (the detector 550 can determine a size, a shape, a velocity, or a moving direction of an object), 53-55 (the radar tracker 517 determines tracks of different objects (e.g., present position and velocity of different objects))}.
However, Wang (‘410) does not explicitly disclose the following limitations: “measured RADAR data corresponding to a plurality of RADAR detections associated with different portions of the object” and “RADAR data corresponding to the different portions of the object”. In the same field of endeavor, Long (‘NPL) discloses that
measured RADAR data corresponding to a plurality of RADAR detections associated with different portions of the object;
RADAR data corresponding to the different portions of the object;
{page 7 Figure 4 (c)}
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Wang (‘410) with the teachings of Long (‘NPL) to process each individual radar point (even radar points associated with the same object). Doing so would provide a closed-form solution for the point-wise, full-velocity estimate of Doppler returns, so as to significantly improve over the state of the art in velocity estimation and accumulation of radar points, as recognized by Long (‘NPL) {page 1 abstract lines 7-9 (a closed-form solution for the point-wise, full-velocity estimate of Doppler returns), 14-16 (significant improvements over the state-of-the-art in velocity estimation and accumulation of radar points)}.
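Examiner’s illustrative note (a hedged Python sketch with hypothetical names; Long (‘NPL) itself obtains the missing constraint from camera optical flow rather than from multiple returns): a single Doppler return constrains only the radial component of velocity (one equation, two unknowns), which is why point-wise full-velocity estimation needs additional information; with several returns from the same rigid object, a full 2D velocity can be recovered by least squares:

    import numpy as np

    def full_velocity_least_squares(points_xy, doppler):
        # Each return i satisfies los_i . v = doppler_i, so one return cannot
        # determine the full velocity v. Several returns from the same rigid
        # object give a stacked linear system solvable by least squares.
        los = points_xy / np.linalg.norm(points_xy, axis=1, keepdims=True)
        v, *_ = np.linalg.lstsq(los, doppler, rcond=None)
        return v  # (2,) full-velocity estimate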
Regarding claim 2, which depends on claim 1, Wang (‘410) does not explicitly disclose “the estimated velocity corresponding to the object includes one or more estimated velocities individually corresponding to one or more portions of the object”. In the same field of endeavor, Long (‘NPL) discloses that in the method,
the estimated velocity corresponding to the object includes one or more estimated velocities individually corresponding to one or more portions of the object {Fig.4 (c) selected radar projections, predicted mapping from raw radar projections; page 5 left column lines 1-2 from bottom (train a neural network model, to estimate associated radar); page 5 right column lines 1-2 (pixels in the neighborhood of raw projection and identify occluded radar points)}.
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Wang (‘410) with the teachings of Long (‘NPL) to process each individual radar point (even radar points associated with the same object). Doing so would provide a closed-form solution for the point-wise, full-velocity estimate of Doppler returns, so as to significantly improve over the state of the art in velocity estimation and accumulation of radar points, as recognized by Long (‘NPL) {page 1 abstract lines 7-9 (a closed-form solution for the point-wise, full-velocity estimate of Doppler returns), 14-16 (significant improvements over the state-of-the-art in velocity estimation and accumulation of radar points)}.
Regarding claim 3, which depends on claims 1-2, the combination of Wang (‘410) and Long (‘NPL) discloses that in the method,
the on one or more respective reverse calculations are performed with respect to the one or more estimated velocities {see Wang (‘410) Fig.6 item 630 (track predictor), which receives feedback from item 632 (ML) via item 639; col.12 lines 3-5 (one or more weights may be updated by backpropagating the difference over the entire neural network model 212); col.24 lines 19-20 (track data which may include at least one of position and amplitude of the radar, velocity of the object); col.25 lines 14-17 (track predictor 630 may generate predicted track data of the existing track (e.g., predicted state data 659) as indicated by the predicted bounding box 710 in FIG. 7.)}.
Regarding claim 4, which depends on claim 1, the combination of Wang (‘410) and Long (‘NPL) discloses that in the method,
the estimated velocity is estimated prior to an ability to estimate a velocity corresponding to the object by identifying a plurality of bounding shapes corresponding to the object {see Wang (‘410) Fig.6 item 630 (track predictor) operates after getting input from item 632 (ML); Fig.7 (predicted bounding box); Fig.11B items 1131, 1151; col.28 lines 2-3 (track data representing a single point (e.g., a center point) of a track of the target object), 50-51 (first information can include position and velocity of the target object at time t1)}, individual bounding shapes of the plurality of bounding shapes associated with respective estimations of respective positions corresponding to the object {Fig.7 (predicted bounding box, radar point, ground truth bounding box)}.
Regarding claim 5, which depends on claim 1, the combination of Wang (‘410) and Long (‘NPL) discloses that in the method,
the measured RADAR data corresponds to a plurality of RADAR sensors {see Wang (‘410) Fig.12 item 1204 (Receiving, from one or more radar sensors, radar measurement data associated with the object)}.
Regarding claim 6, which depends on claim 1, the combination of Wang (‘410) and Long (‘NPL) discloses that in the method,
the ML model is updated to iteratively generate one or more additional estimated velocities until the difference between the measured RADAR data and the expected RADAR data reaches a particular threshold {see Wang (‘410) Fig.12 items 1208 (Applying the first information and the radar measurement data as training input to the machine learning model to generate a first predicted output of the machine learning model), 1210 (Updating one or more weights in the machine learning model by determining relevance between the first predicted output and the label of the radar measurement data); Fig.14 feedback loop, items 1408 (machine learning model and generating an output), 1410 (Training the machine learning model based on a relevance between the output and a ground truth label); col.34 lines 50-52 (determining that the performance of the machine learning model can be further improved, steps 1402 through 1410 may be repeated); Examiner’s note: Fig.12 item 1210 for “the ML model is updated”, which corresponds to Fig.14 item 1410; the feedback in Fig.14 for “iteratively”; Fig.14 item 1408 generates the “estimated velocities”; “determining that the performance of the machine learning model can be further improved or converged” for “the difference between the measured RADAR data and the expected RADAR data reaches a particular threshold”}.
Wang (‘410) discloses the claimed invention except for “a particular threshold”. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use a particular threshold in Wang (‘410) to determine whether the performance of the machine learning model can be further improved or has converged, since one skilled in the art would have been motivated to properly select such a threshold so as to decide whether to repeat the processing or to end it, as recognized by Wang (‘410) {Fig.14}.
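Examiner’s illustrative note (a minimal sketch with hypothetical names, reusing the train_step sketch above): selecting “a particular threshold” amounts to repeating training steps until the error between expected and measured RADAR data falls below the threshold, mirroring the repeat-until-improved loop of Fig.14 of Wang (‘410):

    def train_until_threshold(model, optimizer, data, threshold=1e-3, max_iters=10000):
        # Repeat training until the reconstruction error reaches the particular
        # threshold (or an iteration cap is hit), i.e., until the model can no
        # longer be usefully improved.
        loss = float("inf")
        for _ in range(max_iters):
            loss = train_step(model, optimizer, data)  # as sketched above
            if loss <= threshold:
                break
        return loss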
Regarding claim 12, Applicant recites claim limitations of the same or substantially the same scope as that of claim 2. Accordingly, claim 12 is rejected in the same or substantially the same manner as claim 2, shown above.
Regarding claim 14, as modified above, Wang (‘410) discloses one or more processors {Fig.1 items 122 (processor), 172 (computing system)} comprising processing circuitry to perform one or more operations using a velocity estimate generated using a machine learning (ML) model {Fig.1; Fig.2 items 210 (neural network engine), 212 (neural network model), 214 (training engine); Fig.6 items 517 (radar tracker), 602, 632 (machine learning model); col.20 lines 54-56 (the learned machine learning model 602 may include a plurality of learned machine learning (ML) models 632.); col.24 lines 19-20 (track data which may include at least one of position and amplitude of the radar, velocity of the object)}, wherein the ML model is trained by:
determining an estimated velocity corresponding to an object as a whole based at least on measured RADAR data corresponding to a plurality of RADAR detections associated with different portions of the object;
reconstructing RADAR data based at least on the estimated velocity to determine expected RADAR data corresponding to the different portions of the object; and
updating one or more parameters of the ML model based at least on a difference between the measured RADAR data and the expected RADAR data, wherein the ML model is used by a machine to determine a detected velocity of a detected object detected using one or more RADAR sensors of the machine.
{The claim limitations above are the same or substantially the same scope as the corresponding claim limitations in claim 1. Therefore the claim limitations above are rejected in the same or substantially the same manner as in claim 1. See the rejections of claim 1}.
Regarding claim 15, Applicant recites claim limitations of the same or substantially the same scope as that of claim 2. Accordingly, claim 15 is rejected in the same or substantially the same manner as claim 2, shown above.
Regarding claim 16, which depends on claim 14, the combination of Wang (‘410) and Long (‘NPL) discloses that in the processor, the one or more operations comprising:
generating a control command based at least on the estimated velocity corresponding to the object, the control command directing one or more systems associated with the processor to perform one or more operations {see Wang (‘410) Fig.1 item 110 (control system); Fig.2 item 208 (vehicle data); Fig.13 item 1314 (Providing the second track associated with the second time to an autonomous vehicle control system for autonomous control of a vehicle 1314); col.2 lines 19-21 (instructions, execution of the instructions by one or more processors, cause one or more processors to perform operations), 36-38 (operations further include providing the second track associated with the second time to an autonomous vehicle control system for autonomous control of a vehicle); col.3 lines 25-28 (techniques for controlling an autonomous vehicle using a trained or learned machine learning (ML) model and training the ML model using training data.); col.5 lines 65-67 (each processor 122 configured to execute program code instructions); col.6 lines 3-5 (Sensors 130 may include various sensors suitable for collecting information from a vehicle's surrounding environment for use in controlling the operation of the vehicle.)}.
Regarding claim 17, Applicant recites claim limitations of the same or substantially the same scope as that of claim 4. Accordingly, claim 17 is rejected in the same or substantially the same manner as claim 4, shown above.
Regarding claim 18, Applicant recites claim limitations of the same or substantially the same scope as that of the combination of claims 2 and 3. Accordingly, claim 18 is rejected in the same or substantially the same manner as claims 2 and 3, shown above.
Regarding claim 19, Applicant recites claim limitations of the same or substantially the same scope as that of claim 6. Accordingly, claim 19 is rejected in the same or substantially the same manner as claim 6, shown above.
Regarding claim 20, Applicant recites claim limitations of the same or substantially the same scope as that of claim 13. Accordingly, claim 20 is rejected in the same or substantially the same manner as claim 13, shown above.
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Wang (‘410) as applied to claim 7 above, and further in view of Matuszak et al. (US 12,158,991, hereafter Matuszak).
Regarding claim 8, which depends on claim 7, Wang (‘410) does not explicitly disclose “the time range is within 50 milliseconds”. In the same field of endeavor, Matuszak discloses that in the system,
the time range is within 50 milliseconds {col.11 lines 18-20 (a duration of the main frame 314 may be on the order of milliseconds or seconds (e.g., between approximately 10 ms and 10 seconds (s)))}.
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Wang (‘410) with the teachings of Matuszak (‘991) to use a type of radar with a data collection frame on the order of milliseconds or seconds (e.g., between approximately 10 ms and 10 seconds). Doing so would perform a certain radar function as designed so as to operate the radar effectively within its limitations (e.g., available power, signal-to-noise ratio performance, etc.), as recognized by Matuszak (‘991) {col.1 lines 28-30 (utilizing the radar may not be realized with the effective operation of the radar curtailed or disabled due to limitations of available power), 32 (a radar's design or operation), 37-39 (radar's design may result in degraded signal-to-noise ratio performance, which may make it challenging to achieve sufficient accuracies for some applications)}.
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Wang (‘410) as applied to claim 9 above.
Regarding claim 10, Applicant recites claim limitations of the same or substantially the same scope as that of claim 6. Accordingly, claim 10 is rejected in the same or substantially the same manner as claim 6, shown above.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US 2022/0196798 discloses “reconstructing RADAR data based at least on velocity estimated by an ML model” {Fig.3; Fig.31; [0116] lines 2-3 (feedback controller 316 may include an artificial intelligence engine, which may be trainable); [0125] lines 2-5 (combination of radar processor 309 and feedback controller 316, for example, to provide a reinforcement learning based feedback control engine); [0135] lines 1-3 (The radar processor may determine range, speed (radial velocity) and direction information of one or more objects from the digital reception data values.); [0361] lines 2-4 (learnable architecture is provided that reconstructs (e.g., 4D) radar images from (raw) digital (radar) reception data received from the radar ADC); [0365] lines 9-10 (reconstruct radar voxels and/or point-cloud images to arbitrary resolution and perform perception tasks such as object)}, which further supports the rejections of claims 1, 7, and 14.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to YONGHONG LI whose telephone number is (571)272-5946. The examiner can normally be reached 8:30am - 5:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vladimir Magloire can be reached at (571)270-5144. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/YONGHONG LI/ Examiner, Art Unit 3648