DETAILED ACTION
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
2. Claims 1-20 are pending and presented for examination.
Claim Rejections - 35 USC § 101
3. 35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
4. Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Representative claim 11 recites:
A system for verifying tracking information, the system comprising:
a memory configured to store verification information of each of tracking channels; and
a processor electrically connected or communicatively connected to the memory, wherein the processor is configured to:
update the verification information of each of the tracking channels stored in the memory based on output information corresponding to object tracking information of each of the tracking channels;
generate reliability of the verification information of each of the tracking channels; and
perform output verification of at least one tracking function of a first tracking channel among the tracking channels by using a verification protocol for the output verification of the at least one tracking function of the first tracking channel based on the reliability of the verification information.
The claim limitations reciting the abstract idea are the "update," "generate," and "perform output verification" limitations set forth above; the remaining limitations are "additional elements."
Under Step 1 of the eligibility analysis, we determine whether the claims are to a statutory category by considering whether the claimed subject matter falls within the four statutory categories of patentable subject matter identified by 35 U.S.C. 101: process, machine, manufacture, or composition of matter. The above claims fall within a statutory category (claim 11 is directed to a machine, and claim 1 to a process).
Under Step 2A, Prong One, we consider whether the claim recites a judicial exception (abstract idea). In the above claim, the identified portion constitutes an abstract idea because, under a broadest reasonable interpretation, it recites limitations that fall into the abstract idea exceptions. Specifically, under the 2019 Revised Patent Subject Matter Eligibility Guidance, it falls into the grouping of subject matter that, when recited as such in a claim limitation, covers mathematical concepts (mathematical relationships, mathematical formulas or equations, mathematical calculations) and/or mental processes, i.e., concepts performed in the human mind, including an observation, evaluation, judgment, and/or opinion.
Next, under Step 2A, Prong Two, we consider whether the claim that recites a judicial exception is integrated into a practical application. In this step, we evaluate whether the claim recites additional elements that integrate the exception into a practical application of that exception.
This judicial exception is not integrated into a practical application because the additional limitations in the claim are only: a memory configured to store verification information of each of tracking channels; and a processor electrically connected or communicatively connected to the memory. These limitations are recited at a high level of generality (i.e., storing and processing data using generic computer components) such that they amount to no more than mere instructions to apply the exception using generic computer components.
Finally, under Step 2B, we consider whether the additional elements are sufficient to amount to significantly more than the abstract idea.
Claim 11 does not include additional elements that are sufficient to amount to significantly more than the judicial exception because, as noted above, the additional limitations are recited at a high level of generality (i.e., as generic computer components storing data in a memory and processing data using a generic computer processor). Further, the additional elements are conventional in the art, as evidenced by the art of record (see Ramakrishnan et al., US 2021/0331695 (hereinafter, Ramakrishnan), [0079]-[0080]; and Peeters et al., US 2024/0219538 (hereinafter, Peeters), Fig. 1). Therefore, claim 11 is directed to an abstract idea without significantly more.
The claim is not patent eligible.
Independent claim 1 is rejected under the same rationale as claim 11, as explained above.
Dependent claims 2-10 and 12-20 merely add further details of the identified abstract idea. These claims are likewise not patent eligible.
Claim Rejections - 35 USC § 102
5. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
6. Claims 1-9 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Ramakrishnan et al. US 2021/0331695 (hereinafter, Ramakrishnan).
7. Regarding claim 1, Ramakrishnan discloses a method for verifying tracking information, the method comprising:
updating verification information of each of tracking channels based on output information corresponding to object tracking information of each of the tracking channels ([0020], [0022]: the object detection and tracking framework 100 performs these functions by ingesting, retrieving, requesting, receiving, acquiring or otherwise obtaining input data 110 from multiple detection systems, comprising a plurality of sensors that have been configured and initialized to observe one or more fields of view 103 around autonomously-operated vehicles 102, … As noted further herein, many types of sensors may be utilized in these detection systems, these sensors may be deployed on, or proximate to, the autonomously-operated vehicles 102. Additionally, it is to be understood that input data 110 may be collected from either these on-board/proximate detection systems, and/or from one or more external third-party sources…As with input data 110 obtained from cameras 112, information from ranging systems such as radar 114, and LiDar 116 may be in either raw or processed form….[Further] [0043], [0045]- [0046]: At the conclusion of assigning a measurement from any given sensor to an existing track 105 performed in the sensor fusion 153 and multi-object tracking 154 aspects of the deep learning model 140, the object tracking module 144 generates three possible outputs—matched detections, unmatched detections, and unmatched tracks. Matched detections are detections/measurements that match an existing track 105 during assignment. These are the detections that are used by the object tracking module 144 to update existing tracks 105 with the latest values….Matched detections as noted above are used to update tracks 105. Depending on the sensor the detection comes from, only certain values updated; the values that a sensor measurement can update is based on how accurate the value is, given the sensor providing the measurement. 
For example, radar 114 is uncertain about the label of the object 104, but camera 112 provides greater certainty. Similarly, the camera 112 does not enable a good estimate of the velocity of the object 104, but radar 114 does. Hence each sensor only updates a particular set of values in the track 105, as follows. If the matched detection is from camera 112 then the following important track parameters are updated: class label, bounding box, angle, distance and score. If the matched detection is from radar 114 then the following important track parameters are updated: velocity, angle, and distance. If the matched detection is from LiDar 116 then the following important track parameters are updated: bounding box, angle, and distance);
generating reliability of the verification information of each of the tracking channels ([0009], [0045], [0056], [0059]); and
performing output verification of at least one tracking function of a first tracking channel among the tracking channels by using a verification protocol for the output verification of the at least one tracking function of the first tracking channel based on the reliability of the verification information ([0009], [0045]-[0046], [0056], [0059]).
8. Regarding claim 2, Ramakrishnan discloses the method for verifying tracking information of claim 1, as disclosed above.
Ramakrishnan further discloses determining the first tracking channel as an abnormal tracking channel based on the output verification of the at least one tracking function and performing post-processing on the abnormal tracking channel ([0043], [0048]: At the conclusion of assigning a measurement from any given sensor to an existing track 105 performed in the sensor fusion 153 and multi-object tracking 154 aspects of the deep learning model 140, the object tracking module 144 generates three possible outputs—matched detections, unmatched detections, and unmatched tracks… Unmatched detections are detections/measurements that did not match with any of the tracks 105 during assignment. If these detections are from camera 112, they are used to spawn new tracks 105 into the object tracking module 144. Unmatched tracks 105 are tracks 105 that do not have any matched detections/measurements during assignment. These tracks 105 are still tracked for a limited period of time before they are deleted from the list of tracks 105, unless they get a matched measurement within that period of time).
9. Regarding claim 3, Ramakrishnan discloses the method for verifying tracking information of claim 2, as disclosed above.
Ramakrishnan further discloses wherein the post-processing includes at least one of adjusting reliability of the abnormal tracking channel, switching to a memory track to maintain a track of the abnormal tracking channel without outputting the track of the abnormal tracking channel, or deleting the abnormal tracking channel ([0043], [0048]: At the conclusion of assigning a measurement from any given sensor to an existing track 105 performed in the sensor fusion 153 and multi-object tracking 154 aspects of the deep learning model 140, the object tracking module 144 generates three possible outputs—matched detections, unmatched detections, and unmatched tracks… Unmatched detections are detections/measurements that did not match with any of the tracks 105 during assignment. If these detections are from camera 112, they are used to spawn new tracks 105 into the object tracking module 144. Unmatched tracks 105 are tracks 105 that do not have any matched detections/measurements during assignment. These tracks 105 are still tracked for a limited period of time before they are deleted from the list of tracks 105, unless they get a matched measurement within that period of time…Unmatched tracks 105 are either objects 104 that the sensor-based detection systems were previously detecting, but no longer exist in their field of view 103, or objects 104 that exist but the sensors failed to detect briefly. To account for both cases, the object tracking module 144 filters out their tracks 105 by removing them from a list of tracks 105 after a decay period (i.e., deleting the abnormal tracking channel)).
10. Regarding claim 4, Ramakrishnan discloses the method for verifying tracking information of claim 1, as disclosed above.
Ramakrishnan further discloses wherein the updating of the verification information of each of the tracking channels includes performing at least one of updating tracking verification information for each of the tracking channels or generating classification information for template matching verification based on whether a target object of each of the tracking channels is a stationary object or a moving object ([0045]- [0046]: depending on the sensor the detection comes from, only certain values updated; the values that a sensor measurement can update is based on how accurate the value is, given the sensor providing the measurement. For example, radar 114 is uncertain about the label of the object 104, but camera 112 provides greater certainty. Similarly, the camera 112 does not enable a good estimate of the velocity of the object 104, but radar 114 does. Hence each sensor only updates a particular set of values in the track 105, as follows. If the matched detection is from camera 112 then the following important track parameters are updated: class label, bounding box, angle, distance and score. If the matched detection is from radar 114 then the following important track parameters are updated: velocity, angle, and distance. If the matched detection is from LiDar 116 then the following important track parameters are updated: bounding box, angle, and distance (i.e., performing at least one of updating tracking verification information for each of the tracking channels)).
11. Regarding claim 5, Ramakrishnan discloses the method for verifying tracking information of claim 4, as disclosed above.
Ramakrishnan further discloses wherein the updating of the tracking verification information includes: updating tracking trace information including a position, a velocity, a heading, a width, a length, a classification information, and a tracking reliability or a status information of the target object of a corresponding tracking channel based on the output information of the corresponding tracking channel ([0045]-[0046], [0056]);
updating tracking verification information including a heading, an average width, an average length, an average velocity, or a maximum movement distance of the target object of the corresponding tracking channel based on the updated tracking trace information ([0025], [0045]-[0046]);
determining an accumulated reliability score of the classification information of the target object of the corresponding tracking channel based on the updated tracking verification information to update the classification information for verification ([0038], [0045]-[0046], [0056]); and
updating a tracking box for verification based on the updated tracking verification information ([0043], [0045]-[0046]).
12. Regarding claim 6, the method for verifying tracking information of claim 5, wherein the generating of the classification information for the template matching verification includes: generating a point grid map based on Light Detection and Ranging (LiDAR) points of the target object of the corresponding tracking channel; and determining and outputting classification information of the target object of the corresponding tracking channel based on a matching result between pre-stored template information and the point grid map (Examiner's Note: Claim 6 depends on claim 5, and claim 5 depends on claim 4. Claim 4 lists alternative ways of updating the verification information and requires performing at least one of them. For examination purposes, the Examiner has selected one alternative from the list, namely the limitation "updating tracking verification information for each of the tracking channels," to address claims 4 and 5. Claim 6, however, is drawn to the other alternative, namely the limitation "generating classification information for template matching verification," which was not selected for examination. Therefore, the limitations of claim 6 do not patentably distinguish over the prior art applied).
13. Regarding claim 7, Ramakrishnan discloses the method for verifying tracking information of claim 1, as disclosed above.
Ramakrishnan further discloses wherein the generating of the reliability of the verification information includes: generating a reliability of the tracking trace information for verification of each of the tracking channels based on whether the tracking trace information of each of the tracking channels exists in a driving road of a host vehicle ([0045]-[0046], [0056]);
generating a reliability of the classification information for verification of each of the tracking channels based on accumulated reliability of classification information of the target object of each of the tracking channels and a reliability of matching-based classification information of a point grid map of each of the tracking channels and pre-stored template information for each moving object ([0062], [0045]-[0046], [0056]); and
determining the reliability of the verification information based on at least one of a number of updates of the tracking trace information of each of the tracking channels, the reliability of the tracking trace information for verification, or the reliability of the classification information for verification ([0045]-[0046], [0056]).
14. Regarding claim 8, Ramakrishnan discloses the method for verifying tracking information of claim 1, as disclosed above.
Ramakrishnan further discloses wherein the verification protocol includes at least one of: a first verification of appropriateness of a memory track of the first tracking channel for maintaining a track of the first tracking channel without outputting the track of the first tracking channel; a second verification of appropriateness of an output including at least one of appropriateness of a determination that a target object of the first tracking channel is a moving object or a stationary object or appropriateness of output information of the first tracking channel; or a third verification of tracking issue performance of the first tracking channel ([0025], [0056], [0059]: The first vehicular state data 120 and the second vehicular state data 121 may also include operational characteristics associated with vehicular movement, such as speed 124, heading 125, yaw-rate 126, and curvature 127… Once a validation 160 of detected object 104 and an associated track 105 has been confirmed by the confidence assessment module 146 of the deep learning model 140, the framework 100 then performs an assessment of the operational mode 162 of the autonomously-operated vehicle 102. This assessment may include determining whether to maintain or adjust an operational mode, such as a first vehicular state 121 or a second vehicular state 122, in response to a detected/tracked object 104 (i.e., “a second verification of appropriateness of an output including at least one of appropriateness of a determination that a target object of the first tracking channel is a moving object or a stationary object or appropriateness of output information of the first tracking channel” within the claim. See also ([0045], [0056])).
15. Regarding claim 9, the method for verifying tracking information of claim 8, wherein the third verification of the tracking issue performance of the first tracking channel includes at least one of: a verification as to whether the track of the target object of the first tracking channel is a track of an object including exhaust gas; a verification as to whether the track of the target object of the first tracking channel is an occlusion track; a verification as to whether the track of the target object of the first tracking channel is a track representing a pedestrian; a verification as to whether the track of the target object of the first tracking channel is a track representing a one-ton truck, or a verification as to whether the track of the target object of the first tracking channel is a track of a sensor boundary region (Examiner's Note: Claim 9 depends on claim 8. Claim 8 lists alternative verification protocols and requires using at least one of them. For examination purposes, the Examiner has selected one alternative from the list, namely the limitation "a second verification of appropriateness of an output including at least one of appropriateness of a determination that a target object of the first tracking channel is a moving object or a stationary object or appropriateness of output information of the first tracking channel," to address claim 8. Claim 9, however, is drawn to another alternative, namely the limitation "a third verification of tracking issue performance of the first tracking channel," which was not selected for examination. Therefore, the limitations of claim 9 do not patentably distinguish over the prior art applied).
Claim Rejections - 35 USC § 103
16. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
17. Claims 10 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Ramakrishnan, in view of Peeters et al. US 2024/0219538 (hereinafter, Peeters).
18. Regarding claim 10, Ramakrishnan discloses the method for verifying tracking information of claim 1, as disclosed above.
Ramakrishnan does not disclose:
wherein the output verification of the at least one tracking function of the first tracking channel is performed in response that the reliability of the verification information is a highest level among predetermined levels, and wherein the method further includes: in response that the reliability of the verification information is lower than the highest level, checking object tracking information of a corresponding tracking channel among the tracking channels based on verification information of the corresponding tracking channel.
However, Peeters discloses:
wherein the output verification of the at least one tracking function of the first tracking channel is performed in response that the reliability of the verification information is a highest level among predetermined levels ([0116], [0119], [0122]), and wherein the method further includes: in response that the reliability of the verification information is lower than the highest level, checking object tracking information of a corresponding tracking channel among the tracking channels based on verification information of the corresponding tracking channel ([0116], [0119], [0122]).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Ramakrishnan such that the output verification of the at least one tracking function of the first tracking channel is performed in response that the reliability of the verification information is a highest level among predetermined levels, and such that the method further includes: in response that the reliability of the verification information is lower than the highest level, checking object tracking information of a corresponding tracking channel among the tracking channels based on verification information of the corresponding tracking channel, as taught by Peeters. The motivation for doing so would have been in order to verify the reliability of object tracking information efficiently (Peeters, [0116]).
19. Regarding claim 20, Ramakrishnan discloses the system for verifying tracking information of claim 11, as disclosed above.
Ramakrishnan does not disclose:
wherein the processor is further configured to: perform the output verification of the at least one tracking function of the first tracking channel when the reliability of the verification information is a level 3 among predetermined levels; and check object tracking information of a corresponding tracking channel based on the verification information of the corresponding tracking channel of the tracking channels when the reliability of the verification information is a level 2 lower than the level 3.
However, Peeters discloses:
wherein the processor is further configured to: perform the output verification of the at least one tracking function of the first tracking channel when the reliability of the verification information is a level 3 among predetermined levels ([0116], [0119], [0122]), and check object tracking information of a corresponding tracking channel based on the verification information of the corresponding tracking channel of the tracking channels when the reliability of the verification information is a level 2 lower than the level 3 ([0116], [0119], [0122]).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Ramakrishnan such that the processor is further configured to: perform the output verification of the at least one tracking function of the first tracking channel when the reliability of the verification information is a level 3 among predetermined levels, and check object tracking information of a corresponding tracking channel based on the verification information of the corresponding tracking channel of the tracking channels when the reliability of the verification information is a level 2 lower than the level 3, as taught by Peeters. The motivation for doing so would have been in order to verify the reliability of object tracking information efficiently (Peeters, [0116]).
20. Claims 11-19 are rejected under 35 U.S.C. 103 as being unpatentable over Ramakrishnan.
21. Regarding claim 11, Ramakrishnan discloses a system for verifying tracking information, the system comprising:
a memory ([0079]) and verification information of each of tracking channels; and
a processor electrically connected or communicatively connected to the memory ([0079], Fig. 1), wherein the processor is configured to:
update the verification information of each of the tracking channels based on output information corresponding to object tracking information of each of the tracking channels ([0020], [0022]: the object detection and tracking framework 100 performs these functions by ingesting, retrieving, requesting, receiving, acquiring or otherwise obtaining input data 110 from multiple detection systems, comprising a plurality of sensors that have been configured and initialized to observe one or more fields of view 103 around autonomously-operated vehicles 102, … As noted further herein, many types of sensors may be utilized in these detection systems, these sensors may be deployed on, or proximate to, the autonomously-operated vehicles 102. Additionally, it is to be understood that input data 110 may be collected from either these on-board/proximate detection systems, and/or from one or more external third-party sources…As with input data 110 obtained from cameras 112, information from ranging systems such as radar 114, and LiDar 116 may be in either raw or processed form….[Further] [0043], [0045]- [0046]: At the conclusion of assigning a measurement from any given sensor to an existing track 105 performed in the sensor fusion 153 and multi-object tracking 154 aspects of the deep learning model 140, the object tracking module 144 generates three possible outputs—matched detections, unmatched detections, and unmatched tracks. Matched detections are detections/measurements that match an existing track 105 during assignment. These are the detections that are used by the object tracking module 144 to update existing tracks 105 with the latest values….Matched detections as noted above are used to update tracks 105. Depending on the sensor the detection comes from, only certain values updated; the values that a sensor measurement can update is based on how accurate the value is, given the sensor providing the measurement. 
For example, radar 114 is uncertain about the label of the object 104, but camera 112 provides greater certainty. Similarly, the camera 112 does not enable a good estimate of the velocity of the object 104, but radar 114 does. Hence each sensor only updates a particular set of values in the track 105, as follows. If the matched detection is from camera 112 then the following important track parameters are updated: class label, bounding box, angle, distance and score. If the matched detection is from radar 114 then the following important track parameters are updated: velocity, angle, and distance. If the matched detection is from LiDar 116 then the following important track parameters are updated: bounding box, angle, and distance);
generate reliability of the verification information of each of the tracking channels ([0009], [0045], [0056], [0059]); and
perform output verification of at least one tracking function of a first tracking channel among the tracking channels by using a verification protocol for the output verification of the at least one tracking function of the first tracking channel based on the reliability of the verification information ([0009], [0045]-[0046], [0056], [0059]).
Ramakrishnan does not disclose:
a memory configured to store verification information.
However, Ramakrishnan discloses:
“the measurements from all of the sensors are stored in the same queue from which each measurement is processed in the order they come into the queue. The measurement may be from any of the three sensors—camera 112, radar 114, and LiDar 116—and any other sensors utilized in the framework 100. The assignment metric that correlates measurements to tracks 105 varies, depending on the sensor that the measurement came from… The systems and methods of the present disclosure may also be partially implemented in software that can be stored on a storage medium, executed on programmed general-purpose computer with the cooperation of a controller and memory, a special purpose computer, a microprocessor, or the like….Additionally, the data processing functions disclosed herein may be performed by one or more program instructions stored in or executed by such memory, and further may be performed by one or more modules configured to carry out those program instructions.” (See, [0039], [0079]-[0080]).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Ramakrishnan to use a memory configured to store verification information, as taught by Ramakrishnan. The motivation for doing so would have been in order to process and store the verification information of each of the tracking channels efficiently (Ramakrishnan, [0039]).
22. Regarding claim 12, Ramakrishnan discloses the system for verifying tracking information of claim 11, as disclosed above.
Ramakrishnan further discloses that the processor is configured to determine the first tracking channel as an abnormal tracking channel based on the output verification of the at least one tracking function and perform post-processing on the abnormal tracking channel ([0043], [0048]: At the conclusion of assigning a measurement from any given sensor to an existing track 105 performed in the sensor fusion 153 and multi-object tracking 154 aspects of the deep learning model 140, the object tracking module 144 generates three possible outputs—matched detections, unmatched detections, and unmatched tracks… Unmatched detections are detections/measurements that did not match with any of the tracks 105 during assignment. If these detections are from camera 112, they are used to spawn new tracks 105 into the object tracking module 144. Unmatched tracks 105 are tracks 105 that do not have any matched detections/measurements during assignment. These tracks 105 are still tracked for a limited period of time before they are deleted from the list of tracks 105, unless they get a matched measurement within that period of time).
21. Regarding claim 13, Ramakrishnan discloses the system for verifying tracking information of claim 12, as disclosed above.
Ramakrishnan further discloses wherein the processor is further configured to perform at least one of: adjusting reliability of the abnormal tracking channel, switching to a memory track to maintain a track of the abnormal tracking channel without outputting the track of the abnormal tracking channel, or deleting the abnormal tracking channel ([0043], [0048]: At the conclusion of assigning a measurement from any given sensor to an existing track 105 performed in the sensor fusion 153 and multi-object tracking 154 aspects of the deep learning model 140, the object tracking module 144 generates three possible outputs—matched detections, unmatched detections, and unmatched tracks… Unmatched detections are detections/measurements that did not match with any of the tracks 105 during assignment. If these detections are from camera 112, they are used to spawn new tracks 105 into the object tracking module 144. Unmatched tracks 105 are tracks 105 that do not have any matched detections/measurements during assignment. These tracks 105 are still tracked for a limited period of time before they are deleted from the list of tracks 105, unless they get a matched measurement within that period of time…Unmatched tracks 105 are either objects 104 that the sensor-based detection systems were previously detecting, but no longer exist in their field of view 103, or objects 104 that exist but the sensors failed to detect briefly. To account for both cases, the object tracking module 144 filters out their tracks 105 by removing them from a list of tracks 105 after a decay period (i.e., deleting the abnormal tracking channel)).
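For illustration only (not taken from Ramakrishnan's disclosure; all identifiers and the decay constant are assumed), the three assignment outcomes described in the cited passage, matched detections, unmatched camera detections spawning new tracks, and unmatched tracks being deleted after a decay period, can be sketched as:

```python
DECAY_PERIOD = 3  # frames an unmatched track survives (assumed value)

def update_tracks(tracks, detections, matches):
    """tracks: id -> state; detections: id -> {"sensor": ...};
    matches: list of (track_id, detection_id) pairs."""
    matched_track_ids = {t for t, _ in matches}
    matched_det_ids = {d for _, d in matches}

    # Matched tracks reset; unmatched tracks age and are removed
    # from the list after the decay period (the quoted filtering).
    for track_id in list(tracks):
        if track_id in matched_track_ids:
            tracks[track_id]["misses"] = 0
        else:
            tracks[track_id]["misses"] += 1
            if tracks[track_id]["misses"] > DECAY_PERIOD:
                del tracks[track_id]

    # Unmatched detections from the camera spawn new tracks,
    # per the quoted passage.
    for det_id in set(detections) - matched_det_ids:
        if detections[det_id]["sensor"] == "camera":
            tracks[det_id] = {"misses": 0}
    return tracks
```

As a usage sketch, a track that receives no matched measurement for more than DECAY_PERIOD consecutive calls is removed, while an unmatched camera detection immediately enters the track list as a new track.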
22. Regarding claim 14, Ramakrishnan discloses the system for verifying tracking information of claim 11, as disclosed above.
Ramakrishnan further discloses wherein the processor is further configured such that updating the verification information of each of the tracking channels includes performing at least one of updating tracking verification information for each of the tracking channels or generating classification information for template matching verification based on whether a target object of each of the tracking channels is a stationary object or a moving object ([0045]-[0046]: depending on the sensor the detection comes from, only certain values are updated; the values that a sensor measurement can update are based on how accurate the value is, given the sensor providing the measurement. For example, radar 114 is uncertain about the label of the object 104, but camera 112 provides greater certainty. Similarly, the camera 112 does not enable a good estimate of the velocity of the object 104, but radar 114 does. Hence each sensor only updates a particular set of values in the track 105, as follows. If the matched detection is from camera 112 then the following important track parameters are updated: class label, bounding box, angle, distance and score. If the matched detection is from radar 114 then the following important track parameters are updated: velocity, angle, and distance. If the matched detection is from LiDar 116 then the following important track parameters are updated: bounding box, angle, and distance (i.e., performing at least one of updating tracking verification information for each of the tracking channels)).
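For illustration only, the per-sensor update rule described in the cited paragraphs, under which each sensor overwrites only the track parameters it measures reliably, can be sketched as follows; the field lists follow the quoted passage, while the function and data structures are assumed:

```python
# Which track parameters each sensor is trusted to update,
# per the quoted passage ([0045]-[0046]).
UPDATABLE_FIELDS = {
    "camera": {"class_label", "bounding_box", "angle", "distance", "score"},
    "radar": {"velocity", "angle", "distance"},
    "lidar": {"bounding_box", "angle", "distance"},
}

def update_track(track, sensor, measurement):
    for field, value in measurement.items():
        if field in UPDATABLE_FIELDS[sensor]:
            track[field] = value  # this sensor is trusted for this field
        # fields outside the set are ignored rather than overwritten
    return track
```

For example, a radar measurement carrying both a velocity and a class label would update the track's velocity but leave the camera-derived class label untouched, mirroring the observation that radar 114 is uncertain about the label of the object 104.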
23. Regarding claim 15, Ramakrishnan discloses the system for verifying tracking information of claim 14, as disclosed above.
Ramakrishnan further discloses wherein the processor is further configured such that the update of the tracking verification information includes: updating tracking trace information including a position, a velocity, a heading, a width, a length, classification information, and a tracking reliability or status information of the target object of a corresponding tracking channel based on the output information of the corresponding tracking channel ([0045]-[0046], [0056]);
updating tracking verification information including a heading, an average width, an average length, an average velocity, or a maximum movement distance of the target object of the corresponding tracking channel based on the updated tracking trace information ([0025], [0045]-[0046]);
determining an accumulated reliability score of the classification information of the target object of the corresponding tracking channel based on the updated tracking verification information to update the classification information for verification ([0038], [0045]-[0046], [0056]); and
updating a tracking box for verification based on the updated tracking verification information ([0043], [0045]-[0046]).
24. Regarding claim 16, the system for verifying tracking information of claim 15, wherein the processor is further configured to generate a point grid map based on Light Detection and Ranging (LiDAR) points of the target object of the corresponding tracking channel, determine the classification information of the target object of the corresponding tracking channel based on a matching result between pre-stored template information for each moving object and the point grid map, and output classification information for template matching verification (Examiner’s Note: Claim 16 depends on dependent claim 15, and claim 15 depends on dependent claim 14. Claim 14 lists one or more updates of the verification information and requires selecting at least one of them. For examination purposes, the Examiner selects one from the alternative list, namely the limitation “update the verification information of each of the tracking channels…,” to address claims 14 and 15. However, claim 16 is drawn to one of the alternative updates of the verification information, namely the limitation “generating classification information for template matching verification…,” which was not selected for examination. Therefore, the limitation of claim 16 does not patentably distinguish over the prior art applied).
25. Regarding claim 17, Ramakrishnan discloses the system for verifying tracking information of claim 11, as disclosed above.
Ramakrishnan further discloses that generating the reliability of the verification information includes: generating a reliability of the tracking trace information for verification of each of the tracking channels based on whether the tracking trace information of each of the tracking channels exists in a driving road of a host vehicle ([0045]-[0046], [0056]);
generating a reliability of the classification information for verification of each of the tracking channels based on accumulated reliability of classification information of the target object of each of the tracking channels and a reliability of matching-based classification information of a point grid map of each of the tracking channels and pre-stored template information for each moving object ([0062], [0045]-[0046], [0056]); and
determining the reliability of the verification information based on at least one of a number of updates of the tracking trace information of each of the tracking channels, the reliability of the tracking trace information for verification, or the reliability of the classification information for verification ([0045]-[0046], [0056]).
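For illustration only, one way to combine the three factors listed in the claim paraphrase above (number of updates, trace reliability, and classification reliability) into a single reliability value can be sketched as follows; the weighting scheme and all names are assumed and do not reflect any combination actually disclosed by Ramakrishnan:

```python
def verification_reliability(num_updates, trace_reliability,
                             class_reliability, min_updates=10):
    # Scale confidence by how mature the channel is (how many updates
    # it has accumulated), then average the two reliability inputs.
    maturity = min(num_updates / min_updates, 1.0)
    return maturity * (0.5 * trace_reliability + 0.5 * class_reliability)
```

A channel with few updates is discounted regardless of how reliable its individual inputs appear, which is one plausible reading of making the reliability depend on the number of updates.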
26. Regarding claim 18, Ramakrishnan discloses the system for verifying tracking information of claim 11, as disclosed above.
Ramakrishnan further discloses wherein the verification protocol includes at least one of: a first verification of appropriateness of a memory track of the first tracking channel for maintaining a track of the first tracking channel without outputting the track of the first tracking channel; a second verification of appropriateness of an output including at least one of appropriateness of a determination that a target object of the first tracking channel is a moving object or a stationary object or appropriateness of output information of the first tracking channel; or a third verification of tracking issue performance of the first tracking channel ([0025], [0056], [0059]: The first vehicular state data 120 and the second vehicular state data 121 may also include operational characteristics associated with vehicular movement, such as speed 124, heading 125, yaw-rate 126, and curvature 127… Once a validation 160 of detected object 104 and an associated track 105 has been confirmed by the confidence assessment module 146 of the deep learning model 140, the framework 100 then performs an assessment of the operational mode 162 of the autonomously-operated vehicle 102. This assessment may include determining whether to maintain or adjust an operational mode, such as a first vehicular state 121 or a second vehicular state 122, in response to a detected/tracked object 104 (i.e., “a second verification of appropriateness of an output including at least one of appropriateness of a determination that a target object of the first tracking channel is a moving object or a stationary object or appropriateness of output information of the first tracking channel” within the claim. See also ([0045], [0056])).
27. Regarding claim 19, the system for verifying tracking information of claim 18, wherein the third verification of the tracking issue performance of the first tracking channel includes at least one of: a verification as to whether the track of the target object of the first tracking channel is a track of an object including exhaust gas; a verification as to whether the track of the target object of the first tracking channel is an occlusion track; a verification as to whether the track of the target object of the first tracking channel is a track representing a pedestrian; a verification as to whether the track of the target object of the first tracking channel is a track representing a one-ton truck; or a verification as to whether the track of the target object of the first tracking channel is a track of a sensor boundary region (Examiner’s Note: Claim 19 depends on dependent claim 18. Claim 18 lists one or more verification protocols and requires selecting at least one of them. For examination purposes, the Examiner selects one from the alternative list, namely the limitation “a second verification of appropriateness of an output including at least one of appropriateness of a determination that a target object of the first tracking channel is a moving object or a stationary object or appropriateness of output information of the first tracking channel,” to address claim 18. However, claim 19 is drawn to one of the alternative verification protocols, namely the limitation “a third verification of tracking issue performance of the first tracking channel…,” which was not selected for examination. Therefore, the limitation of claim 19 does not patentably distinguish over the prior art applied).
Conclusion
28. The Examiner has cited particular columns and line numbers, and/or paragraphs, and/or pages in the references applied to the claims above for the convenience of the applicant. Although the specified citations are representative of the teachings of the art and are applied to specific limitations within the individual claims, other passages and figures may apply as well. In preparing responses, Applicant is respectfully requested to consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of each passage as taught by the prior art or discussed by the Examiner. In the case of amending the claimed invention, Applicant is respectfully requested to indicate the portion(s) of the specification that dictate(s) the structure relied on for proper interpretation, and also to verify and ascertain the metes and bounds of the claimed invention.
29. Any inquiry concerning this communication or earlier communications from the examiner should be directed to EYOB HAGOS, whose telephone number is (571) 272-3508. The examiner can normally be reached from 8:30 AM to 5:30 PM.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor Shelby Turner can be reached on 571-272-6334. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Eyob Hagos/
Primary Examiner, Art Unit 2857