DETAILED ACTION
This Non-Final Office Action is in response to claims filed 11/1/2024.
Claims 1-20 are pending.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d).
Key to Interpreting this Office Action
For readability, all claim language has been underlined.
Citations from prior art are provided at the end of each limitation in parentheses.
Any further explanations that were deemed necessary by the Examiner are provided at the end of each claim limitation.
The Applicant is encouraged to contact the Examiner directly if there are any questions or concerns regarding the current Office Action.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 2, 4-6, 13, and 15-17 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 2 recites the limitation of deleting the target dynamic fusion track, based on the target dynamic fusion track being obtained via the LiDAR only, based on only a LiDAR track in the specific frame (emphasis added). However, claim 2 further limits the “deleting” step as being based on at least one of:
the error not having occurred in the radar and the camera in the specific frame,
the object type of the target dynamic fusion track being an automobile,
the object type of the target dynamic fusion track being an unknown type,
the target dynamic fusion track obtained in the previous frame being not obtained via the radar track in the previous frame, or
the target dynamic fusion track in the previous frame being not obtained via the camera track in the previous frame, such that the “deleting” step is not determined based on only a LiDAR track in the specific frame.
Accordingly, one of ordinary skill in the art cannot reasonably ascertain the scope of this limitation. Claim 13 is rejected under 35 U.S.C. 112(b) for similar reasons.
Claim 4 recites the limitation of the specific time in the seventh line of claim 4. There is insufficient antecedent basis for this limitation in the claim. It is unclear whether this limitation is intended to reference the “specified time” in claim 1, with respect to one of the alternative limitations of “obtained via the camera within a specified time before the specific frame.” Claim 15 is rejected under 35 U.S.C. 112(b) for similar reasons.
Claim 4 further recites the limitation of deleting the target dynamic fusion track, based on the target dynamic fusion track being obtained only via the radar track in the specific time, while claim 1 recites delete…a target dynamic fusion track…, based on the target dynamic fusion track being obtained via only the LIDAR (emphasis added). The limitation of “the target dynamic fusion track” cannot simultaneously be obtained “only via the radar track” and “only via the LIDAR.” Claim 15 is rejected under 35 U.S.C. 112(b) for similar reasons.
Claim 5 recites the limitation of deleting the target dynamic fusion track, based on the target dynamic fusion track being obtained only via the radar track in the specific frame, while claim 1 recites delete…a target dynamic fusion track…, based on the target dynamic fusion track being obtained via only the LIDAR (emphasis added). The limitation of “the target dynamic fusion track” cannot simultaneously be obtained “only via the radar track” and “only via the LIDAR.” Claim 16 is rejected under 35 U.S.C. 112(b) for similar reasons.
Claims 6 and 17 are rejected under 35 U.S.C. 112(b) as incorporating, by dependency, the indefinite limitations of claims 4 and 15, respectively.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-5, 7, 8, 12-16, 18, and 19 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Ma et al. (US 2023/0011829 A1), hereinafter Ma.
Claim 1
Ma discloses the claimed vehicle control apparatus (see ¶0071, with respect to Figure 5, regarding that the object-matching system 200 of Figure 2 is implemented as computing system 500, such that the matched object is sent to the autonomous-driving system 100 of Figure 1, as described in ¶0069, for control of operations of the autonomous vehicle, as described in ¶0037) comprising:
a light detection and ranging device (LiDAR) (i.e. LiDAR sensors 210, described in ¶0043, with respect to Figure 2);
a camera (i.e. camera sensors 212, described in ¶0043, with respect to Figure 2);
a radar (i.e. radar sensors 214, described in ¶0043, with respect to Figure 2);
one or more processors (i.e. processor 510, described in ¶0071, with respect to Figure 5); and
memory storing instructions (see ¶0074, regarding that program instructions are loaded into memory 520, such that processor 510 may execute the program instructions associated with method 400 of Figure 4) that, when executed by the one or more processors, cause the vehicle control apparatus to:
obtain, via at least one of the LiDAR, the camera, or the radar, a plurality of dynamic fusion tracks (i.e. representations of objects) in a specific frame, the plurality of dynamic fusion tracks corresponding to a plurality of external objects classified into objects capable of being in a movement state (see ¶0043-0044, with respect to Figure 2, regarding that detection and segmentation module 220 detects one or more objects included in the raw pointclouds captured by LiDAR sensors 210, image data captured by camera sensors 212, and radar data captured by radar sensors 214, where the objects are categorized as persistent track 242, new track 244, or lost track 246, as described in ¶0047, and may pertain to a pedestrian, bicycle, motorcycle, etc., as described in ¶0053);
delete, among the plurality of dynamic fusion tracks, a target dynamic fusion track (e.g., LiDAR object 335 in example 330 of Figure 3) corresponding to a target object among the plurality of external objects in the specific frame, based on the target dynamic fusion track being obtained via only the LiDAR among the LiDAR, the camera, and the radar (see ¶0047, regarding that in situations in which the object is only detected by one sensor channel of the multiple sensor channels, the object is categorized as a lost track 246 and excluded from further data processing by the object-matching system 200, where the sensor channels include data from LiDAR sensors 210, camera sensors 212, and radar sensors 214, as described in ¶0043; ¶0057-0058, with respect to the first example 330 in Figure 3, regarding that the ghost LiDAR object 335 is discarded/treated as a lost track, while image objects 336-338 and matching LiDAR objects 331, 333, and 334 are categorized as persistent track), and further based on at least one of:
whether an error has occurred in the radar and the camera in the specific frame (see ¶0047, regarding that in situations in which the object is only detected by one sensor channel of the multiple sensor channels, the object is categorized as a lost track 246 and excluded from further data processing by the object-matching system 200, where the sensor channels include data from LiDAR sensors 210, camera sensors 212, and radar sensors 214, as described in ¶0043), or
an object type of the target dynamic fusion track (see ¶0047, regarding that objects categorized as lost track 246 are excluded, as opposed to objects categorized as persistent track 242 and new track 244); and
control a vehicle, based on at least one remaining dynamic fusion track of the plurality of dynamic fusion tracks in the specific frame after the deletion of the target dynamic fusion track (see ¶0067-0069, with respect to Figure 4, regarding that a matched object is determined in response to labels assigned to objects detected in different sensor data indicating a match, such that the matched object is sent to the autonomous driving system 100 for controlling operations of the autonomous vehicle, as described in ¶0037; ¶0047, regarding that the persistent track label describes matched objects, and the lost track label indicates that the object is excluded from further data processing).
While Ma has been applied to both of the limitations of “whether an error has occurred in the radar and the camera in the specific frame” and “an object type of the target dynamic fusion track,” only one of these limitations is required to be taught by prior art.
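For illustration only, the following simplified sketch approximates the single-channel deletion logic discussed above, i.e. excluding a dynamic fusion track supported by only the LiDAR channel subject to at least one of the recited conditions. The names and data structures are hypothetical and are not drawn from Ma or from the claims as filed.

```python
# Illustrative sketch only: hypothetical names; a simplified reading of the
# single-sensor ("lost track") exclusion of Ma (para. 0047) as mapped to claim 1.
from dataclasses import dataclass

@dataclass
class FusionTrack:
    object_id: int
    sources: set       # subset of {"lidar", "camera", "radar"} that produced the track
    object_type: str   # e.g. "automobile", "pedestrian", "unknown"

def prune_tracks(tracks, radar_camera_error: bool):
    """Delete tracks obtained via only the LiDAR, per at least one of the recited conditions."""
    kept = []
    for track in tracks:
        lidar_only = track.sources == {"lidar"}
        # "based on at least one of": whether an error has occurred in the radar and
        # the camera in the specific frame, or an object type of the track
        condition = (not radar_camera_error) or (track.object_type == "unknown")
        if lidar_only and condition:
            continue  # analogous to categorizing the track as "lost" and excluding it
        kept.append(track)
    return kept  # the remaining tracks would be used for vehicle control
```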
Claims 2 and 13
Ma further discloses that the instructions, when executed by the one or more processors, cause the vehicle control apparatus to delete the target dynamic fusion track by:
deleting the target dynamic fusion track, based on the target dynamic fusion track being obtained via the LiDAR only, based on only a LiDAR track in the specific frame (see ¶0047, regarding that in situations in which the object is only detected by one sensor channel of the multiple sensor channels, the object is categorized as a lost track 246 and excluded from further data processing by the object-matching system 200, where the sensor channels include data from LiDAR sensors 210, camera sensors 212, and radar sensors 214, as described in ¶0043; ¶0057-0058, with respect to the first example 330 in Figure 3, regarding that the ghost LiDAR object 335 is discarded/treated as a lost track, while image objects 336-338 and matching LiDAR objects 331, 333, and 334 are categorized as persistent track), and based on at least one of:
the error not having occurred in the radar and the camera in the specific frame (see ¶0047, regarding that in situations in which the object is only detected by one sensor channel of the multiple sensor channels, the object is categorized as a lost track 246 and excluded from further data processing by the object-matching system 200, where the sensor channels include data from LiDAR sensors 210, camera sensors 212, and radar sensors 214, as described in ¶0043; ¶0057-0058, with respect to the first example 330 in Figure 3, regarding that the ghost LiDAR object 335 is discarded/treated as a lost track), or
the object type of the target dynamic fusion track being an unknown type (see ¶0047, regarding that objects categorized as lost track 246 are excluded, as opposed to objects categorized as persistent track 242 and new track 244).
While Ma has been applied to both of the limitations of “the error not having occurred in the radar and the camera in the specific frame” and “the object type of the target dynamic fusion track being an unknown type,” only one of these limitations is required to be taught by prior art.
In view of the indefiniteness of the limitations discussed in the rejections of claims 2 and 13 under 35 U.S.C. 112(b), the examiner has accorded the limitations of claims 2 and 13 their broadest reasonable interpretation.
Claims 3 and 14
Ma further discloses that the instructions, when executed by the one or more processors, further cause the vehicle control apparatus to:
obtain a fusion track in the specific frame, based on at least one of the camera track in the specific frame, or the radar track in the specific frame (see ¶0066, with respect to step 408 of Figure 4, regarding that a second object included in the second sensor data is obtained, where the second sensor system may be a camera system or radar sensor system, as described in ¶0064); and
obtain the target dynamic fusion track, based on at least one of a LiDAR track in the specific frame or the fusion track (see ¶0065, with respect to step 406 of Figure 4, regarding that a first object included in the first sensor data is obtained, where the first sensor system may be a LiDAR sensor system, as described in ¶0064).
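For illustration only, the two-stage combination recited in claims 3 and 14 may be sketched as follows. The helper and its names are hypothetical and are not drawn from Ma or from the claims as filed.

```python
# Illustrative sketch only: camera/radar tracks -> fusion track (stage 1);
# LiDAR track and/or fusion track -> target dynamic fusion track (stage 2).
def build_dynamic_fusion_track(lidar_track=None, camera_track=None, radar_track=None):
    # Stage 1: fusion track based on at least one of the camera track or the radar track.
    fusion_sources = [name for name, trk in (("camera", camera_track), ("radar", radar_track))
                      if trk is not None]
    fusion_track = {"sources": fusion_sources} if fusion_sources else None

    # Stage 2: dynamic fusion track based on at least one of the LiDAR track or the fusion track.
    sources = (["lidar"] if lidar_track is not None else []) + (fusion_track["sources"] if fusion_track else [])
    return {"sources": sources} if sources else None

# Example: a track supported by the LiDAR track and a camera-based fusion track.
# build_dynamic_fusion_track(lidar_track={}, camera_track={})  ->  {"sources": ["lidar", "camera"]}
```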
Claims 4 and 15
Ma further discloses that the instructions, when executed by the one or more processors, cause the vehicle control apparatus to delete the target dynamic fusion track by:
deleting the target dynamic fusion track, based on the target dynamic fusion track being obtained only via the radar track in the specific time (see ¶0047, regarding that in situations in which the object is only detected by one sensor channel of the multiple sensor channels, the object is categorized as a lost track 246 and excluded from further data processing by the object-matching system 200, where the sensor channels include data from LiDAR sensors 210, camera sensors 212, and radar sensors 214, as described in ¶0043; ¶0057-0059, with respect to the fifth and sixth examples 370, 380 in Figure 3, regarding that the ghost image object 339 is discarded/treated as a lost track, where camera sensor may alternatively be a radar sensor system, as described in ¶0064), and based on at least one of:
the target dynamic fusion track being obtained based on only the fusion track obtained in the specific frame (see ¶0047, regarding that in situations in which the object is only detected by one sensor channel of the multiple sensor channels, the object is categorized as a lost track 246 and excluded from further data processing by the object-matching system 200), or
a second dynamic fusion track different from the target dynamic fusion track being obtained via the LiDAR track in the specific frame (see ¶0047, regarding that in situations in which the object is only detected by one sensor channel of the multiple sensor channels, the object is categorized as a lost track 246 and excluded from further data processing by the object-matching system 200, where the sensor channels include data from LiDAR sensors 210, camera sensors 212, and radar sensors 214, as described in ¶0043; ¶0057-0059, with respect to the fifth and sixth examples 370, 380 in Figure 3, regarding that the ghost image object 339 is discarded/treated as a lost track while matching LiDAR objects 331, 333, and 334 are categorized as persistent or new tracks).
While Ma has been applied to both of the limitations of “the target dynamic fusion track being obtained based on only the fusion track obtained in the specific frame” and “a second dynamic fusion track different from the target dynamic fusion track being obtained via the LiDAR track in the specific frame,” only one of these limitations is required to be taught by prior art.
In view of the indefiniteness of the limitations discussed in the rejections of claims 4 and 15 under 35 U.S.C. 112(b), the examiner has accorded the limitations of claims 4 and 15 their broadest reasonable interpretation.
Claims 5 and 16
Ma further discloses that the instructions, when executed by the one or more processors, cause the vehicle control apparatus to delete the target dynamic fusion track by:
deleting the target dynamic fusion track, based on the target dynamic fusion track being obtained only via the radar track in the specific frame (see ¶0047, regarding that in situations in which the object is only detected by one sensor channel of the multiple sensor channels, the object is categorized as a lost track 246 and excluded from further data processing by the object-matching system 200, where the sensor channels include data from LiDAR sensors 210, camera sensors 212, and radar sensors 214, as described in ¶0043; ¶0057-0059, with respect to the fifth and sixth examples 370, 380 in Figure 3, regarding that the ghost image object 339 is discarded/treated as a lost track, where camera sensor may alternatively be a radar sensor system, as described in ¶0064), and based on at least one of:
the target dynamic fusion track being obtained based on only the fusion track obtained in the specific frame (see ¶0047, regarding that in situations in which the object is only detected by one sensor channel of the multiple sensor channels, the object is categorized as a lost track 246 and excluded from further data processing by the object-matching system 200),
a second fusion track different from the fusion track in the specific frame being obtained via the camera track (see ¶0047, regarding that in situations in which the object is only detected by one sensor channel of the multiple sensor channels, the object is categorized as a lost track 246 and excluded from further data processing by the object-matching system 200, where the sensor channels include data from LiDAR sensors 210, camera sensors 212, and radar sensors 214, as described in ¶0043),
a third fusion track different from the fusion track being obtained via the camera track in the specific frame (see ¶0047, regarding that in situations in which the object is only detected by one sensor channel of the multiple sensor channels, the object is categorized as a lost track 246 and excluded from further data processing by the object-matching system 200, where the sensor channels include data from LiDAR sensors 210, camera sensors 212, and radar sensors 214, as described in ¶0043; ¶0057-0059, with respect to the fifth and sixth examples 370, 380 in Figure 3, regarding that the ghost image object 339 is discarded/treated as a lost track while matching LiDAR objects 331, 333, and 334 are categorized as persistent or new tracks), or
a second dynamic fusion track different from the target dynamic fusion track being obtained via the camera track in the specific frame (see ¶0047, regarding that in situations in which the object is only detected by one sensor channel of the multiple sensor channels, the object is categorized as a lost track 246 and excluded from further data processing by the object-matching system 200, where the sensor channels include data from LiDAR sensors 210, camera sensors 212, and radar sensors 214, as described in ¶0043).
While Ma has been applied to all of the limitations above, only one of these limitations is required to be taught by prior art.
In view of the indefiniteness of the limitations discussed in the rejections of claims 5 and 16 under 35 U.S.C. 112(b), the examiner has accorded the limitations of claims 5 and 16 their broadest reasonable interpretation.
Claims 7 and 18
Ma further discloses that the instructions, when executed by the one or more processors, further cause the vehicle control apparatus to:
classify the target object corresponding to the deleted target dynamic fusion track as an object incapable of being in the movement state (see ¶0047, regarding that an object is categorized as lost track 246 in situations in which the object is only detected by one sensor channel of the multiple sensor channels, defined in ¶0043).
The lost track of Ma pertains to “ghost objects,” which are described in ¶0057 as false representations of an object and thus may be reasonably interpreted as “incapable of being in the movement state.”
Claims 8 and 19
Ma further discloses that each of the plurality of dynamic fusion tracks corresponds to at least one of a pedestrian, an automobile, a two-wheeled vehicle, or a bicycle (see ¶0053, regarding that the detected objects may be a bicycle, motorcycle, or pedestrian).
Claim 12
Ma discloses the claimed vehicle control method performed by one or more processors, as discussed in the rejection of claim 1.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 6 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Ma in view of Nguyen et al. (US 2016/0180177 A1), hereinafter Nguyen.
Claims 6 and 17
Ma does not further disclose that the instructions, when executed by the one or more processors, further cause the vehicle control apparatus to:
determine, in a lane in which the vehicle is located, a point, at which a difference between a longitudinal position of the vehicle and a longitudinal position of the point is less than a specified distance, to be included in the specified area.
However, this limitation does not affect the overall operation of the method of Ma, and it would have been obvious to incorporate particular ranges in which data points may be acquired, in light of Nguyen, as discussed below.
Specifically, Nguyen teaches the known technique of determin[ing], in a lane in which vehicle 15 (similar to the vehicle taught by Ma) is located, a point, at which a difference between a longitudinal position of vehicle 15 and a longitudinal position of the point is less than a specified distance, to be included in the dynamic detection range (similar to the specified area taught by Ma) (see ¶0032-0035, with respect to Figure 3, regarding that false positives 115, 116 detected from captured frames of image data 92 are removed, due to lying outside of the dynamic detection range of lidar detector 50, where pavement markers 35, 40 are detected in a “longitudinal position” with respect to the “longitudinal position” of vehicle 15, as depicted in Figure 1). Similar to Ma, Nguyen teaches multiple sensors used in combination to detect objects in the vicinity of a vehicle (see ¶0022, regarding the use of camera 45 and lidar detector 50).
Since the systems of Ma and Nguyen are directed to the same purpose, i.e. removing false vehicle sensor data, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Ma to further determine, in a lane in which the vehicle is located, a point, at which a difference between a longitudinal position of the vehicle and a longitudinal position of the point is less than a specified distance, to be included in the specified area, in light of Nguyen, with the predictable result of further reducing false positives due to reflections from headlights and guard rails (¶0030 of Nguyen), similar to the “ghost objects” of Ma.
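For illustration only, the claims 6/17 limitation addressed by the combination with Nguyen reduces to a longitudinal distance check within the ego lane. The function name, coordinate convention, and parameters below are assumptions chosen for the sketch, not features of either reference.

```python
# Illustrative sketch only: a point in the lane in which the vehicle is located is
# included in the "specified area" when its longitudinal offset from the vehicle is
# less than a specified distance (cf. Nguyen's dynamic detection range).
def point_in_specified_area(vehicle_longitudinal_pos: float,
                            point_longitudinal_pos: float,
                            point_in_ego_lane: bool,
                            specified_distance: float) -> bool:
    if not point_in_ego_lane:
        return False
    return abs(vehicle_longitudinal_pos - point_longitudinal_pos) < specified_distance
```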
Claims 9-11 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Ma in view of Yun et al. (US 2023/0090259 A1), hereinafter Yun.
Claims 9 and 20
Ma does not further disclose:
a near vehicle detector (NVD) camera, different from the camera, configured to capture one of a front of the vehicle or a rear of the vehicle; and
a rear side view (RSIR) camera, different from the camera, configured to capture a rear corner of the vehicle.
However, Ma discloses in ¶0043 that one or more camera sensors may be used; therefore, it would have been obvious to incorporate NVD and RSIR cameras, in light of Yun.
Specifically, Yun teaches a near vehicle detector (NVD) (i.e. NVD unit 130), and a rear side view (RSIR) camera (i.e. RSIR 140) (see ¶0063, regarding sensing device 100 includes one or more sensors for obtaining information on a target object located in the vicinity of the vehicle, where the sensors include a near vehicle detection (NVD) unit 130 and rear side view camera (RSIR) 140, as described in ¶0080), where additional sensors include a camera (similar to the camera taught by Ma) and thus, the NVD and RSIR are different from the camera described in ¶0063. The rear side view camera (RSIR) taught by Yun is inherently configured to capture a rear corner of the vehicle, in light of its description as being “rear side view,” and thus, the near vehicle detection (NVD) unit of Yun may be inherently configured to capture a rear of the vehicle, given that the NVD unit 130 and RSIR 140 obtain tracks of the same object at the same time (see ¶0081).
Since the systems of Ma and Yun are directed to the same purpose, i.e. determining if detections from different sensors represent the same physical object, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Ma to further include a near vehicle detector (NVD) camera, different from the camera, configured to capture one of a front of the vehicle or a rear of the vehicle, and a rear side view (RSIR) camera, different from the camera, configured to capture a rear corner of the vehicle, in light of Yun, with the predictable result of incorporating additional sensors for improved reliability and accuracy in various situations (¶0007 of Yun), where the sensors have similar characteristics that allow for simplified data conversion and improved sensor fusion (¶0080 of Yun).
Ma, as modified by Yun, further discloses that the instructions, when executed by the one or more processors, further cause the vehicle control apparatus to obtain the plurality of dynamic fusion tracks by:
obtaining the plurality of dynamic fusion tracks via at least one of the LiDAR, the camera, the radar, the NVD camera, or the RSIR camera (see ¶0043-0044, with respect to Figure 2, regarding that detection and segmentation module 220 detects one or more objects included in the raw pointclouds captured by LiDAR sensors 210, image data captured by camera sensors 212, and radar data captured by radar sensors 214, where the objects are categorized as persistent track 242, new track 244, or lost track 246, as described in ¶0047, and may pertain to a pedestrian, bicycle, motorcycle, etc., as described in ¶0053). Only one of the LiDAR, camera, radar, NVD camera, or RSIR camera is required to be taught by prior art.
Claim 10
Ma, as modified by Yun to incorporate NVD and RSIR sensors, further discloses that the instructions, when executed by the one or more processors, further cause the vehicle control apparatus to delete the target dynamic fusion track by:
deleting the target dynamic fusion track based on the error not having occurred in the radar, the camera, the NVD camera, and the RSIR camera (see ¶0047, regarding that in situations in which the object is only detected by one sensor channel of the multiple sensor channels, the object is categorized as a lost track 246 and excluded from further data processing by the object-matching system 200, where the sensor channels include data from radar sensors, LiDAR sensors, and one or more cameras, as described in ¶0043; ¶0057-0058, with respect to the first example 330 in Figure 3, regarding that the ghost LiDAR object 335 is discarded/treated as a lost track).
Claim 11
Ma further discloses that the instructions, when executed by the one or more processors, cause the vehicle control apparatus to obtain the plurality of dynamic fusion tracks (see ¶0043-0044, with respect to Figure 2, regarding that detection and segmentation module 220 detects one or more objects included in the raw pointclouds captured by LiDAR sensors 210, image data captured by camera sensors 212, and radar data captured by radar sensors 214). Ma is modified by Yun to teach additional cameras, consistent with the disclosure of one or more camera sensors in ¶0043 of Ma.
Specifically, Yun teaches a technique of obtaining tracks (similar to the step of “obtain the plurality of dynamic fusion tracks” taught by Ma) by:
obtaining, via the NVD camera, an NVD track corresponding to a target object (similar to the target object taught by Ma) (see ¶0081, regarding tracks are obtained from the NVD unit 130, where the track is of a target object, as described in ¶0063-0064); and
obtaining, via the RSIR camera, an RSIR track corresponding to a target object (similar to the target object taught by Ma) (see ¶0081, regarding tracks are obtained from the RSIR 140, where the track is of a target object, as described in ¶0063-0064).
Yun further teaches obtaining a fusion track, based on at least one of the NVD track, or the RSIR track (see ¶0081, regarding a fusion sensor track is output based on the tracks obtained from lidar unit 120, NVD unit 130, and RSIR 140). Given that the NVD track and RSIR track are obtained at the same time for sensor fusion, these tracks may be reasonably interpreted as being associated with the same “frame.”
Ma further discloses obtaining the plurality of dynamic fusion tracks based on a LiDAR track in the specific frame (see ¶0043-0044, with respect to Figure 2, regarding that detection and segmentation module 220 detects one or more objects included in the raw pointclouds captured by LiDAR sensors 210, image data captured by camera sensors 212, and radar data captured by radar sensors 214).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Specifically, Portnoy et al. (US 2022/0414923 A1) teaches removing dynamic objects from multiple lidar scans before aggregating the point cloud onto the camera image (see ¶0005); Kupershtein et al. (US 2024/0053438 A1) teaches removing ghost objects from the LIDAR measurement data (see ¶0134), where different types of sensors are used to generate the sensor signal output (see ¶0149); and Keilaf et al. (US 2025/0299441 A1) teaches determining a “ghost” detection by comparing the point cloud data of a LIDAR system and an ambient light image from a camera (see ¶0185).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Sara J Lewandroski whose telephone number is (571)270-7766. The examiner can normally be reached Monday-Friday, 9 am-5 pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ramya P Burgess can be reached at (571)272-6011. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SARA J LEWANDROSKI/Examiner, Art Unit 3661