Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This is the first Office action on the merits and is responsive to the papers filed 07/07/2023. Claims 1-20 are currently pending and examined below.

Priority

Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d).

Information Disclosure Statement

The information disclosure statements submitted by Applicant are in compliance with the provisions of 37 CFR 1.97, 1.98 and MPEP § 609. They have been placed in the application file, and the information referred to therein has been considered as to the merits.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 5-7, 9-10, 13-15, and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Shu et al. (US 2018/0299552 A1, "Shu") in view of Sivan et al. (US 2017/0237897 A1, "Sivan").

Regarding claim 1, Shu teaches a Light Detection and Ranging (LiDAR) for a vehicle ([0002], [0061]), the LiDAR comprising: a transmitter configured to generate and transmit lights (Shu's Tx module 240, emitter array 242, and emitted light/pulse trains; see Shu [0069]-[0071]); a receiver configured to receive lights reflected from an object (Shu's Rx module 230, sensor array 236, and reflected portions 239 detected from an object; see Shu [0069]-[0073]); and a signal processor configured to detect the object by processing the lights received by the receiver (Shu's ASIC/controller/filtering/histogram processing detects return signals and determines range/object information; see Shu [0072]-[0075]), and generate one frame by accumulating a plurality of shots corresponding to the lights (Shu expressly defines a shot as a pulse train over a detection interval, a measurement as multiple pulse trains over multiple shots, and a histogram that accumulates counts across the shots. Shu also states that the collected range data can be post-processed into one or more frames/depth images/3D point clouds; see Shu [0049]-[0052], [0093], [0107]-[0122], [0184]-[0191]).

Shu teaches that a weight is assigned to each pulse train and that at least two pulse trains can have different weights ([0187]). However, Shu fails to explicitly teach wherein a newest shot among the plurality of shots for generating the one frame is weighted with a highest importance in the one frame generated by accumulating the plurality of shots.

Sivan teaches sequentially captured images/frames with weighting factors satisfying WEIGHT(i) ≤ WEIGHT(i+1) and indicates that a recent frame may be assigned a higher weight than an older frame, with older images less dominant than newer ones ([0070]-[0074], [0192]-[0198], especially [0196]).

A person of ordinary skill in the art would have been motivated to apply Sivan's explicit recency-weighting rule to Shu's multi-shot LiDAR accumulation because both references combine temporally sequential captures into a single resulting metric/output. Applying greater weight to later/newer LiDAR shots would have predictably caused the accumulated frame to better represent the current scene while reducing the stale contribution from earlier shots.
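The recency-weighted accumulation discussed above, i.e., Sivan's WEIGHT(i) ≤ WEIGHT(i+1) rule applied to Shu's multi-shot accumulation, can be illustrated with a minimal sketch. The function name and sample values below are hypothetical and are not drawn from either reference:

```python
# Illustrative sketch (not Shu's or Sivan's actual implementation): accumulating
# temporally ordered "shots" into one frame value with non-decreasing weights,
# so the newest shot contributes the most (highest importance).

def accumulate_frame(shots, weights):
    """Weighted accumulation of per-shot measurements into one frame value.

    shots   -- per-shot measurements, oldest first
    weights -- non-decreasing weights, one per shot (newest weighted most)
    """
    assert len(shots) == len(weights)
    # Sivan-style recency rule: WEIGHT(i) <= WEIGHT(i+1)
    assert all(weights[i] <= weights[i + 1] for i in range(len(weights) - 1))
    total = sum(weights)
    # Dividing by the weight total normalizes the contributions (sum of
    # effective weights equals 1), yielding a weighted arithmetic mean.
    return sum(w * s for w, s in zip(weights, shots)) / total

# Four shots of a range estimate drifting from 10.0 m to 10.6 m: the newest
# shot (10.6 m) dominates, pulling the accumulated frame toward it.
frame = accumulate_frame([10.0, 10.2, 10.4, 10.6], [1, 2, 3, 4])  # -> 10.4
```

The same normalization step (dividing by the weight total) corresponds to the weighted arithmetic mean with weights summing to one that is relied upon for claim 7 below.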
Regarding claim 2, Shu in view of Sivan teaches the LiDAR of claim 1, wherein an older shot among the plurality of shots for generating the one frame is weighted with a lower importance in the one frame generated by accumulating the plurality of shots (Sivan, [0196]: a recent frame is assigned a higher weight than an older frame; it follows that an older shot has lower importance.).

Regarding claim 5, Shu in view of Sivan teaches the LiDAR of claim 1, wherein the signal processor is configured to apply various weight values to the plurality of shots such that a newer shot among the plurality of shots for generating the one frame is weighted with a higher importance in the one frame generated by accumulating the plurality of shots (as indicated for claim 1, Shu teaches a LiDAR system including a signal processor that generates one frame by accumulating a plurality of shots. Sivan teaches combining multiple sequentially captured images using weighting factors WEIGHT(i), where a later image may be assigned a greater weight than earlier images, e.g., WEIGHT(i) ≤ WEIGHT(i+1), and further teaches weighted arithmetic averaging of the sequential captures, with older images being less dominant than newer images ([0070]-[0072], [0193]-[0197]).).

Regarding claim 6, Shu in view of Sivan teaches the LiDAR of claim 5, wherein the signal processor is configured to apply a larger weight value to the newer shot (Sivan, [0196]).

Regarding claim 7, Shu in view of Sivan teaches the LiDAR of claim 6, wherein a sum of the weight values applied to the plurality of shots is 1 (Shu teaches generating one frame by accumulating a plurality of shots. Sivan teaches combining multiple sequentially captured images using weighting factors in a weighted arithmetic mean ([0193]-[0194]). Sivan further expressly teaches a normalized implementation in which the sum of the weighting factors equals one, i.e., Σ_{j=0}^{N} w(i−j) = 1 ([0195]).
It would have been obvious to one of ordinary skill in the art to normalize the weights applied to Shu's plurality of shots so that their sum equals 1, because normalization yields a standard weighted combination that preserves scale and provides a predictable contribution of each shot to the accumulated frame.).

Claims 9-10 and 13-15 are method claims for controlling a LiDAR corresponding to the LiDAR of claims 1-2 and 5-7. They are rejected for the same reasons.

Regarding claim 17, Shu teaches a vehicle comprising a LiDAR for detecting an object ([0061]-[0069]). The rest of the claim is rejected for the same reasons as claim 1. Claim 18 is rejected for the same reasons as claim 5.

Regarding claim 19, Shu in view of Sivan teaches the vehicle of claim 17, wherein the LiDAR is configured to detect the object located at a front side, a rear side, or a lateral side of the vehicle (Shu teaches a vehicle comprising a LiDAR for detecting an object ([0002], [0061], [0068]). Shu further teaches that the LiDAR is configured to detect objects at different positional sides of the vehicle. In particular, Shu teaches that multiple solid-state LiDAR subsystems may face different directions to capture a composite field of view larger than that of any one subsystem alone ([0063]), that the system may provide a 3D image of the environment surrounding the car ([0068]), and that scanning embodiments may sample a full 360-degree region of the surrounding volume ([0093]). Shu also teaches front-scene coverage ([0064]) and blind-spot monitoring ([0068]), the latter corresponding to lateral-side object detection.).
Regarding claim 20, Shu in view of Sivan teaches the vehicle of claim 17, wherein the vehicle is an autonomous vehicle or comprises an advanced driver assistance system (ADAS) (Shu teaches that LiDAR systems are used for vehicles such as cars and trucks ([0002]), and further teaches that in a fully autonomous vehicle, the LiDAR system can provide a real-time 3D image of the environment surrounding the car to aid in navigation ([0068]). Shu also teaches that the LiDAR system may be employed as part of an advanced driver-assistance system (ADAS) or safety system, including adaptive cruise control, automatic parking, blind-spot monitoring, and collision avoidance systems ([0068]).).

Claims 3, 4, 11, and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Shu in view of Sivan, and further in view of Hall et al. (US 2017/0219695 A1, "Hall").

Regarding claim 3, Shu in view of Sivan teaches the LiDAR of claim 2, but fails to explicitly teach wherein the transmitter is configured to vary power of the lights for generating the plurality of shots such that a newer shot among the plurality of shots for generating the one frame is weighted with a higher importance in the one frame generated by accumulating the plurality of shots. Shu teaches a LiDAR system that performs a measurement using a plurality of pulse trains/shots and accumulates weighted contributions from different shots into an accumulated result/frame ([0049]-[0050], [0155]-[0160]), but fails to teach varying the power of the lights for generating the plurality of shots. However, Hall teaches a transmitter arrangement in which emitted pulses are generated by selective discharge of multiple energy storage elements, such that the emitted pulses can vary in magnitude and duration ([0056]-[0059]).
Hall further teaches an example in which later pulses in the sequence are emitted with greater amplitude/power than earlier pulses, e.g., four relatively small pulses followed by a fifth pulse having a relatively large amplitude and long duration (Hall [0060]). Thus, Hall teaches varying the power of the emitted light so that a later/newer pulse is stronger than earlier pulses. It would have been obvious to apply Hall's known variable-power pulse transmission to Shu's multi-shot accumulated LiDAR measurement so that a newer shot has a greater contribution, i.e., higher importance, in the accumulated frame, since doing so predictably biases the accumulated result toward more recent measurements while also improving detectability/ranging performance for selected later shots.

Regarding claim 4, Shu in view of Sivan and Hall teaches the LiDAR of claim 3, wherein the transmitter is configured to transmit a light for the newer shot with larger power. Shu teaches a LiDAR system employing multiple temporally distinct shots/pulse trains for ranging/object detection (Shu, [0049]-[0050], [0107]-[0120]). Hall teaches a LiDAR transmitter having a pulsed light-emitting device whose emitted pulses can vary in magnitude and duration (Hall, [0057]-[0059]). Hall further teaches transmitting later light in the sequence with larger power, e.g., a measurement pulse sequence having four relatively small-amplitude pulses followed by a fifth pulse having a relatively large amplitude and long duration (Hall [0060]). Thus, Hall teaches the transmitter being configured to transmit light for a newer/later shot with larger power.
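Hall's example pulse sequence ([0060]) can be sketched as follows. The amplitude and duration values are illustrative assumptions, not figures taken from Hall:

```python
# Hypothetical sketch of Hall's example ([0060]): four relatively small pulses
# followed by a fifth pulse of larger amplitude and longer duration, so the
# last (newest) pulse in the sequence carries the most power.

def build_pulse_sequence(small_amp=1.0, small_dur=5, large_amp=4.0, large_dur=20):
    """Return a list of (amplitude, duration_ns) pairs, strongest pulse last."""
    pulses = [(small_amp, small_dur)] * 4   # four relatively small pulses
    pulses.append((large_amp, large_dur))   # fifth pulse: large amplitude/duration
    return pulses

seq = build_pulse_sequence()
# The last/newest pulse has the largest amplitude in the sequence.
assert seq[-1][0] == max(amp for amp, _ in seq)
```

In an accumulation scheme such as Shu's, transmitting the later shot at higher power in this manner would naturally give that shot a larger contribution to the accumulated result.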
It would have been obvious to further modify Shu's multi-shot LiDAR system with Hall's known variable-power pulse transmission so that a newer shot is transmitted with larger power, because Hall teaches that varying pulse amplitude within a sequence improves performance for different ranging conditions, including allowing stronger later pulses for longer-range or more robust detection, while Shu already teaches temporally ordered shots whose accumulated results are used for object detection and ranging.

Claims 11 and 12 are method claims for controlling a LiDAR corresponding to the LiDAR of claims 3-4. They are rejected for the same reasons.

Claims 8 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Shu in view of Sivan, and further in view of Zhou et al. (US 2006/0139494 A1, "Zhou").

Regarding claim 8, Shu in view of Sivan teaches the LiDAR of claim 1, but fails to explicitly teach wherein, when the object is moving, the newest shot among the plurality of shots for generating the one frame is weighted with the highest importance in the one frame generated by accumulating the plurality of shots. Shu teaches a LiDAR system for a vehicle using multiple temporally distinct shots, where a "shot" is the emission and detection of a pulse train during a detection interval, and multiple shots are accumulated to generate measurement/frame data (Shu [0049]-[0050], [0092]-[0093], [0115]-[0122]). However, Shu fails to explicitly teach that when the object is moving, the newest shot is weighted with the highest importance in the accumulated frame/result. Zhou teaches motion-adaptive temporal weighting in which motion between current and previous data is detected, and when the object/pixel is in a motion region, the current/newest data is retained while the previous data is suppressed to avoid motion blurring ([0010], [0020], [0032]-[0043]).
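Zhou's motion-adaptive weighting can be sketched per sample as follows. The motion threshold and the static-region blend factor are hypothetical parameters chosen for illustration; only the 1/0 weighting of current versus previous data in a motion region follows Zhou ([0041]):

```python
# Illustrative sketch of Zhou-style motion-adaptive temporal weighting: when
# motion is detected between the current and previous samples, weights of
# 1 and 0 are assigned to the current and previous data, so the output equals
# the newest value; in a static region the two samples are blended.

def temporal_filter(current, previous, motion_threshold=0.5, blend=0.5):
    """Return the filtered value for one pixel/shot pair."""
    if abs(current - previous) > motion_threshold:   # motion region
        w_cur, w_prev = 1.0, 0.0                     # keep newest, drop stale data
    else:                                            # static region
        w_cur, w_prev = blend, 1.0 - blend           # ordinary temporal blend
    return w_cur * current + w_prev * previous

# Moving object: the newest observation is kept unchanged.
assert temporal_filter(12.0, 10.0) == 12.0
# Static scene: the two observations are averaged.
assert abs(temporal_filter(10.1, 10.0) - 10.05) < 1e-9
```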
In particular, Zhou also teaches that in a motion region the filtered pixel equals the original current value, i.e., weights of 1 and 0 are assigned to the current and previous data, respectively ([0041]). It would have been obvious to modify Shu's multi-shot LiDAR accumulation technique with Zhou's motion-adaptive weighting because Zhou teaches that, when motion is present, giving the highest importance to the newest/current observation avoids motion blur and stale-data artifacts. Applying that known principle to Shu's temporally accumulated LiDAR shots would have predictably improved moving-object detection accuracy by reducing the contribution of older shots that no longer accurately represent the moving object.

Claim 16 is a method claim for controlling a LiDAR corresponding to the LiDAR of claim 8. It is rejected for the same reasons.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Murao et al. (US 2015/0086079 A1) teaches a vehicle control system and image sensor. Yamada et al. (US 2011/0234802 A1) teaches an on-vehicle lighting apparatus.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JEMPSON NOEL, whose telephone number is (571) 272-3376. The examiner can normally be reached Monday-Friday, 8:00-5:00. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Yuqing Xiao, can be reached at (571) 270-3603. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system.
Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JEMPSON NOEL/
Examiner, Art Unit 3645

/YUQING XIAO/
Supervisory Patent Examiner, Art Unit 3645