Detailed Action
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.
Status of the Claims
1. This action is in response to the applicant’s filing on April 24, 2023. Claims 1-20 are pending.
Drawings
The drawings filed on April 24, 2023 are acceptable subject to correction of the informalities indicated below. In order to avoid abandonment of this application, correction is required in reply to the Office action. The correction will not be held in abeyance.
Informalities:
The text for the flowchart in figure 7, steps S700 – S706 is missing.
The text for the flowchart in figure 8, steps S800 – S812 is missing.
The correct labeling for these figures is found in the instant specification.
INFORMATION ON HOW TO EFFECT DRAWING CHANGES
Replacement Drawing Sheets
Drawing changes must be made by presenting replacement sheets which incorporate the desired changes and which comply with 37 CFR 1.84. An explanation of the changes made must be presented either in the drawing amendments section or the remarks section of the amendment paper. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). A replacement sheet must include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of the amended drawing(s) must not be labeled as “amended.” If the changes to the drawing figure(s) are not accepted by the examiner, applicant will be notified of any required corrective action in the next Office action. No further drawing submission will be required, unless applicant is notified.
Identifying indicia, if provided, should include the title of the invention, inventor’s name, and application number, or docket number (if any) if an application number has not been assigned to the application. If this information is provided, it must be placed on the front of each sheet and within the top margin.
Annotated Drawing Sheets
A marked-up copy of any amended drawing figure, including annotations indicating the changes made, may be submitted or required by the examiner. The annotated drawing sheet(s) must be clearly labeled as “Annotated Sheet” and must be presented in the amendment or remarks section that explains the change(s) to the drawings.
Timing of Corrections
Applicant is required to submit acceptable corrected drawings within the time period set in the Office action. See 37 CFR 1.85(a). Failure to take corrective action within the set period will result in ABANDONMENT of the application.
If corrected drawings are required in a Notice of Allowability (PTOL-37), the new drawings MUST be filed within the THREE MONTH shortened statutory period set for reply in the “Notice of Allowability.” Extensions of time may NOT be obtained under the provisions of 37 CFR 1.136 for filing the corrected drawings after the mailing of a Notice of Allowability.
Claim Rejections – 35 USC § 112(b)
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 10 & 20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being incomplete for omitting essential elements, such omission amounting to a gap between the elements. See MPEP § 2172.01. The omitted element is a motor or actuator to which the change in offset voltage is applied. Without this essential element, no adjustment of the reference axis is possible by the application of an offset voltage.
Claim Rejections – 35 USC § 102
2. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
3. Claims 1-6, 8-9, 11-16, & 18-19 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Englard et al. (US 20190180502 A1), hereinafter Englard.
4. Regarding Claims 1 & 11:
Englard teaches a reference axis adjustment controller, ([0010]: FIG. 2 is a block diagram of an example light detection and ranging (lidar) system that may be controlled using the sensor control architecture of FIG. 1).
Englard teaches obtaining an inclination angle of a slope with respect to a horizontal plane, ([0039]: As another example, the configuration of the road ahead of the vehicle may be analyzed for purposes of adjusting the field of regard of a sensor (e.g., lidar, camera, etc.). In particular, the elevation of the field of regard (e.g., the elevation of the center of the field of regard) may be adjusted based on the slope of one or more portions of the road). Englard further teaches, ([0044]: constructing a three-dimensional mesh from the point cloud, or using two-dimensional (e.g., elevation and azimuth angle) distances for thresholding and weighting of an interpolation function).
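Englard’s slope-estimation step (paragraphs [0039] and [0044]) can be pictured with a short sketch. This is an illustration only, not Englard’s implementation: the function name is hypothetical, the road-surface points are assumed to have already been segmented out of the point cloud, and the inclination angle is recovered by a simple least-squares line fit of elevation against forward distance.

```python
import math

def estimate_slope_deg(road_points):
    """Estimate road inclination (degrees) from segmented road points.

    road_points: list of (forward_distance_m, elevation_m) pairs for the
    road surface ahead of the vehicle. A least-squares line fit gives the
    grade (rise over run); arctan converts it to an angle.
    """
    n = len(road_points)
    mean_x = sum(x for x, _ in road_points) / n
    mean_z = sum(z for _, z in road_points) / n
    num = sum((x - mean_x) * (z - mean_z) for x, z in road_points)
    den = sum((x - mean_x) ** 2 for x, _ in road_points)
    grade = num / den  # rise over run
    return math.degrees(math.atan(grade))

# Example: a road that rises 1 m for every 10 m of travel (~5.7 degrees).
points = [(x, 0.1 * x) for x in range(10, 60, 5)]
angle = estimate_slope_deg(points)
```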
Englard teaches, instructing to adjust a reference axis of an apparatus according to the inclination angle of the slope before the apparatus enters the slope, ([0039]: For example, the field of regard may be centered such that, given the slope(s) of the road ahead and the range constraints of the sensor, visibility (i.e., sensing distance) is maximized).
5. Regarding Claim 11:
Englard teaches a processing circuit, coupled to the storage device, configured to execute instructions, ([0007]: In another embodiment, a non-transitory computer-readable medium stores thereon instructions executable by one or more processors to implement a self-driving control architecture of a vehicle. The self-driving control architecture includes a perception component configured to receive a point cloud frame generated by a sensor configured to sense an environment through which a vehicle is moving). Englard further teaches, ([0036]: As a more specific example that utilizes lidar or radar data, the perception component may include (1) a segmentation module that partitions lidar or radar point clouds devices into subsets of points that correspond to probable objects).
6. Regarding Claims 2 & 12:
Englard teaches obtaining point cloud data from a Light Detection and Ranging (LiDAR) light receiver or using LiDAR technology, ([0036]: The sensor data (and possibly other data) is processed by a perception component of the vehicle, which outputs signals indicative of the current state of the vehicle's environment. For example, the perception component may identify positions of (and possibly classify and/or track) objects within the vehicle's environment. As a more specific example that utilizes lidar or radar data, the perception component may include (1) a segmentation module that partitions lidar or radar point clouds devices into subsets of points that correspond to probable objects, (2) a classification module that determines labels/classes for the subsets of points (segmented objects), and (3) a tracking module that tracks segmented and/or classified objects over time (i.e., across subsequent point cloud frames)).
7. Regarding Claim 12:
Englard teaches instructions to obtain point cloud data. See Claim 11.
8. Regarding Claims 3 & 13:
Englard teaches analyzing point cloud data to obtain the inclination angle of the slope, ([0131]: In some embodiments, which may or may not utilize machine learning, the attention model 634 determines where and/or how to focus one or more of the sensors 602 based on a configuration of the road on which the vehicle is traveling. In particular, the road configuration (e.g., slope), and possibly the orientation of the vehicle itself (e.g., heading downhill at a certain angle), may be analyzed to ensure that one or more of the sensors 602 are focused so as to collect more useful data, without, for example, being overly focused on the road immediately ahead of the vehicle or overly focused on the sky). Englard further teaches, ([0138]: For example, the attention model 634 may process lidar, radar and/or camera data generated by the sensors 602 to determine the slope of one or more portions of the road ahead).
9. Regarding Claims 4 & 14:
Englard teaches analyzing point cloud data to identify the slope. See Claims 3 & 13.
10. Regarding Claim 14:
Englard teaches instructions to analyze point cloud data. See Claim 11.
11. Regarding Claims 5 & 15:
Englard teaches analyzing point cloud data to determine a location of the slope, ([0138]: To determine the appropriate focus for the sensor(s), the attention model 634 may identify one or more road portions (e.g., ahead of the vehicle 712), and determine certain characteristics of the road portion(s). For example, the attention model 634 may process lidar, radar and/or camera data generated by the sensors 602 to determine the slope of one or more portions of the road ahead). Englard further teaches, ([0139]: Once the slope of one or more road portions ahead of (and possibly beneath) the vehicle 712 has/have been determined, the attention model 634 may determine a sensor direction 714 (e.g., elevation angle and possibly azimuthal angle) that satisfies one or more visibility criteria. For example, the attention model 634 may seek to maximize a sensing distance of a sensor in some direction along which the vehicle 712 is expected to travel (e.g., based on current planning from the motion planner 440 of FIG. 6, and/or based on mapping and navigation signals 432 of FIG. 6, etc.). To this end, the attention model 634 (or another unit of the sensor control component 630) may use well-known trigonometric principles/formulas to determine where a sensor field of regard would be focused for a given elevation (and possibly azimuthal) angle).
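The “well-known trigonometric principles/formulas” that Englard invokes in [0139] can be sketched as follows. This is a hypothetical model, not Englard’s actual computation: it assumes a flat road for `dist_to_slope` meters followed by a constant grade, a sensor mounted `sensor_height` meters above the road, and it returns the elevation angle that aims the boresight at a chosen point up the slope.

```python
import math

def aim_elevation_deg(sensor_height, dist_to_slope, slope_deg, look_ahead):
    """Elevation angle (degrees) that aims a sensor boresight at a point
    look_ahead meters up a road segment inclined at slope_deg, where the
    segment begins dist_to_slope meters ahead of a sensor mounted
    sensor_height meters above the road surface."""
    th = math.radians(slope_deg)
    x = dist_to_slope + look_ahead * math.cos(th)   # horizontal offset to target
    z = look_ahead * math.sin(th) - sensor_height   # vertical offset to target
    return math.degrees(math.atan2(z, x))

# A sensor 2 m above the road, a 10-degree upgrade starting 30 m ahead:
# aiming 50 m up the slope requires raising the boresight above horizontal.
e = aim_elevation_deg(2.0, 30.0, 10.0, 50.0)
```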
12. Regarding Claim 15:
Englard teaches instructions to analyze point cloud data. See Claim 11.
13. Regarding Claims 6 & 16:
Englard teaches the reference axis of the apparatus is adjusted to be parallel to the inclination angle of the slope, ([0139]: Once the slope of one or more road portions ahead of (and possibly beneath) the vehicle 712 has/have been determined, the attention model 634 may determine a sensor direction 714 (e.g., elevation angle and possibly azimuthal angle) that satisfies one or more visibility criteria. For example, the attention model 634 may seek to maximize a sensing distance of a sensor in some direction along which the vehicle 712 is expected to travel). It is clear from the geometrical constraints of the visibility criteria shown above that the axis of the sensor apparatus will be substantially parallel to the inclination angle of the slope of the road immediately prior to entering the slope, and that the axis will be reoriented to remain parallel to the road surface once the vehicle has entered the identified slope, thus maximizing sensing distance. See figures 10 & 11.
14. Regarding Claims 8 & 18:
Englard teaches the reference axis adjustment controller is disposed in the apparatus inside a vehicle or communicatively coupled to the vehicle, ([0079]: In some implementations, the light source 210, the scanner 220, and the receiver 240 may be packaged together within a single housing 255, which may be a box, case, or enclosure that holds or contains all or part of the lidar system 200. The housing 255 includes a window 257 through which the beams 225 and 235 pass. The controller 250 may reside within the same housing 255 as the components 210, 220, and 240, or the controller 250 may reside outside of the housing 255. In one embodiment, for example, the controller 250 may instead reside within, or partially within, the perception component 104 of the sensor control architecture 100 shown in FIG. 1).
15. Regarding Claims 9 & 19:
Englard teaches the reference axis adjustment controller comprises a motor configured to adjust the reference axis of the apparatus according to a tilt angle control signal, ([0081]: Generally speaking, the scanner 220 steers the output beam 225 in one or more directions downrange. To accomplish this, the scanner 220 may include one or more scanning mirrors and one or more actuators driving the mirrors to rotate, tilt, pivot, or move the mirrors in an angular manner about one or more axes). Englard further teaches, ([0083]: The one or more scanning mirrors of the scanner 220 may be communicatively coupled to the controller 250, which may control the scanning mirror(s) so as to guide the output beam 225 in a desired direction downrange or along a desired scan pattern).
Claim Rejections – 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
16. Claims 10 & 20 are rejected under 35 U.S.C. 103 as being unpatentable over Englard et al. (US 20190180502 A1), hereinafter Englard, in view of Ye et al. (US 20200348402 A1), hereinafter Ye.
17. Regarding Claims 10 & 20:
Englard teaches the reference axis adjustment controller comprises at least one mirror arranged in an array to adjust the reference axis of the apparatus. See Claims 9 & 19.
Englard does not teach changing an offset voltage of vertical scanning of the at least one mirror.
However, Ye teaches a method for adjusting sensor parameters, including LiDAR, ([0080]: In specific implementation, the parameter of a scanning apparatus of the lidar may be adjusted to control the orientation angle at the center of the field of view of the lidar system to be the desired detection angle. Optionally, the scanning apparatus is a two-dimensional galvanometer. The two-dimensional galvanometer transmits the laser pulse signal, which is transmitted by the lidar, to a two-dimensional space, and receives a laser pulse echo signal reflected from the two-dimensional space. The two-dimensional galvanometer used to control the orientation angle at the center of the field of view of the lidar to be the desired detection angle facilitates engineering implementation of an integrated and miniaturized lidar. Optionally, the scanning apparatus is two one-dimensional galvanometers that are perpendicular to each other and capable of vibrating independently. The two one-dimensional galvanometers control the scanning of the vertical field of view and the scanning of the horizontal field of view respectively. The center locations of the two galvanometers are controlled by two one-dimensional galvanometers respectively to implement two-dimensional orientation at the center of the field of view of the lidar. When the scanning apparatus is a two-dimensional galvanometer, the drive voltage or drive current of the two-dimensional galvanometer may be adjusted to control the orientation angle at the center of the field of view of the lidar system to be the desired detection angle).
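Ye’s drive-voltage adjustment ([0080]) can be illustrated with a minimal sketch. The linear volts-per-degree sensitivity, the function and parameter names, and the clamp limits are all assumptions for illustration; a real galvanometer driver would use the transfer characteristic from its datasheet.

```python
def offset_voltage(target_offset_deg, sensitivity_v_per_deg, base_v=0.0,
                   v_min=-10.0, v_max=10.0):
    """Drive-voltage offset that steers a galvanometer mirror by
    target_offset_deg, assuming a linear (small-angle) volts-per-degree
    response, clamped to the driver's output range."""
    v = base_v + target_offset_deg * sensitivity_v_per_deg
    return max(v_min, min(v_max, v))

# Steering the vertical field-of-view center up 5 degrees at 0.5 V/deg:
v = offset_voltage(5.0, 0.5)
```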
It would have been obvious to one of ordinary skill in the art at the time of filing to modify Englard with Ye to include an offset voltage to adjust the reference axis, since it is the same field of endeavor and results would have been predictable. One of ordinary skill in the art at the time of filing would have been motivated to modify Englard with Ye since (Ye: [0003]: With the development of driverless technologies, a wide field of view, a high resolution, and long ranging have become main development directions of lidars in the future). Using the offset voltage to adjust the reference axis depending on the slope of the road, as described above, increases detection range.
18. Claims 7 & 17 are rejected under 35 U.S.C. 103 as being unpatentable over Englard et al. (US 20190180502 A1), hereinafter Englard, in view of Takada et al. (WO 2023017796 A1), hereinafter Takada.
19. Regarding Claims 7 & 17:
Englard teaches the apparatus is a LiDAR apparatus. See Claims 5 & 15.
Englard does not teach the apparatus is a headlamp or tail light. However, Takada teaches, ([Step SP21]: In this step, based on the information input from the LiDAR unit 5, the control unit CO measures, for example, how much the road surface changes in the vertical direction at a predetermined horizontal measurement distance in front of the vehicle VE. A slope is calculated). Takada further teaches, ([Step SP26]: in FIG. 7, the direction in which the low beam LL is emitted is indicated by a solid line, and the light distribution of the low beam LL is indicated by a solid line in FIG. As shown by solid lines in FIGS. 7 and 9, in this example, when the vehicle VE has passed the first point P1, the low beam LL is tilted upward by an angle θ1 compared to the state before that. Specifically, in this step, the controller CO controls the actuator 22 of the low beam lamp unit 20 to tilt the lamp unit main body 21 upward. At this time, the controller CO tilts the lamp unit main body 21 upward so that the angle θ1 is within the range of the rising angle θ0 or less, which is the amount of change in the gradient of the road surface. In this way, the low beam LL is tilted upward within a range equal to or less than the inclination angle θ0 before the vehicle VE reaches the first point P1, and irradiates upward. Thus, in this example, the controller CO controls the low beam lamp unit 20 to change the lower end and the upper end of the light emitted from the low beam lamp unit 20 upward by the same change amount). Figure 7 shows both the LiDAR reference axis and the headlamp reference axis adjusted based on the upcoming slope in the road.
It would have been obvious to one of ordinary skill in the art at the time of filing to modify Englard with Takada to include an adjustment of the headlamp reference axis based on the slope of the road, since it is the same field of endeavor and results would have been predictable. One of ordinary skill in the art at the time of filing would have been motivated to modify Englard with Takada since (Takada: [Step SP24]: By controlling the LiDAR unit 5 as described above by the control unit CO, the LiDAR unit 5 detects the distance after the vehicle VE passes through the second point, compared to the case where the vertical position of the detection range does not change. be able to. Therefore, the LiDAR unit 5 can detect an object located farther than the object detected before this step, and the driver can quickly recognize the situation in front of the vehicle VE, thereby ensuring safety). The added safety benefit extends to the headlamp adjustment for the same reasons. By adjusting the headlamp, as disclosed by Takada, a larger section of the road ahead is illuminated, providing the driver more time to react to obstacles or changes in traffic.
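Takada’s constraint that the upward beam tilt θ1 stay within the road-grade change θ0 reduces to a simple clamp. The sketch below is hypothetical (the function name is mine, and only the uphill case from Takada’s Step SP26 is modeled):

```python
def headlamp_tilt_deg(requested_deg, grade_change_deg):
    """Upward low-beam tilt, clamped so it never exceeds the change in
    road grade ahead (Takada's theta_1 <= theta_0 constraint); negative
    requests are treated as no tilt."""
    return min(max(requested_deg, 0.0), grade_change_deg)

t = headlamp_tilt_deg(7.0, 5.0)  # request exceeds the grade change, so it is clamped
```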
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure.
US 20200041650 A1: Discloses a device and method for detecting position of a reference plane or road surface using LiDAR.
EP 3343097 A1: Discloses a device and method for controlling a vehicle lamp.
US 20200284883 A1: Discloses a LiDAR system and method of control.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAMES W NAPIER whose telephone number is (571)272-7451. The examiner can normally be reached Monday - Friday 8:00 am - 4:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Robert Hodge can be reached at (571) 272-2097. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J.W.N./Examiner, Art Unit 3645
/HELAL A ALGAHAIM/SPE, Art Unit 3645