Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
This office action is responsive to the amendment filed 12/1/2025. As directed by the amendment: claims 1-3, 5-7, and 9-20 are amended; claim 21 is newly added; and claims 4 and 8 are cancelled. Thus, claims 1-3, 5-7, and 9-21 are currently pending in this application.
Claim Objections
Claim 14 is objected to because of the following informalities: the last line of the claim recites the limitation “a height above a ground.” However, since claim 1, from which claim 14 depends, has been amended to include the limitation “a ground,” antecedent basis for this limitation already exists. The claim should be corrected to recite --a height above the ground--. Appropriate correction is required.
Information Disclosure Statement
The Information Disclosure Statement submitted on 10/6/2025 is in compliance with the provisions of 37 CFR 1.97 and 1.98 and has been considered.
Applicant requests clarification on the examiner’s statement regarding the IDS filed June 19, 2023. In the IDS filed June 19, 2023, the applicant lists seven (7) foreign documents. For documents 1-3 and 5-7, applicant states that these documents correspond to US publication numbers. Specifically, in the IDS submitted by the applicant:
Foreign document DE 102007023888 A1 “corresponds to US Publication No. 2008019567 A1”
Foreign document DE 102018101846 A1 “corresponds to US Publication No. 2019235085 A1”
Foreign document DE 102019213515 A1 “corresponds to US Publication No. 2020103531 A1”
Foreign document JP 2008-26997 A “corresponds to US Publication No. 2008019567 A1”
Foreign document JP 2019-164121 A “corresponds to US Publication No. 2019235085 A1”
Foreign document JP 2019-51971 A “corresponds to US Publication No. 2020103531 A1”
Because none of these documents was accompanied by an English language translation, the examiner attempted to locate the corresponding US publications so that the references could nonetheless be considered. This is why a statement was made regarding the US publication numbers.
Response to Amendment
Applicant’s amendments, filed 12/1/2025, have been fully considered.
The objections to claims 1-20 have been overcome by the amendments made to claims 1, 2, 4, 7, 12-16, 19, and 20 in response to those objections.
The rejections of claims 1-19 under 35 U.S.C. 112(b) have been overcome by the amendments made to claims 1, 15, and 16.
The amendments and arguments made in response to the claim rejections under 35 U.S.C. 102(a)(1) and 102(a)(2) have been fully considered, but they are not persuasive.
On page 13, applicant argues that even though the Eichenholz reference discloses the analysis of the entire point cloud, the reference fails to teach the limitation requiring that each layer is analyzed. Section 2111.01.II of the MPEP states: "Though understanding the claim language may be aided by explanations contained in the written description, it is important not to import into a claim, limitations that are not part of the claim. For example, a particular embodiment appearing in the written description may not be read into a claim when the claim language is broader than the embodiment." A person of ordinary skill in the art of lidar technologies would conclude that if an entire point cloud is analyzed, each of the individual scanning layers that make up the point cloud is inherently analyzed as well. If an entire group of data is analyzed, then any individual sub-section of that data group has inherently been analyzed as well.
Even in view of the amendment, claim 1 still does not recite any further limitations, such as specific algorithms or methods for analyzing individual layers, that would patentably distinguish it from the prior art reference. MPEP section 2141.02.V states “In delineating the invention as a whole, we look not only to the subject matter which is literally recited in the claim in question... but also to those properties of the subject matter which are inherent in the subject matter and are disclosed in the specification.” Amended claim 1 simply requires that the scanning layers are “individually evaluated.” As illustrated by Eichenholz, in Figs. 19 and 20 for example, modifications to the scan pattern are made in a layer-to-layer fashion, meaning that an individual evaluation of each scan line or scan layer is consistent with Eichenholz’s disclosure. Therefore, it is maintained that the Eichenholz reference inherently discloses the evaluation of every individual scan layer.
On page 14, applicant further argues that the prior art reference does not teach an individual evaluation based on scanning data together with a common evaluation that depends not on measured data, but on the result of the aforementioned individual evaluation. This two-step process includes (1) first analyzing the measured data to determine the presence of an object and (2) subsequently making a “common evaluation” based on the determination of whether an object was present. Section 2111 of the MPEP states that during patent examination, the pending claims must be “given their broadest reasonable interpretation consistent with the specification.” Consistent with broadest reasonable interpretation, “a common evaluation” of the safety relevant object could be the determination that a vehicle ahead is slowing down or stopping. Furthermore, if each pixel of the image is directly mapped to a point within the field of regard where a measurement was taken, the layer of each pixel is known. In other words, by identifying the location of an object in the field of view, the scanning layer in which it is located is inherently known. Therefore, Eichenholz does in fact teach the evaluation of each individual scanning layer, as well as a common evaluation where it is determined that the measurement is indeed indicative of an obstruction and that a remedial action should take place.
Additionally, on page 14, applicant appears to argue that the Eichenholz reference is incapable of this two-step evaluation process because of an alleged “all at once” evaluation of data. MPEP section 2151.I states that “Arguments presented by applicant cannot take the place of evidence in the record.” Eichenholz teaches a system capable of a high frame rate, which applicant has pointed out on page 12. However, a person of ordinary skill in the art would conclude that having a frame rate of 10 frames per second does not inherently disable the lidar system from processing data in a two-step evaluation process. Furthermore, Eichenholz actually does disclose a two-step process of (1) analyzing received data to identify targets and their locations, and (2) making a decision or evaluation based on the identified target. Evidence to support this can be found in paragraph [0134] of the Eichenholz reference, which describes a two-step process of (1) analyzing point cloud data to identify an object, like a stop light for example, and (2) making an evaluation based on the object, like determining the vehicle must stop because of the stoplight.
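For illustration only, the two-step evaluation discussed above can be sketched as follows. This is a hypothetical minimal sketch by the examiner, not code from the application or the Eichenholz reference; the layer grouping, function names, and distance threshold are all assumptions chosen for the example.

```python
# Hypothetical sketch of a two-step, per-layer evaluation:
# step 1 evaluates each scanning layer individually for object presence;
# step 2 makes a common evaluation based only on the step-1 results.

def evaluate_layer(layer_distances, distance_threshold=2.0):
    """Step 1: an object is treated as 'present' in a layer if any
    measured point in that layer is closer than the threshold (meters)."""
    return any(d < distance_threshold for d in layer_distances)

def common_evaluation(presence_per_layer):
    """Step 2: trigger a safety directed response if the object is present
    in a plurality of layers, or only in the bottommost layer (index 0)."""
    hits = sum(presence_per_layer)
    only_bottommost = presence_per_layer[0] and hits == 1
    return hits >= 2 or only_bottommost

# Example measured distances grouped by scanning layer (meters).
layers = [[1.5, 8.0], [9.0, 10.0], [12.0]]
presence = [evaluate_layer(layer) for layer in layers]
print(common_evaluation(presence))  # object present only in bottommost layer
```

The point of the sketch is that step 2 consumes only the per-layer presence results of step 1, not the raw measured data.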
On page 18, applicant argues against the claim rejection made under 35 U.S.C. 103 by arguing against the single Yeruhami reference, asserting that the Yeruhami reference fails to teach the limitation of scanning layers. One cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986). Furthermore, applicant’s claim that the Yeruhami reference does not teach the limitation of “multiple scanning layers” is incorrect. Evidence can be found in Figs. 2A and 2C, which illustrate an embodiment where the field of view is scanned in individual horizontal scanning lines, which form scanning layers.
On pages 18-19, applicant argues that using scanning planes would “make it impossible” for Eichenholz to generate a point cloud, and would be “destroying the functionality of Eichenholz.” Section 2144.01 of the MPEP states: "[I]n considering the disclosure of a reference, it is proper to take into account not only specific teachings of the reference but also the inferences which one skilled in the art would reasonably be expected to draw therefrom." Persons of ordinary skill in the art of lidar technology share a common understanding that a point cloud is a representation of collected data. In fact, applicant and the examiner appear to share this understanding of what a point cloud fundamentally is: on page 18, applicant states that a point cloud is “stitched” together by received signals, demonstrating that a point cloud is one way to represent received data. However, applicant fails to provide evidence for why the use of “scanning planes” would make it impossible to compile and represent the received data as a point cloud. Therefore, applicant’s arguments are unpersuasive.
Furthermore, on page 18, applicant states that the multiple scanning planes could not be imported into the Eichenholz reference. Regardless of whether this is factually true, the Middleberg reference was never relied upon to teach the limitation of “multiple scanning planes” or “multiple scanning layers.” The Middleberg reference was relied upon to teach the limitation of dividing the field of view into different ranges. Similarly, the Plasberg reference was also not relied upon to teach the limitation of “multiple scanning layers,” since this was already disclosed by the Eichenholz reference.
On page 19, applicant argues that the Gimpel and Hughes references are both silent with regard to multiple scanning layers. One cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-3, 5, 9, 10, 12, and 17-20 are rejected under 35 U.S.C. 102(a)(1) and 102(a)(2) as being anticipated by Eichenholz (US 20200025923 A1).
Regarding Claim 1: Eichenholz discloses an optoelectronic sensor for detection of objects in a monitored zone (Fig. 7, lidar system 100 for detecting objects in the field of regard, referred to as FOR) comprising:
at least one light transmitter for transmitting a plurality of mutually separated light beams (Fig. 1, light source 110 in lidar system 100; Fig. 7, with beams 250A through 250N; [0112] “lidar system 100 may angularly separate the beams 250A, 250B, 250C, . . . 250N”);
at least one light receiver for generating a respective received signal from the light beams remitted in the monitored zone (Fig. 1, receiver 140; [0113] “each of the linear scan patterns 254A-N includes pixels associated with one or more laser pulses and distance measurements”);
a movable deflection unit (Fig. 1, scanner 120) for periodically guiding the transmitted light beams through the monitored zone to respectively scan a scanning layer during movement of the movable deflection unit by the separated light beams (Fig. 7, scanning patterns 252A through 252N, each of which is a layer; [0112] “the lidar system generates output beams 250A, 250B, 250C, . . . 250N etc., each of which follows a linear scan pattern 254A, 254B, 254C, . . . 254N”); and
a control and evaluation unit that is configured to acquire information on the objects in the monitored zone from the respective received signal (Fig. 1, controller 150),
wherein the control and evaluation unit is further configured to determine a presence of a safety relevant object per scanning layer, such that each of the scanning layers is individually evaluated to determine the presence of the safety relevant object therein (Figs. 19 and 20, scan patterns 700 and 710 are measured in rows 702, 704, 712, and 714. The rows, or layers, representative of a ground are scanned more densely, as illustrated by rows 704 and 714. The field of view is grouped into different rows, or in other words: scanning layers, and the decision to scan more densely is applied to the individual layers. [0177-0178]; [0081] each point or pixel in the image is directly mapped to a point within the field of regard where a measurement was taken. If an object position is identified, the scanning layer in which that obstruction is located is also inherently known, since it can be mapped to a pixel with a particular row and column), the control and evaluation unit is further configured to decide whether a safety directed response is triggered by a common evaluation of the presence of the safety relevant object determined per scanning layer ([0134] vehicle controller receives data and identifies targets with locations, distances, speeds, shapes, etc. Based on this analyzed data and identified objects, a common evaluation is made as to how the vehicle should proceed. When an intersection is identified, it can be determined that this is the appropriate place for a turn. Likewise, when a stoplight is identified, it is determined that the vehicle must come to a stop),
wherein the safety directed response is triggered on detection of a presence of the safety relevant object in a plurality of the scanning layers ([0122] “if the lidar system 100 detects a vehicle ahead that is slowing down or stopping, the autonomous-vehicle driving system may send instructions to release the accelerator and apply the brakes”; Referring to Fig. 7, [0112] “the separation between beams 250A and 250B at a certain distance may be 30 cm, and the separation between the same beams 250A and 250B at a longer distance may be 50 cm”; it is understood that a vehicle, such as the vehicle 606 shown in Fig. 15, is much larger than 30 or 50 cm, so the vehicle 606 must have been detected in a plurality of scanning layers), and
wherein the safety directed response is further triggered on detection of the presence of the safety relevant object in only a bottommost scanning layer above a ground ([0134] vehicle controller receives data and identifies targets with locations, distances, speeds, shapes, etc. Based on this analyzed data and identified objects, a common evaluation is made as to how the vehicle should proceed. This means objects that are small or that may be at the periphery of the field of view are still identified). In a case where the object is only detected in the bottommost layer, a person of ordinary skill in the art would conclude that the device disclosed by Eichenholz would still respond in the same way, and if this detection is determined to be a hazard, a safety directed response would still be triggered.
Regarding Claim 2: Eichenholz discloses the optoelectronic sensor in accordance with claim 1. Eichenholz further discloses wherein the optoelectronic sensor is a laser scanner (Fig. 1, lidar system 100 with scanner 120; [0036] “The lidar system 100 may be referred to as a laser ranging system, a laser radar system, a LIDAR system, a lidar sensor, or a laser detection and ranging (LADAR or ladar) system”).
Regarding Claim 3: Eichenholz discloses the optoelectronic sensor in accordance with claim 1. Eichenholz further discloses wherein the control and evaluation unit is further configured to measure a distance by means of a time-of-flight process using the respective received signal ([0046] “the controller 150 may analyze the time of flight or phase modulation for the beam of light 125 transmitted by the light source 110. If the lidar system 100 measures a time of flight of T (e.g., T represents a round-trip time of flight for an emitted pulse of light to travel from the lidar system 100 to the target 130 and back to the lidar system 100), then the distance D from the target 130 to the lidar system 100 may be expressed as D = c × T/2, where c is the speed of light (approximately 3.0 × 10^8 m/s)”).
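As a brief illustration of the cited time-of-flight relationship (the round-trip time below is a hypothetical value chosen for the example, not a value from the reference):

```latex
% Hypothetical example: round-trip time T = 1.0 microsecond
D = \frac{c \times T}{2}
  = \frac{(3.0 \times 10^{8}\,\mathrm{m/s}) \times (1.0 \times 10^{-6}\,\mathrm{s})}{2}
  = 150\,\mathrm{m}
```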
Regarding Claim 5: Eichenholz discloses the optoelectronic sensor in accordance with claim 1. Eichenholz further discloses wherein the control and evaluation unit is further configured to determine the presence of the safety relevant object in the plurality of scanning layers when the presence in the scanning layers is determined in the same or adjacent angular positions (Fig. 15, presence of vehicle 606 is identified and differentiated from the ground portion 602, all within field of regard 600; Fig. 20 shows that the scan lines 714A-C, which are in the ground portion, are adjacent, illustrating a scan pattern of the lidar system. It is understood that the presence of vehicle 606 must have been detected in adjacent angular positions in the plurality of scanning layers because the vehicle 606 is much larger than the resolution of the scan).
Regarding Claim 9: Eichenholz discloses the optoelectronic sensor in accordance with claim 1. Eichenholz further discloses wherein the control and evaluation unit is further configured to trigger the safety directed response ([0122] “if the lidar system 100 detects a vehicle ahead that is slowing down or stopping, the autonomous-vehicle driving system may send instructions to release the accelerator and apply the brakes”) on detection of the presence of the safety relevant object in at least two or more scanning layers ([0112] “the separation between beams 250A and 250B at a certain distance may be 30 cm, and the separation between the same beams 250A and 250B at a longer distance may be 50 cm”; it is understood that a vehicle, such as the vehicle 606 shown in Fig. 15, is much larger than 30 or 50 cm, so the vehicle 606 must have been detected in at least two or more scanning layers) at a distance remote from a first safe range (Figs. 17 and 18, d1 and d1’ which show the closest point on the ground within the field of regard) up to a second safe range (Fig. 15, boundary of distant region 608, beyond which includes objects beyond the maximum range of the lidar system; the maximum range of the lidar system corresponding to the “second safe range”. It is seen that vehicle 606 is at a distance remote from the first safe range because it is farther away than the closest point on the ground within the field of regard, but it is also within a second safe range because it is closer than the other vehicles beyond the distant region which are beyond the maximum range of the lidar system).
Regarding Claim 10: Eichenholz discloses the optoelectronic sensor in accordance with claim 9. Eichenholz further discloses wherein the at least two or more scanning layers are adjacent to one another at a distance remote from a first safe range up to a second safe range (Fig. 15, vehicle 606 must be detected in two or more scanning layers considering [0112] says that the separation may be on the order of tens of centimeters, and the vehicle 606 is larger than the 30 cm or 50 cm that were cited as examples of separation distance between adjacent layers).
Regarding Claim 12: Eichenholz discloses the optoelectronic sensor in accordance with claim 1. Eichenholz further discloses wherein the control and evaluation unit is further configured to detect at least one of a location and orientation of the ground using the bottommost scanning layer or a plurality of lower scanning layers ([0161] “a vehicle controller (e.g., the vehicle controller 372 of FIG. 10) can provide indications of where the ground is relative to the controller of the lidar system”; [0164] “the lidar system can identify the ground portion of the field of regard in terms of a single delimiter, such as the vertical angle at which the “horizon” occurs within the field of regard, i.e., where the ground ends and an area above ground begins in the absence of obstacles”).
Regarding Claim 17: Eichenholz discloses the optoelectronic sensor in accordance with claim 1. Eichenholz further discloses wherein the optoelectronic sensor is configured as a safety sensor in the sense of a standard for safety of machinery or electrosensitive protective equipment ([0121] “The lidar system 100 may be part of a vehicle ADAS that provides adaptive cruise control, automated braking, automated parking, collision avoidance, alerts the driver to hazards or other vehicles, maintains the vehicle in the correct lane, or provides a warning if an object or another vehicle is in a blind spot”; the lidar system is meant to keep the vehicle, which is the machinery, safe from hazards or collisions).
Regarding Claim 18: Eichenholz discloses the optoelectronic sensor in accordance with claim 17. Eichenholz further discloses further comprising a safety output for the output of a safety directed securing signal ([0121] “a lidar system 100 may be part of an ADAS that provides information or feedback to a driver (e.g., to alert the driver to potential problems or hazards) or that automatically takes control of part of a vehicle (e.g., a braking system or a steering system) to avoid collisions or accidents”; the safety signal here is feedback to a driver or a signal to the braking and steering systems to control the vehicle, for example).
Regarding Claim 19: Eichenholz discloses the optoelectronic sensor in accordance with claim 17. Eichenholz further discloses wherein the safety sensor is configured as a safety laser scanner (Fig. 1, lidar system 100 with scanner 120; [0036] “The lidar system 100 may be referred to as a laser ranging system, a laser radar system, a LIDAR system, a lidar sensor, or a laser detection and ranging (LADAR or ladar) system”; it is understood that because lidar system 100, which is the safety sensor, uses lasers for ranging, that lidar system 100 is a safety laser scanner).
Regarding Claim 20: Claim 20 is essentially the method version of claim 1 and is therefore rejected for the same reasons.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Eichenholz (US 20200025923 A1) in view of Yeruhami (US 20200249354 A1). Eichenholz discloses the optoelectronic sensor in accordance with claim 1. Eichenholz further discloses wherein the control and evaluation unit is further configured to trigger the safety directed response on a detection of the presence of the safety relevant object in a number of scanning layers ([0121] “a lidar system 100 may be part of an ADAS that provides information or feedback to a driver (e.g., to alert the driver to potential problems or hazards) or that automatically takes control of part of a vehicle (e.g., a braking system or a steering system) to avoid collisions or accidents”; [0122] “if the lidar system 100 detects a vehicle ahead that is slowing down or stopping, the autonomous-vehicle driving system may send instructions to release the accelerator and apply the brakes”). Eichenholz does not expressly teach: a number of scanning layers that is greater with a near object than with a far object.
However, Yeruhami teaches this limitation in Fig. 19C, where the pedestrian 1901, who is closer to the lidar system, is detected in the bottom two layers 1 and 2, whereas vehicle 1903, which is farther down the road, is detected only in layer 4 with sparse sensor responses to reflected light.
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the control and evaluation system disclosed by Eichenholz to include the binning technique used by Yeruhami, where objects at a far distance, which may return fewer or weaker return signals, can be grouped into lidar FOV pixels. This would be beneficial because in cases where not all sensor pixels output signals indicative of received reflective light, for example for objects at a much greater distance, “binning of the sensor pixel outputs may increase detection confidence, signal to noise ratios, and/or accuracy of a distance determination relative to vehicle 1903” (Yeruhami, [0428]).
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Eichenholz (US 20200025923 A1) in view of Middleberg (US 20170118915 A1). Eichenholz discloses the optoelectronic sensor in accordance with claim 1. Eichenholz further discloses wherein the control and evaluation unit is further configured to trigger the safety directed response on determination of the presence of the safety relevant object in all relevant scanning layers ([0121] “a lidar system 100 may be part of an ADAS that provides information or feedback to a driver (e.g., to alert the driver to potential problems or hazards) or that automatically takes control of part of a vehicle (e.g., a braking system or a steering system) to avoid collisions or accidents”; [0122] “if the lidar system 100 detects a vehicle ahead that is slowing down or stopping, the autonomous-vehicle driving system may send instructions to release the accelerator and apply the brakes”). Eichenholz does not expressly teach for objects at a distance up to a first safe range.
However, Middleberg teaches this limitation with Figs. 2 and 3, where the first safe range is indicated by close distance range 37. Furthermore, paragraph [0054] recites: “a method is provided in accordance with the agricultural work machine 1, in which the control and regulating device 33 prevents a collision with obstacles 65 by means of results signals 50 of the scanning planes 43, wherein the agricultural work machine 1 is automatically braked, redirected …” where scanning plane 43 is in the close range 37. Furthermore, in paragraph [0056] Middleberg also teaches that for result signals detected from the long range 39 by scanning plane 47, the control and regulating device 33 allows for avoiding the obstacle by steering the system in a timely manner.
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the controller disclosed by Eichenholz, such that the field of view is divided into ranges, and obstacles can be detected in each of the ranges, and based on the type of obstacle and where it is located, the appropriate response is triggered, as taught by Middleberg. This is beneficial because it enables the system to “steer the agricultural work machine 1 around the large obstacle 66 in a timely manner” when the obstacle is far from the device (Middleberg, [0056]). For small obstacles 67 that are detected in the close range, the control and regulating device 33 sends steering and control signals which lead to either a raising of the cutting unit or the immediate stopping of the work machine depending on the height of the obstacle (Middleberg [0057]). This enables the controller to determine an appropriate response based on the situation.
Claims 11 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Eichenholz (US 20200025923 A1) in view of Gimpel (EP 3517999 A1).
Regarding Claim 11: Eichenholz discloses the optoelectronic sensor in accordance with claim 1. Eichenholz does not explicitly disclose wherein the control and evaluation unit is configured for a protected field evaluation in which an object is only safety relevant when its position is disposed in a configured protected field.
However, Gimpel teaches this limitation in [0040]: “The sensor 10 can also be designed as a safety sensor for use in safety technology for monitoring a danger source, such as is represented, for example, by a hazardous machine. In this case, a protected area is monitored which must not be entered by the operating personnel during operation of the machine. If the sensor 10 detects an impermissible protected field intervention, for example a leg of an operator, then it triggers an emergency stop of the machine.”
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the sensor disclosed by Eichenholz, such that it can be used for monitoring an area around a hazardous machine as taught by Gimpel. Eichenholz discloses a sensor that is used in a vehicle for improving vehicle safety, and Gimpel teaches a sensor used in hazardous machinery for improving safety. “Known work in one field of endeavor may prompt variations of it for use in either the same field or a different one based on design incentives or other market forces if the variations are predictable to one of ordinary skill in the art” (See MPEP 2141.III KSR Rationale F).
Regarding Claim 21: Eichenholz discloses an optoelectronic sensor for detection of objects in a monitored zone (Fig. 7, lidar system 100 for detecting objects in the field of regard, referred to as FOR) comprising:
at least one light transmitter for transmitting a plurality of mutually separated light beams (Fig. 1, light source 110 in lidar system 100; Fig. 7, with beams 250A through 250N; [0112] “lidar system 100 may angularly separate the beams 250A, 250B, 250C, . . . 250N”);
at least one light receiver for generating a respective received signal from the light beams remitted in the monitored zone (Fig. 1, receiver 140; [0113] “each of the linear scan patterns 254A-N includes pixels associated with one or more laser pulses and distance measurements”);
a movable deflection unit (Fig. 1, scanner 120) for periodically guiding the transmitted light beams through the monitored zone to respectively scan a scanning layer during movement of the movable deflection unit by the separated light beams (Fig. 7, scanning patterns 252A through 252N, each of which is a layer; [0112] “the lidar system generates output beams 250A, 250B, 250C, . . . 250N etc., each of which follows a linear scan pattern 254A, 254B, 254C, . . . 254N”); and
a control and evaluation unit that is configured to acquire information on the objects in the monitored zone from the respective received signal (Fig. 1, controller 150),
wherein the control and evaluation unit is further configured to determine a presence of a safety relevant object per scanning layer, such that each of the scanning layers is individually evaluated to determine the presence of the safety relevant object therein (Figs. 19 and 20, scan patterns 700 and 710 are measured in rows 702, 704, 712, and 714. The rows, or layers, representative of a ground are scanned more densely, as illustrated by rows 704 and 714. The field of view is grouped into different rows, or in other words: scanning layers, and the decision to scan more densely is applied to the individual layers. [0177-0178]; [0081] each point or pixel in the image is directly mapped to a point within the field of regard where a measurement was taken. If an object position is identified, the scanning layer in which that obstruction is located is also inherently known, since it can be mapped to a pixel with a particular row and column), the control and evaluation unit is further configured to decide whether a safety directed response is triggered by a common evaluation of the presence of the safety relevant object determined per scanning layer ([0134] vehicle controller receives data and identifies targets with locations, distances, speeds, shapes, etc. Based on this analyzed data and identified objects, a common evaluation is made as to how the vehicle should proceed. When an intersection is identified, it can be determined that this is the appropriate place for a turn. Likewise, when a stoplight is identified, it is determined that the vehicle must come to a stop),
wherein the safety directed response is triggered on detection of a presence of the safety relevant object in a plurality of the scanning layers ([0122] “if the lidar system 100 detects a vehicle ahead that is slowing down or stopping, the autonomous-vehicle driving system may send instructions to release the accelerator and apply the brakes”; Referring to Fig. 7, [0112] “the separation between beams 250A and 250B at a certain distance may be 30 cm, and the separation between the same beams 250A and 250B at a longer distance may be 50 cm”; it is understood that a vehicle, such as the vehicle 606 shown in Fig. 15, is much larger than 30 or 50 cm, so the vehicle 606 must have been detected in a plurality of scanning layers), and
wherein the safety directed response is further triggered on detection of the presence of the safety relevant object in only a bottommost scanning layer above a ground ([0134] vehicle controller receives data and identifies targets with locations, distances, speeds, shapes, etc. Based on this analyzed data and identified objects, a common evaluation is made as to how the vehicle should proceed. This means objects that are small or that may be at the periphery of the field of view are still identified). In a case where the object is only detected in the bottommost layer, a person of ordinary skill in the art would conclude that the device disclosed by Eichenholz would still respond in the same way, and if this detection is determined to be a hazard, a safety directed response would still be triggered.
Eichenholz does not expressly disclose the presence of the safety relevant object being determined by detection of the safety relevant object within a protected field defined by each of the scanning layers.
However, Gimpel teaches this limitation in [0040]: “The sensor 10 can also be designed as a safety sensor for use in safety technology for monitoring a danger source, such as is represented, for example, by a hazardous machine. In this case, a protected area is monitored which must not be entered by the operating personnel during operation of the machine. If the sensor 10 detects an impermissible protected field intervention, for example a leg of an operator, then it triggers an emergency stop of the machine.” Since the sensor is configured to monitor a safety zone, the sensor’s field of view would be the protected field.
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the sensor disclosed by Eichenholz, such that it can be used for monitoring an area around a hazardous machine as taught by Gimpel. Eichenholz discloses a sensor that is used in a vehicle for improving vehicle safety, and Gimpel teaches a sensor used in hazardous machinery for improving safety. “Known work in one field of endeavor may prompt variations of it for use in either the same field or a different one based on design incentives or other market forces if the variations are predictable to one of ordinary skill in the art” (See MPEP 2141.III KSR Rationale F). Since this modification would use the device disclosed by Eichenholz to monitor a safety zone, as taught by Gimpel, the entire field of view is the protected field. This means the protected field covering the entire zone scanned by the sensor disclosed by Eichenholz is defined by the area scanned by each of the scanning layers.
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Eichenholz (US 20200025923 A1) in view of Hughes (US 20190154816 A1). Eichenholz discloses the optoelectronic sensor in accordance with claim 1. Eichenholz does not explicitly disclose wherein the control and evaluation unit is further configured to only include objects up to a minimum height above the ground for the determination of the presence of the safety relevant object.
However, Hughes teaches this limitation in Fig. 38 with method for modifying field of regard 700. Figs. 39A and 39B show examples of the chosen (modified) field of regard, indicated by solid lines, within the available field of regard, indicated by dashed lines. As described by steps 704, 706, and 708, the field of regard is adjusted up or down to account for a road segment with a grade. In Fig. 39A, the chosen field of regard is lower, and objects that would be at a height within the available field of regard but above the chosen field of regard are not included in the detection area.
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the sensor disclosed by Eichenholz, such that the field of regard can be chosen in response to changes in grade of road segments, as taught by Hughes. Although objects falling within the upper bound of the available field of regard could otherwise be detected, moving the operational field of view downward in the case of an upcoming downward slope allows the lidar sensor to better “see” along the surface of the road (Hughes, [0211]).
Claims 14-16 are rejected under 35 U.S.C. 103 as being unpatentable over Eichenholz (US 20200025923 A1) in view of Plasberg (EP 1927867 A1).
Regarding Claim 14: Eichenholz discloses the optoelectronic sensor in accordance with claim 1. Eichenholz does not explicitly disclose wherein the control and evaluation unit is further configured to determine a height above the ground depending on the distance and the scanning layer.
However, Plasberg teaches this limitation in Fig. 14 with object 70 and in paragraph [0041]: “The height of the object can be determined based on the number and spacing of the planes traversed by the object, and the other dimensions can be determined based on the distances within a plane.”
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the controller disclosed by Eichenholz such that it is able to determine a height of an object based on the number of scanning layers and the distance to the object, as taught by Plasberg. Applying this known technique for determining the height of objects in a sensor that utilizes scanning layers for sensing the environment, as taught by Plasberg, to the lidar system that scans the environment with scanning layers disclosed by Eichenholz, would be the application of a known technique to a known device ready for improvement to yield predictable results (See MPEP 2141.III KSR Rationale D).
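For illustration of the technique taught by Plasberg in paragraph [0041] (the numeric values below are hypothetical and are not drawn from either reference): if an object traverses n scanning planes with a uniform spacing s, its height h follows directly from

```latex
h \approx n \cdot s
% Example (hypothetical values): an object traversing n = 5 planes
% spaced s = 30\,\mathrm{mm} apart has a height of approximately
% 5 \times 30\,\mathrm{mm} = 150\,\mathrm{mm}.
```

The distance measurements within each plane then supply the remaining dimensions of the object, consistent with Plasberg's teaching.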
Regarding Claim 15: Eichenholz discloses the optoelectronic sensor in accordance with claim 1. Eichenholz further discloses wherein the scanning layers have an angular resolution (Fig. 19, scan lines 704A-C on the ground are much closer together vertically than scan lines 702A-C above the ground; Fig. 20, scan lines 714A-C are also much closer together horizontally than scan lines 712A-C above the ground). Eichenholz does not expressly disclose: so that adjacent scanning layers have, at most, a distance corresponding to an object of a minimum size to be detected with respect to one another.
However, Plasberg teaches this in Fig. 1, with scanning layers 20 having spacings 22, and with paragraph [0045]: “The distance 22 between the planes 20 is selected according to the size of the object to be detected. The distance 22 can be approximately 7-10 mm for the detection of fingers, 10-20 mm for the detection of extremities, or 30-80 mm for the detection of the lower extremities of a human body.”
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the sensor disclosed by Eichenholz, such that the scanning layers are separated according to a minimum object size to be detected, as taught by Plasberg. This is beneficial because different heights may present different types of objects that need to be detected, and the distances need not be regular. This means that the spacing between planes can be chosen accordingly; for example, at a working height of a machine where hands would be used, a finer spacing between the layers can be chosen, whereas the spacing near the ground, where only feet are expected, can be larger (Plasberg, [0045]).
Regarding Claim 16: Eichenholz, in view of Plasberg, teaches the optoelectronic sensor in accordance with claim 15.
In this combination, Plasberg further teaches wherein the angular resolution is defined by a predetermined threshold in paragraph [0045], which states that the spacing between layers can become finer based on proximity to the source of danger. A person of ordinary skill in the art would know that the desired angular resolution can be determined from the ratio of the minimum object size to the range, using simple trigonometric relationships. Plasberg therefore teaches that the desired angular resolution can be defined by both the plane separation and the distance to the object.
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to further modify the sensor disclosed by Eichenholz, such that both the minimum size of the object to be detected and the distance of that object are taken into account when determining spacing between layers, as taught by Plasberg. A person of ordinary skill in the art would readily be able to make the connection between (1) the angular separation of layers, (2) the height, and (3) the distance of the object to be sensed through basic trigonometry. Therefore, having an angular resolution threshold based on desired spacing between the layers would be motivated by the desire to detect an object having a minimum size (Plasberg, [0045]).
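As an illustrative calculation of the trigonometric relationship referenced above (the range value is hypothetical and appears in neither reference): for a minimum object size d to be detected at a range R, the small-angle approximation gives the maximum permissible angular separation between adjacent scanning layers as

```latex
\theta_{\max} \approx \frac{d}{R}
% Example: d = 80\,\mathrm{mm} (lower extremities, per Plasberg [0045])
% at a hypothetical range R = 4\,\mathrm{m} gives
% \theta_{\max} \approx 0.08 / 4 = 0.02\,\mathrm{rad} \approx 1.15^{\circ}.
```

This shows how a predetermined angular resolution threshold follows directly from the minimum object size and the monitoring distance.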
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ISABELLE LIN BOEGHOLM whose telephone number is (571)270-0570. The examiner can normally be reached Monday-Thursday 7:30am-5pm, Fridays 8am-12pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Yuqing Xiao can be reached at (571) 270-3603. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ISABELLE LIN BOEGHOLM/Examiner, Art Unit 3645
/YUQING XIAO/Supervisory Patent Examiner, Art Unit 3645