DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 12-15 and 24-29 are rejected under 35 U.S.C. 102(a)(1)/102(a)(2) as being anticipated by Meijburg et al (US 20210261152 A1).
Referring to claims 12-13:
Meijburg et al disclose an apparatus for determining an estimated value of a distance of a light signal transmitter from a motor vehicle (par. 121), wherein the apparatus is configured to:
recognize a set of objects in surroundings of the light signal transmitter arranged in a direction of travel in front of the motor vehicle based on image data of a camera of the motor vehicle (par. 94: TLD system uses one or more cameras to obtain information about traffic lights, street signs, and other physical objects that provide visual operation information);
assign an object from the set of objects to the signal transmitter (par. 116-117: TLD detects the “traffic light” hardware, i.e., what is transmitting the traffic signal);
determine an individual estimated value of the distance of the assigned object from the motor vehicle (par. 153: circuit 1320 (of the TLD system 1300) determines that the AV 100 is located at a particular distance D1 from the stop line 1452 of the traffic light 1404);
determine the estimated value of the distance of the light signal transmitter based on the individual estimated value of the distance of the assigned object from the motor vehicle (par. 153: circuit 1320 (of the TLD system 1300) determines that the AV 100 is located at a particular distance D1 from the stop line 1452 of the traffic light 1404);
determine an individual estimated value of the distance of the light signal transmitter from the motor vehicle based on the image data (par. 121: circuit 1324a performs image processing and recognition functions on the digital video stream 1308a to generate data 1326a identifying the traffic light 1404; and par. 153: circuit 1320 (of the TLD system 1300) determines that the AV 100 is located at a particular distance D1 from the stop line 1452 of the traffic light 1404); and
determine the estimated value of the distance of the light signal transmitter based on the individual estimated value of the distance of the light signal transmitter from the motor vehicle (par. 153: circuit 1320 (of the TLD system 1300) determines that the AV 100 is located at a particular distance D1 from the stop line 1452 of the traffic light 1404).
Referring to claims 14-15:
Meijburg et al disclose the above apparatus is configured to:
assign the object from the set of objects to the signal transmitter based on a machine-trained assignment unit; and the machine-trained assignment unit includes a trained artificial neural network (par. 128: circuit 1320 includes a machine learning model executed by processors within the circuit 1320).
Referring to claims 24-27:
Meijburg et al disclose the above apparatus is configured to longitudinally guide the motor vehicle in an automated manner as a function of the determined estimated value of the distance of the light signal transmitter, determine a signaling state of the light signal transmitter based on the image data; and automatically decelerate the motor vehicle as a function of the signaling state in order to bring the motor vehicle to a standstill before the light signal transmitter, or to automatically guide it past the light signal transmitter (abstract/summary: control circuit of the vehicle operates the vehicle in accordance with a traffic signal of a traffic light, including stopping the vehicle at the stop line within a particular amount of time in accordance with a comfort profile, and in accordance with the determined trajectory; par. 69: AV 100 algorithmically generates control actions based on both real-time sensor data and prior information, allowing the AV system 120 to execute its autonomous driving capabilities; and see also par. 90, 109-115).
Referring to claim 28:
Meijburg et al disclose the set of objects comprises ground markings, including a stopping line, in the surroundings of the signal transmitter (Fig. 1, lane markings 1464 and stop line 1452), a traffic sign in the surroundings of the signal transmitter (par. 158: circuit 1320 can detect a traffic sign at the intersection 1416), an intersecting lane at a node point at which the signal transmitter is arranged (Fig. 14), another vehicle standing at the signal transmitter (Fig. 14), and/or a mast, on which the signal transmitter is fastened (it is implied that traffic lights are fastened to a mast or pole in one manner or another).
Referring to claim 29:
This claim is the method for performing the corresponding functions of the apparatus as set forth in claim 12 and is therefore rejected for the same reasons as presented above.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Meijburg et al as applied to claims 12-13 above, and further in view of Morphet et al (US 20140348238 A1).
Referring to claims 16-17:
Meijburg et al do not disclose the above apparatus is configured to determine the individual estimated value of the distance of the assigned object from the motor vehicle according to a structure-from-motion method based on the image data. However, this is a well-known feature in the prior art. For example, Morphet et al disclose that it is known that, where a video sequence has been produced using a conventional 2D video camera and camera position and depth information is not normally available, it is possible to approximate camera location, orientation, and distance to objects in a scene by using "Structure from Motion" (SFM) techniques from the field of Computer Vision (par. 13).
It would have been prima facie obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified Meijburg et al in view of Morphet et al to have included SFM techniques to estimate the distance of the assigned object from the motor vehicle in order to achieve the advantage of quick, accurate distance estimation with a low-cost implementation using low-cost cameras.
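For illustration only (not drawn from any reference of record), the SFM principle Morphet et al describe can be sketched as follows: given two views of the same scene point from known camera poses, the point's 3D position, and hence its distance, is recovered by linear triangulation. All numbers and the camera setup below are hypothetical.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2: 3x4 camera projection matrices.
    x1, x2: (u, v) pixel coordinates of the same point in each view.
    Returns the 3D point in the first camera's frame.
    """
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                      # null vector of A = homogeneous 3D point
    return X[:3] / X[3]

# Synthetic example: the camera translates 1 m forward between frames.
f = 1000.0                          # focal length in pixels (assumed)
K = np.array([[f, 0, 640], [0, f, 360], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[0], [0], [-1.0]])])  # moved +1 m along Z
X_true = np.array([2.0, 1.0, 30.0])                            # object 30 m ahead
x1 = P1 @ np.append(X_true, 1); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1); x2 = x2[:2] / x2[2]
X_est = triangulate(P1, P2, x1, x2)
print(round(X_est[2], 2))           # → 30.0 (distance along the optical axis)
```

In this noise-free sketch the triangulation recovers the distance exactly; a practical SFM pipeline must additionally estimate the camera motion (the poses behind P1 and P2) from the image data itself.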
Claims 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Meijburg et al as applied to claims 12-13 above, and further in view of Deegan et al (US 20210287387 A1).
Referring to claims 18-19:
Meijburg et al do not disclose the above apparatus is configured to determine sensor data with respect to the assigned object based on one or more surroundings sensors of the motor vehicle including a lidar sensor and/or a radar sensor, and determine the individual estimated value of the distance of the assigned object from the motor vehicle based on the sensor data of the one or more surroundings sensors. However, this is a well-known feature in the prior art. For example, Deegan et al disclose distance estimation of image objects such as vehicles, people, signs using light detection and ranging (LiDAR) data (par. 16, 19).
It would have been prima facie obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified Meijburg et al in view of Deegan et al to have included using LiDAR sensor data to estimate the distance of the assigned object from the motor vehicle in order to achieve the advantages of improved distance estimation of image objects and of gathering and using data available from various sources, thereby eliminating the restrictions of operating one particular hardware arrangement.
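For illustration only (not drawn from Deegan et al or any other reference of record), one common way LiDAR data yields an object's distance is to take a robust statistic, such as the median range, of the lidar returns associated with the object's image detection. The association step and all numbers below are hypothetical.

```python
import numpy as np

def object_distance_from_lidar(points_xyz, in_box_mask):
    """Estimate object distance as the median range of lidar returns
    associated with the object's image bounding box.

    points_xyz: (N, 3) lidar points in the vehicle frame.
    in_box_mask: (N,) boolean mask of points projecting into the detected
                 object's bounding box (camera-lidar association is
                 assumed to be done elsewhere via calibration).
    """
    obj_pts = points_xyz[in_box_mask]
    if obj_pts.size == 0:
        return None                        # no lidar support for this object
    ranges = np.linalg.norm(obj_pts, axis=1)
    return float(np.median(ranges))        # median is robust to stray returns

# Synthetic scan: 50 returns near 25 m ahead, plus two spurious returns.
rng = np.random.default_rng(0)
pts = np.column_stack([rng.normal(25.0, 0.05, 50),   # forward (x)
                       rng.normal(0.0, 0.2, 50),     # lateral (y)
                       rng.normal(1.5, 0.1, 50)])    # height (z)
pts = np.vstack([pts, [[80.0, 0.0, 1.5], [2.0, 0.0, 0.2]]])  # outliers
mask = np.ones(len(pts), dtype=bool)
d = object_distance_from_lidar(pts, mask)
print(d)   # ≈ 25 m; the two outliers do not shift the median
```

The median makes the estimate insensitive to returns from the background or the ground that leak into the bounding box, which is one reason lidar ranging complements a camera-only pipeline.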
Claims 20-21 are rejected under 35 U.S.C. 103 as being unpatentable over Meijburg et al as applied to claims 12-13 above, and further in view of Yamamoto et al (US 20050111697 A1).
Referring to claims 20-21:
Meijburg et al do not disclose the image data comprise a sequence of chronologically successive images, the apparatus being configured to determine an optical flow on the basis of the sequence of images, and determine the individual estimated value of the distance of the assigned object from the motor vehicle based on the optical flow. However, determining optical flow from a sequence of images and using that to estimate the distance of an object from a motor vehicle is well-known in the prior art. For example, Yamamoto et al computes relative positions of an object and a mobile unit based on optical flows obtained from reference plane image data and object image data.
It would have been prima facie obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified Meijburg et al in view of Yamamoto et al to estimate the distance of the assigned object from the motor vehicle based on the optical flow in order to provide a low-cost, monocular, yet accurate real-time estimate of the distance between an object and the vehicle.
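For illustration only (a simplified pinhole-camera model, not Yamamoto et al's specific method): for a camera translating forward at speed v through a static scene, a feature at image offset u from the focus of expansion obeys u = fX/Z, so its radial flow is du/dt = u·v/Z, giving depth Z = u·v/(du/dt). All numbers below are hypothetical.

```python
def distance_from_flow(u_px, flow_px_per_s, speed_mps):
    """Depth along the optical axis from the radial optical flow of a
    static feature, assuming a pinhole camera in pure forward translation.

    u_px:          feature offset from the focus of expansion (pixels)
    flow_px_per_s: measured radial optical flow (pixels/second)
    speed_mps:     vehicle forward speed (m/s)

    Derivation: u = f*X/Z and dZ/dt = -v give du/dt = u*v/Z,
    hence Z = u*v/(du/dt).
    """
    if flow_px_per_s == 0:
        raise ValueError("feature at the focus of expansion carries no depth cue")
    return u_px * speed_mps / flow_px_per_s

# Example: a feature 120 px from the focus of expansion flows outward at
# 48 px/s while the vehicle drives at 10 m/s.
print(distance_from_flow(120.0, 48.0, 10.0))  # → 25.0 m
```

This shows why a single camera plus the vehicle's own speed signal suffices for ranging static roadside objects; features near the focus of expansion (u ≈ 0) flow too slowly to carry a usable depth cue, which is a known limitation of the approach.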
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 12-13, 22-23, and 29 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claims 12-13 and 29 repeat limitations regarding determining the distance of the assigned object, i.e., the light signal transmitter, from the motor vehicle, which do not further limit the claims.
Claims 22-23, which depend from claim 12, recite determining the distance of the assigned object from the light signal transmitter, but the assigned object in claims 12 and 13 is the light signal transmitter. No interpretation of these claims has been made for applying prior art based on this indefinite claim language.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 22 April 2024 was filed in compliance with the provisions of 37 CFR 1.97 and 1.98. Accordingly, the IDS has been considered by the examiner.
The relevance of the cited document(s), in addition to any applied above, can be found in the International Search Reports and/or Written Opinions from the ISA dated 02 March 2023 for PCT/EP2022/080636 and in German Application No. 10 2021 128 785 (all of record).
Of note, besides Meijburg et al applied above, Mielenz et al (DE 102015216979 A1) disclose a method for operating a driver assistance system of a vehicle (100). In the method, first of all a detection signal (108) is read, which represents a traffic scenario (102, 104) recognized by means of a field read-in device (106) of the vehicle (100). In a further step, at least one detection algorithm of the driver assistance system is activated using the detection signal (108). Finally, using the detection algorithm, at least one object (112, 113, 114, 116, 117) of an object type associated with the traffic scenario (102, 104) is detected to locate the vehicle (100) in the traffic scenario (102, 104).
Cited Art
The prior art and other references made of record and not relied upon are considered pertinent to applicant's disclosure.
Sasaki et al (US 6445809 B1) disclose a monitoring system using an optical flow. An optical-flow detecting device detects the optical flow in the steps of: reverse-projection converting the earlier of two early and later images picked up by an image pick-up means, on the basis of a predetermined optical arrangement of the image pick-up means, onto an x-z plane parallel to a road surface in a real space to acquire a road surface image; computing a moving distance of one's own vehicle between the two timings on the basis of the time interval between the two timings and speed information of one's own vehicle; parallel-shifting the road surface image by the moving distance thus computed; projection-converting the parallel-shifted road surface image to acquire an estimated image of the later image; acquiring a differential image between the later image and its estimated image to extract a feature point; and searching a corresponding point of the extracted feature point. In this configuration, it is possible to prevent an object such as paint on a road surface, which should not be detected, from being erroneously detected as another surrounding vehicle.
Won et al (US 9798951 B2) disclose an apparatus for measuring a distance change, the apparatus including an information acquisition unit, an object determination unit, a feature point determination unit, an optical flow calculator, a matching point determination unit, an object length change calculator that calculates a length change ratio between an object of a first frame image and an object of a second frame image by using a feature point and a matching point, and a distance change calculator that calculates a change from a distance between a camera and the object from when the camera acquires the first frame image and when the camera acquires the second frame image using the calculated length change ratio.
Von Radziewsky (US 12505676 B2) discloses an automated driving function in a motor vehicle comprising: a processor circuit of the motor vehicle recognizes individual objects in respective individual images of an environment of the motor vehicle from sensor data of at least one sensor of the motor vehicle by means of at least one object classifier. At least one relational classifier, using the object data for at least some of the individual objects, additionally recognizes a respective pairwise object relation with the aid of predetermined relation features of the individual objects in the respective individual image determined from the sensor data, which relation is described by relational data, and an aggregation module is used to aggregate the relational data throughout multiple consecutive individual images to produce aggregation data, which describe aggregated object relations.
Izzat et al (US 20170285161 A1) disclose an object-detection system for detecting an object proximate to a vehicle. The system (900) has a radar-sensor (904) for mounting on a vehicle (924) and detecting a radar-signal (926) reflected by an object (902) in a radar-field-of-view (906). A camera (908) captures an image of a camera-field-of-view (910) to overlap the radar-field-of-view. A controller (912) is in communication with the radar-sensor and the camera, where the controller determines a range-map for the image based on the range and the direction, defines a detection-zone in the image based on the range-map and processes the detection-zone of the image to determine an identity of the object. The system uses a LiDAR to provide more accurate and denser data resulting in better performance than Radar. The system uses the Radar or LiDAR to provide accurate range and range-rate and enhances object detection system.
Sekiguchi et al (US 20200233087 A1) disclose a method that involves extracting a signal component from a beat signal obtained by synthesizing a transmission wave irradiated onto the target object from a vehicle (200) and a reflection wave reflected and received from the target object. A matching evaluation value of image data of the target object captured by an imaging device is generated. The signal component and the matching evaluation value are fused before generating a distance image from the matching evaluation value. Distance information for each pixel of the image data of the target object is set based on the signal component and the matching evaluation value fused together to generate a distance image. The stereo camera having higher spatial resolution and lower distance resolution, and the system having lower spatial resolution and higher distance resolution, such as the light detection and ranging (LIDAR), can be fused effectively to provide the range finding method that can effectively combine or integrate benefits of different types of the range finding or detection devices.
Nagori et al (US 20210088331 A1) describe that the goal of SFM is to recover the three-dimensional (3D) environment in the field of view (FOV) of the camera. More specifically, in automotive applications, one goal of SFM is to determine the distance of objects in the FOV from a vehicle (par. 17-18).
Zhang et al (US 20230085024 A1) disclose that the distance of each object (such as the distance from the vehicle) is calculated using, for example, an SFM (structure from motion) method (par. 21).
Miyake (US 20230148097 A1) discloses a camera that calculates a relative distance and direction from the vehicle to a planimetric feature, such as a landmark or a lane marking, from an image including Structure from Motion (SFM) information (par. 47).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Scott Rogers whose telephone number is 571-272-7467. The examiner can normally be reached 8 am to 7 pm flex.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Abderrahim Merouan can be reached on 571-270-5254. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Scott A Rogers/
Primary Examiner, Art Unit 2683
05 March 2026