DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
This Office Action is in response to the filing of U.S. Patent Application No. 18/711,941, filed on May 21, 2024. Claims 1-19 are presently pending and are presented for examination.
Priority
Applicant’s claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) for PCT Patent Application No. PCT/EP2022/081186, filed November 9, 2022, is acknowledged and accepted.
Acknowledgment is made of applicant's claim for foreign priority based on German Patent Application No. DE 10 2021 130 882.8, filed November 25, 2021.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on August 5, 2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Specification Objections
Applicant is reminded of the proper content of an abstract of the disclosure. A patent abstract is a concise statement of the technical disclosure of the patent and should include that which is new in the art to which the invention pertains. The abstract should not refer to purported merits or speculative applications of the invention and should not compare the invention with the prior art. If the patent is of a basic nature, the entire technical disclosure may be new in the art, and the abstract should be directed to the entire disclosure. If the patent is in the nature of an improvement in an old apparatus, process, product, or composition, the abstract should include the technical disclosure of the improvement. The abstract should also mention by way of example any preferred modifications or alternatives. Where applicable, the abstract should include the following: (1) if a machine or apparatus, its organization and operation; (2) if an article, its method of making; (3) if a chemical compound, its identity and use; (4) if a mixture, its ingredients; (5) if a process, the steps. Extensive mechanical and design details of an apparatus should not be included in the abstract. The abstract should be in narrative form and generally limited to a single paragraph within the range of 50 to 150 words in length. See MPEP § 608.01(b) for guidelines for the preparation of patent abstracts.
The abstract of the disclosure is objected to because it exceeds 150 words. Correction is required.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 2, 5, 7-11, 18 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Publication No. 2016/0366336, to Kuehnle et al. (hereinafter Kuehnle), in view of U.S. Patent No. 11,341,614, to Chen et al. (hereinafter Chen).
As per claim 1, Kuehnle discloses a driver assistance system for a utility vehicle with a trailer (e.g. see Fig. 2, and para 0019, wherein a system 10 is provided including a tractor 12 and trailer 22), which can be used to observe and/or monitor a space located behind a driver's cab of the utility vehicle (e.g. see Fig. 2, and para 0022, wherein the trailer includes cameras 26, 28, 30 and 32), the system comprising: -at least one optical or acoustic sensor arranged behind the driver's cab of the utility vehicle, -wherein the at least one optical or acoustic sensor is configured to acquire images and image sequences within a field of view of the at least one optical or acoustic sensor (e.g. see Fig. 2, and paras 0019 and 0022, wherein the trailer includes cameras 26, 28, 30 and 32, which are disposed behind the tractor, for capturing video frames), -an image processor which is electrically connected to the at least one optical or acoustic sensor (e.g. see Fig. 1, and para 0019, wherein the trailer includes a central ECU 24, having a processor 38b, configured for receiving image data from the cameras, via a wired or wireless connection), -wherein image processing software for image data analysis and image compression is stored in the image processor (e.g. see Fig. 1, and para 0021, wherein the memory stores computer-executable instructions that stitch together the images received from the cameras; the Office notes that the stitched images would be stored, at least temporarily, before being transmitted)…; -a first wired or wireless data connection (e.g. see Fig. 1, and para 0019, wherein the trailer includes a transceiver 34 to wirelessly communicate with a central ECU 14 of the tractor); -an electronic controller which has a data input side and a data output side and is arranged separately from the image processor (e.g. see Fig. 1, and paras 0019 and 0023, wherein the central ECU 14 of the tractor (i.e. 
electronic controller arranged separately from the trailer central ECU 24) includes transceiver 36 (i.e. data input side) for receiving image data from the trailer ECU 24 and an output (represented by a line and arrow) (i.e. data output side) for transmitting image data from at least cameras 18 and 20 to display 16), -wherein for transmitting the object information generated by the image processor, the electronic controller is connected or configured to be connected to the image processor on the data input side via the first data connection (e.g. see Fig. 1, and paras 0019 and 0023, wherein the central ECU 14 of the tractor (i.e. electronic controller arranged separately from the trailer central ECU 24) includes transceiver 36 (i.e. data input side) for receiving image data from the trailer ECU 24); -a second wireless data connection (e.g. see Fig. 3, and paras 0026-0029, wherein in another embodiment, the system includes a portable computing device 150 that is in wireless communication (i.e. second wireless data connection) with the tractor central ECU 14 and trailer central ECU 24); -an electronic terminal with an electronic graphical user interface, which is positioned outside the trailer (e.g. see Fig. 3, and para 0029, wherein the portable computing device displays a composite trailer image based upon the stitched trailer image data), -wherein the terminal is wirelessly connected or configured to be connected to the data output side of the electronic controller via the second data connection (e.g. see Fig. 3, and paras 0026-0029, wherein in another embodiment, the system includes a portable computing device 150 that is in wireless communication (i.e. 
second wireless data connection) with the tractor central ECU 14 and trailer central ECU 24), and -wherein application software is configured to be installed on the terminal, by which the object information provided by the image processor and transferred to the terminal via the electronic controller is configured to be displayed on the user interface…(e.g. see para 0029, wherein the portable computing device displays the stitched images from the central ECUs 14 and 24).
Kuehnle fails to disclose all of the features of and wherein the image processor is configured to detect objects by the acquired images or image sequences and analyze the acquired images or image sequences in terms of their size, position and movement in relation to a vehicle-fixed coordinate system and generate compressed object information from the acquired images or image sequences …and the user interface of the terminal displays images that are a graphically reduced, spatial or planar geometric representation. However, Chen teaches an image processor that determines location coordinates and movement of an object based upon the object's size and displays the image in a reduced form in a spatial or planar geometric representation, such as bounding boxes 452, 454 and 456 (e.g. see col. 18, line 57, to col. 19, line 5, and col. 31, lines 7-18). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Kuehnle to include ascertaining the size, position and movement of an object and representing the object in a simplistic form to allow for quick acquisition and improved knowledge of the object.
As per claim 2, Kuehnle, as modified by Chen, teaches the features of claim 1, and Kuehnle further discloses wherein: the driver assistance system is configured as a reversing assistance system and, when maneuvering the trailer, is configured to detect obstacles in a rear and/or lateral space of the trailer, and the at least one optical or acoustic sensor is arranged at a rear of the trailer, which sensorially detects the rear and/or a lateral space of the trailer (e.g. see Fig. 2, and para 0022, wherein the system includes rear cameras 30, 32 for ascertaining a surrounding image of the vehicle).
As per claim 5, Kuehnle, as modified by Chen, teaches the features of claim 1, and Kuehnle further discloses wherein the at least one optical or acoustic sensor is selected from the group consisting of a single image camera, video camera, time of flight (TOF) camera, stereo camera, radar sensor, lidar sensor, and ultrasonic sensor (e.g. see Fig. 2, and para 0022, wherein the trailer includes cameras 26, 28, 30 and 32).
As per claim 7, Kuehnle, as modified by Chen, teaches a method for controlling the driver assistance system for a trailer of a utility vehicle as claimed in claim 1 (e.g. see claim 1), wherein: -images and/or image sequences of the space observed or monitored by the at least one optical or acoustic sensor are acquired by the at least one optical or acoustic sensor when the driver assistance system is activated (e.g. the Office notes that for the driver assistance to work, the cameras must be operating), by the images and/or image sequences, objects arranged in the observed or monitored space are detected by the image processor and analyzed with regard to their size and position in relation to a vehicle-fixed coordinate system (e.g. see claim 1), -compressed object information is generated from the analysis, the compressed object information containing the size and position of each object taken into account by the image processor (e.g. see claim 1), -object information is transferred to the terminal by the electronic controller (e.g. see claim 1), and -object information transferred to the terminal is displayed on the graphical user interface by the application software as a graphically reduced, spatial or planar representation, and the representation contains the detected objects as spatial geometric figures or as planar geometric figures, as well as their size, position and state of motion (e.g. see claim 1).
As per claim 8, Kuehnle, as modified by Chen, teaches the features of claim 7, and Chen further teaches wherein the spatial geometrical figures represented are cuboids, cylinders, pyramids, spheres or beams (e.g. see Fig. 7, wherein the bounding boxes comprise cuboids). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Kuehnle to include representing objects in a simplistic form to allow for quick acquisition.
As per claim 9, Kuehnle, as modified by Chen, teaches the features of claim 7, and Chen further teaches wherein the planar geometrical figures represented are rectangles, triangles, circles, or distance bars (e.g. see Fig. 7, wherein the bounding boxes comprise rectangles). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Kuehnle to include representing objects in a simplistic form to allow for quick acquisition.
As per claim 10, Kuehnle, as modified by Chen, teaches the features of claim 7, and Kuehnle further discloses wherein the activation of the driver assistance system is automatic, sensor-controlled, event-controlled or manual (e.g. the Office notes that the system of Kuehnle would be automatic or manual).
As per claim 11, Kuehnle, as modified by Chen, teaches the features of claim 7, and Kuehnle further discloses wherein: the driver assistance system is a reversing assistance system, which, when maneuvering the trailer, is configured to detect obstacles in a rear and/or a lateral space of the trailer (e.g. see Fig. 2, and para 0022, wherein the system includes rear cameras 30, 32 for ascertaining a surrounding image of the vehicle), and Chen further teaches wherein the field of view of the at least one optical or acoustic sensor is divided into sub-areas of different priority, and wherein in each sub-area, a predetermined number of obstacle-relevant objects are taken into account (e.g. see Fig. 7). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Kuehnle to include subdividing sensor data to separate obstacles for improved observation of individual obstacles.
As per claim 18, Kuehnle, as modified by Chen, teaches the features of claim 7, but fails to teach wherein the driver can set a perspective angle of the display on the graphical user interface of the terminal by the application software or can select and switch between a spatial and a planar display. However, Gencaslan teaches that the angle of display may be selected (e.g. see Figs. 9-12, and col. 21, lines 23-38). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Kuehnle to include changing the angle of view to allow a driver to ascertain surrounding environmental conditions for the purpose of improving safety.
As per claim 19, Kuehnle, as modified by Chen, teaches a utility vehicle with a trailer, comprising the driver assistance system of claim 1 (e.g. see Kuehnle, Fig. 2 and rejection of claim 1, wherein a vehicle 12 and trailer 22 are provided).
Claims 3, 6 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Publication No. 2016/0366336, to Kuehnle et al. (hereinafter Kuehnle), in view of U.S. Patent No. 11,341,614, to Chen et al. (hereinafter Chen), and in further view of U.S. Patent Publication No. 2021/0248398, to Gencaslan et al. (hereinafter Gencaslan).
As per claim 3, Kuehnle, as modified by Chen, teaches the features of claim 1, but fails to teach wherein: the driver assistance system is configured to be used as a load compartment monitoring system and to monitor a load compartment in the trailer, and the at least one optical or acoustic sensor is arranged in the load compartment. However, Gencaslan teaches a camera system for a trailer that includes a camera in a compartment of the trailer to monitor its load (e.g. see Fig. 1, and Abstract). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Kuehnle to include further monitoring a load 15-17 within a trailer compartment 12, with a camera 21, to ascertain load balance of the trailer as it can affect movement of the vehicle.
As per claim 6, Kuehnle, as modified by Chen, teaches the features of claim 1, but fails to disclose further comprising an artificial light source, which illuminates the field of view of the at least one optical sensor during acquisition of the image or sequence of images. However, Gencaslan teaches an artificial light source to illuminate a compartment during the capturing of images (e.g. see para 0021). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Kuehnle to further include a light source illuminating objects to improve accuracy of object determination.
As per claim 17, Kuehnle, as modified by Chen, teaches the features of claim 7, but fails to teach wherein the driver assistance system works as a load compartment monitoring system, wherein load occupancy, load shift and/or occupancy change is monitored and shown on the graphical user interface of the terminal. However, Gencaslan teaches a camera system for a trailer that includes a camera in a compartment of the trailer to monitor its load, which is displayed (e.g. see Figs. 1 and 4, and Abstract). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Kuehnle to include further monitoring a load 15-17 within a trailer compartment 12, with a camera 21, to ascertain load balance of the trailer as it can affect movement of the vehicle.
Claims 4, 14 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Publication No. 2016/0366336, to Kuehnle et al. (hereinafter Kuehnle), in view of U.S. Patent No. 11,341,614, to Chen et al. (hereinafter Chen), and in further view of U.S. Patent Publication No. 2019/0162545, to Greenwood.
As per claim 4, Kuehnle, as modified by Chen, teaches the features of claim 1, but fails to teach wherein the electronic control unit is arranged in a front area of the trailer. However, Greenwood teaches that controllers 4 and 37 may be disposed in a front of a vehicle (e.g. see Fig. 2), and further teaches that the controllers may be disposed in the front of an autonomous moving trailer (e.g. see Fig. 12, para 0151). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Kuehnle to include mounting a controller in the front region of a trailer as a matter of designer's choice.
As per claim 14, Kuehnle, as modified by Chen, teaches the features of claim 7, but fails to teach wherein a grid or line pattern is displayed on the graphical user interface of the terminal, which is projected onto a displayed base. However, Greenwood teaches a display 366, wherein a grid pattern is projected (e.g. see Fig. 26 and para 0184). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Kuehnle to include an overlaid pattern to assist in providing a spatialized projection of surrounding objects and the vehicle.
As per claim 15, Kuehnle, as modified by Chen, teaches the features of claim 7, but fails to teach wherein a real background image is displayed on the graphical user interface of the terminal, into which the geometrical figures, which represent the detected objects, are projected. However, Greenwood teaches a display 366 that displays images captured from a camera that is augmented with detected objects 383. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Kuehnle to include utilizing an augmented image for displaying the surroundings of a vehicle for the purpose of simplifying information being presented to a driver.
Claims 12 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Publication No. 2016/0366336, to Kuehnle et al. (hereinafter Kuehnle), in view of U.S. Patent No. 11,341,614, to Chen et al. (hereinafter Chen), and in further view of U.S. Patent Publication No. 2022/0180488, to Nakagawa et al. (hereinafter Nakagawa).
As per claim 12, Kuehnle, as modified by Chen, teaches the features of claim 7, but fails to teach wherein: in the event of a possible rearward collision with an obstacle in a predicted travel path of the utility vehicle, a visual, acoustic and/or haptic collision warning is issued by means of the application software of the terminal, and automatic emergency braking is initiated by means of a trailer braking system in order to avoid a collision. However, Nakagawa teaches automatic application of trailer brakes, and a warning system, to avoid collision based upon captured images, including when the vehicle is moving in reverse (e.g. see Figs. 4A-5C, paras 0012, 0059-0062 and claim 5). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Kuehnle to include a collision avoidance system when the vehicle is in reverse so as to improve safety to surrounding individuals and objects.
As per claim 13, Kuehnle, as modified by Chen, teaches the features of claim 7, but fails to teach wherein the geometric figures and/or a frame and/or background of the user interface appearing on the graphical user interface of the terminal are displayed in different colors and/or in changing colors depending on the size, position and/or state of movement of the objects concerned and depending on a collision warning. However, Nakagawa teaches flashing colors in a user interface based upon a potential collision (e.g. see para 0061). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Kuehnle to include a collision warning system comprising the display of different colors on a user interface for the purpose of gaining the attention of a driver.
Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Publication No. 2016/0366336, to Kuehnle et al. (hereinafter Kuehnle), in view of U.S. Patent No. 11,341,614, to Chen et al. (hereinafter Chen), and in further view of U.S. Patent Publication No. 2021/0183332, to Myles et al. (hereinafter Myles).
As per claim 16, Kuehnle, as modified by Chen, teaches the features of claim 7, but fails to teach wherein a guard function is provided which monitors the data transmission and generates a warning message in the event of a significant interruption of the data transmission between the image processor and the electronic controller and/or between the electronic controller and the terminal. However, Myles teaches issuing a warning when video loss has occurred (e.g. see para 0048). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Kuehnle to include issuing a warning when an image signal is lost to prepare a driver for manual operation.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to James M. McPherson whose telephone number is (313) 446-6543. The examiner can normally be reached Monday through Friday, 7:30 AM to 5:00 PM Eastern, with alternate Fridays off. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Abby Flynn, can be reached at (571) 272-9855. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JAMES M MCPHERSON/Primary Examiner, Art Unit 3663B