DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA, the applicant) regards as the invention.
Claim 1 recites the limitation "the parking spaces" in lines 8 and 10. There is insufficient antecedent basis for this limitation in the claim. Examiner suggests replacing "the parking spaces" with --the one or more parking spaces--.
Claim 15 recites the limitation "the parking spaces" in lines 5 and 6. There is insufficient antecedent basis for this limitation in the claim. Examiner suggests replacing "the parking spaces" with --the one or more parking spaces--.
Claims 2-14 and 16-20 are also rejected based on their dependency on the defective parent claims 1 and 15, respectively, as set forth above.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-2, 6-8, 10-15, 18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Ayyappan (EP 3284649 A1) in view of Ip et al. (US 2023/0195854).
Regarding claim 1, Ayyappan discloses a system for vehicle environment detection (Abstract; a driver assistance device (1)) comprising:
a vehicle (Abstract; motor vehicle (2)) comprising a camera operable to generate an image of an environment surrounding the vehicle (para 0010 and 0029; sensor 5 comprises one or more ultrasonic sensors or one or more laser scanners (i.e., camera) captures environmental data of the motor vehicle 2), the environment comprising one or more parking spaces and an object removably attached to the vehicle (para 0029, 0034, and 0037; sensor 5 generating an image of parking space 10 in surrounding environment 11 of vehicle 2 and trailer 3 that is removably attached to vehicle 2); and
one or more processors (Abstract; para 0040; a computing device 4 is capable of having a processor) operable to:
identify the object (Abstract; para 0034; a trailer 3 is attached to the rear of the motor vehicle 2);
generate a map (para 0037; generating a map based on the captured environment data);
generate a boundary (Abstract; a boundary (10')) of the parking spaces (Abstract; parking space (10)) based on the map (para 0027 and 0040; in the map…the boundary 10' of the parking space 10 is determined);
determine whether a distance between the boundary of the parking spaces and the vehicle is less than a threshold value (para 0013 and 0040-0041; determining the distance between the overall contour 15 and the boundary 10’ of parking space 10); and
output an alert in response to determining that the distance is less than the threshold value (para 0013 and 0040-0041; if the determined distance is less than a predetermined minimum distance, a warning message is output to the driver by the driver assistance device).
Ayyappan discloses claim 1 as enumerated above, but Ayyappan does not explicitly disclose "generate, using a pre-trained depth algorithm, a depth map based on the image" and "generate a boundary of the parking spaces based on the depth map excluding the object" as claimed.
However, Ip discloses that the vehicle will obtain images of the environment surrounding the vehicle with the camera 44 and use a depth map generator, as indicated at 62, to develop a depth map of the surrounding environment. A depth map is generated from the image by associating a distance to each pixel within the image. The depth map and the point cloud maps are fused together, and any dynamic objects are filtered out (i.e., removed) from the final map used for operation of the vehicle. Dynamic structures include moving objects such as pedestrians 58, bicycles 56, as well as the trailer 54′ (Abstract; para 0048-0049 and 0053).
Therefore, taking the combined disclosures of Ayyappan and Ip as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Ip's generation of a depth map from the image and filtering of dynamic objects (including the trailer 54′) from the final map, as taught by Ip, into the invention of Ayyappan for the benefit of identifying and removing moving objects from the map to obtain accurate maps (Ayyappan: para 0002).
Regarding claim 2, the system of claim 1, Ip in the combination further discloses wherein the object is identified from the image using one or more pre-trained real-time object detection models (fig. 3, element 45; para 0042).
Regarding claim 6, the system of claim 1, Ip in the combination further discloses wherein the vehicle comprises one or more steering sensors configured to generate a real-time trajectory of the vehicle (fig. 1; para 0041-0042).
Regarding claim 7, the system of claim 6, Ip in the combination further discloses wherein the camera continuously generates images in a sequence of time frames, and the one or more processors are further operable to identify the object based on a relative motion of the object against the vehicle and the real-time trajectory of the vehicle (para 0048-0049 and 0053).
Regarding claim 8, the system of claim 6, Ip in the combination further discloses wherein the one or more steering sensors comprise a steering angle sensor, a vehicle speed sensor, a gyroscope, or a combination thereof (fig. 1; para 0041-0042).
Regarding claim 10, the system of claim 1, Ayyappan in the combination further discloses wherein the one or more processors are further operable to operate the vehicle to avoid a collision between the vehicle and the parking spaces in response to determining that the distance is less than the threshold value (para 0013-0014 and 0040-0041).
Regarding claim 11, the system of claim 1, Ayyappan in the combination further discloses wherein the boundary of the parking spaces is two-dimensional or three-dimensional (figs. 1-4; the boundary of the parking space 10 is two-dimensional).
Regarding claim 12, the system of claim 1, Ip in the combination further discloses wherein the camera is a monocular camera, a red-green-blue (RGB) camera, or a red-green-blue-depth (RGB-D) camera (fig. 3, element 44; para 0041).
Regarding claim 13, the system of claim 1, Ip in the combination further discloses wherein the camera is a rearview camera, a side-view camera, a front-view camera, or a top-mounted camera (fig. 1, element 44; para 0041; the camera system 44 comprises four cameras disposed on each side of the vehicle 22).
Regarding claim 14, the system of claim 1, Ayyappan in the combination further discloses wherein the parking spaces comprise a parking stall, markings, wheel stops, or a combination thereof (para 0022 and 0040).
Regarding claim 15, this claim recites substantially the same limitations as claim 1 above and is rejected for the same reasons.
Regarding claim 18, this claim recites substantially the same limitations as claims 6-7 above and is rejected for the same reasons.
Regarding claim 20, this claim recites substantially the same limitations as claim 10 above and is rejected for the same reasons.
Allowable Subject Matter
Claims 3-5, 9, 16-17, and 19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Park, US 2024/0394860, discloses capturing an image of a target area corresponding to a parking lot through a camera and generating a depth map for the captured image.
Ramirez Llanos et al., US 2023/0192122 discloses a method and system for locating and tracking a trailer coupler for autonomous vehicle operation.
Miller et al., US 2025/0197281 discloses a system for backing a vehicle and a trailer.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to VAN D HUYNH whose telephone number is (571)270-1937. The examiner can normally be reached 8AM-6PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Stephen R Koziol can be reached at (408) 918-7630. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/VAN D HUYNH/Primary Examiner, Art Unit 2665