DETAILED ACTIONS
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/23/2026 has been entered.
Priority
Acknowledgment is made of applicant's claim that this application is a National Stage of International Application No. PCT/EP2021/079043, filed on October 20, 2021, and of the benefit of foreign priority from German Patent Application No. DE10 2020 213 661.0, filed on October 30, 2020.
Information Disclosure Statement
The information disclosure statement (“IDS”) filed on 01/12/2026 was reviewed and the listed references were noted.
Drawings
The 3-page drawings have been considered and placed on record in the file.
Status of Claims
Claims 11-19 are pending. Claims 1-10 are canceled.
Response to Amendment
The amendment filed 12/23/2025 has been entered. Claims 11-19 remain pending in the application.
Response to Arguments
Applicant’s arguments with respect to claim 11 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 11-12 and 17-19 are rejected under 35 U.S.C. 103 as being unpatentable over Liu et al., "A Multi-Classifier Image Based Vacant Parking Detection System" (2013), hereinafter referred to as Liu, in view of Horihata, US 2023/0118619 A1 (earliest effective filing date - 06/23/2020), hereinafter referred to as Horihata.
Claim 11
Liu discloses a method for analyzing surroundings of a motor vehicle (Liu, Fig. 2, analyzed the surrounding of the vehicle in the input image shown in Fig. 3, the surrounding being the parking space), comprising the following steps:
analyzing the surroundings multiple times to obtain multiple results (Liu, Fig. 2, 3 types of processing techniques, Section III, “we propose three different methods for vacant parking detection. The first detection method is a simple edge detection technique and an edge counting approach. The second technique is based on object counting, while the third method is based on foreground & background detection. Each of these techniques provides one decision independently. Then, we propose to integrate the results from these three methods into a final decision”) using (Liu, Fig. 2, 3 different analysis techniques are used: edge pixel counting, object counting, and statistical analysis), each of the multiple results indicating at least whether an object is located in the surroundings of the motor vehicle or not (Liu, Fig. 2, each processing technique outputs whether the parking space surrounding the vehicle is vacant or occupied; if it is occupied, an object is located in the surroundings of the motor vehicle); and
determining as an overall result that an object is located in the surroundings of the motor vehicle based on a majority of the multiple results indicating that an object is located in the surroundings of the motor vehicle (Liu, Fig. 2, final decision, Section IV, “To combine the decisions from the three methods, we simply use a majority voting rule.”, as shown in Fig. 2, if the majority of the 3 techniques has an occupied output then the final decision is that there is an object located in the surroundings of the vehicle, the surrounding being the parking space), and
determining as the overall result that no object is located in the surroundings of the motor vehicle based on a majority of the multiple results indicating that no object is located in the surroundings of the motor vehicle (Liu, Fig. 2, final decision, Section IV, “To combine the decisions from the three methods, we simply use a majority voting rule.”, as shown in Fig. 2, if the majority of the 3 techniques has a vacant output then the final decision is that there is no object located in the surroundings of the vehicle, the surrounding being the parking space),
the analyzing of the surroundings is carried out an odd number of times (Liu, Fig. 2, 3 processing techniques, three is an odd number) such that the multiple results comprise an odd number of individual results (Liu, Section III, “Each of these techniques provides one decision independently”, each of the three processing techniques provides one individual result, yielding an odd number of individual results), and the determining of the overall result is based on a majority vote among the odd number of individual results (Liu, Section IV, “To combine the decisions from the three methods, we simply use a majority voting rule.”).
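As an illustrative aside, and not part of the claim mapping or the record, the odd-count majority-vote combination that Liu describes can be sketched in a few lines. The function name `majority_vote` and the boolean occupied/vacant encoding are illustrative assumptions, not taken from Liu:

```python
# Illustrative sketch only (hypothetical encoding, not from Liu):
# combine an odd number of independent occupied/vacant decisions
# by a simple majority voting rule.
def majority_vote(decisions):
    """Each decision is True (occupied) or False (vacant).
    With an odd number of decisions, a strict majority always exists."""
    assert len(decisions) % 2 == 1, "odd number of individual results required"
    occupied_votes = sum(decisions)  # True counts as 1
    return occupied_votes > len(decisions) // 2

# Example: edge counting and object counting report occupied, while
# foreground/background detection reports vacant -> overall: occupied.
print(majority_vote([True, True, False]))  # True
```

Because the number of individual results is odd, a tie is impossible and the overall result is always well defined, which is the point of the claimed odd-number-of-times limitation.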
Liu does not explicitly disclose analyzing the surroundings multiple times to obtain multiple results using at least two different sensor technologies.
However, Horihata teaches analyzing the surroundings multiple times (Horihata, Abstract, “acquire at least one of vehicle behavior data indicative of a behavior of at least one vehicle and sensing information of a surrounding monitoring sensor mounted in the at least one vehicle in association with position information of the at least one vehicle”) to obtain multiple results using at least two different sensor technologies (Horihata, [0065], “For example, based on the image, the front camera 11 calculates a relative distance and direction (that is, a relative position) of the detection target object, such as a lane boundary, a roadside, and the vehicle, from the subject vehicle and a travel speed by using a structure from motion (SfM) process.”, [0066], “The millimeter wave radar 12 incorporates a radar ECU that identifies a type of the detection object, based on a size, a travel speed, and reception strength of the detection object.”, [0067], “when the front camera 11 and the millimeter wave radar 12 are the same, both of the front camera 11 and the millimeter wave radar 12 will also be referred to as surrounding monitoring sensors”, [0266], “The street parking presence-absence determination unit F51 may calculate a possibility that the street parking vehicle may actually exist, as detection reliability, based on a combination of whether the existence is detected by the front camera 11, whether the existence is detected by the millimeter wave radar 12, and whether the avoidance action is performed. For example, as illustrated in FIG. 22, a configuration may be adopted as follows. As viewpoints (sensors or behaviors) indicating the existence of the street parking vehicle increase, the possibility is calculated to have higher detection reliability. An aspect of determining the detection reliability illustrated in FIG. 22 is an example, and can be changed as appropriate.”).
[media_image1.png: greyscale image]
Liu and Horihata are both considered to be analogous to the claimed invention because they are in the same field of vehicle surrounding detection. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method as taught by Liu to incorporate the teachings of Horihata of analyzing the surroundings multiple times to obtain multiple results using at least two different sensor technologies. Such a modification is the result of combining prior art elements according to known methods to yield predictable results. The motivation for the proposed modification would have been to more accurately verify whether the street parking vehicle still exists or has disappeared (Horihata, [0127]).
Claim 12
The combination of Liu in view of Horihata discloses the method according to claim 11 (Liu, Fig. 2, analyzed the surrounding of the vehicle in the input image shown in Fig. 3, the surrounding being the parking space), wherein the multiple analysis of the surroundings is carried out using at least one of the following analysis (Liu, Fig. 2, 3 types of processing techniques, Section III, “we propose three different methods for vacant parking detection. The first detection method is a simple edge detection technique and an edge counting approach. The second technique is based on object counting, while the third method is based on foreground & background detection. Each of these techniques provides one decision independently. Then, we propose to integrate the results from these three methods into a final decision”) arrangements: different computer architectures, different programming languages, different analysis methods, different developers of the analysis methods (Liu, Fig. 2, 3 types of processing techniques, the 3 types of processing techniques are different analysis methods).
Claim 13
The combination of Liu in view of Horihata discloses the method according to claim 11 (Liu, Fig. 2, analyzed the surrounding of the vehicle in the input image shown in Fig. 3, the surrounding being the parking space) wherein each of the multiple analysis of the surroundings (Horihata, [0042], “The method includes acquiring at least one of vehicle behavior data indicative of a behavior of at least one vehicle and sensing information of a surrounding monitoring sensor mounted in the at least one vehicle in association with position information; detecting, as a parking-stopping point, a street parking point which is a point where the parking-stopping vehicle is parked on a normal road; and determining whether the parking-stopping vehicle still exists at the detected parking-stopping point based on the acquired information”), is carried out using surroundings sensor data from different surroundings sensors that acquire information about the surroundings of the motor vehicle (Horihata, [0182], “When the street parking point report includes the camera image, the disappearance determination unit G32 may analyze the camera image to determine whether the street parking vehicle still exists. The disappearance determination unit G32 may statistically process an analysis result of the image data transmitted from multiple vehicles to determine whether the street parking vehicle still exists. The statistical processing here includes majority voting or averaging.”), the different surrounding sensors being surroundings sensors from different manufacturers and/or surroundings sensors based on different sensor technologies (Horihata, [0182], “When the street parking point report includes the camera image, the disappearance determination unit G32 may analyze the camera image to determine whether the street parking vehicle still exists. The disappearance determination unit G32 may statistically process an analysis result of the image data transmitted from multiple vehicles to determine whether the street parking vehicle still exists. The statistical processing here includes majority voting or averaging.”, image data transmitted from multiple different vehicles indicates cameras from different manufacturers, [0246], “As a device for detecting the street parking vehicle, LiDAR or sonar may be used. The devices are included in the surrounding monitoring sensors. The millimeter wave radar, the LiDAR, or the sonar can be called a distance measuring sensor. The map cooperation device 50 may be configured to detect the street parking vehicle by jointly using multiple types of the surrounding monitoring sensors. For example, the map cooperation device 50 may detect the street parking vehicle by sensor fusion.”).
Liu and Horihata are both considered to be analogous to the claimed invention because they are in the same field of vehicle surrounding detection. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method as taught by Liu to incorporate the teachings of Horihata wherein each of the multiple analysis of the surroundings is carried out using surroundings sensor data from different surroundings sensors that acquire information about the surroundings of the motor vehicle, the different surrounding sensors being surroundings sensors from different manufacturers and/or surroundings sensors based on different sensor technologies. Such a modification is the result of combining prior art elements according to known methods to yield predictable results. The motivation for the proposed modification would have been to more accurately verify whether the street parking vehicle still exists or has disappeared (Horihata, [0127]).
Claim 16
The combination of Liu in view of Horihata discloses the method according to claim 11 (Liu, Fig. 2, analyzed the surrounding of the vehicle in the input image shown in Fig. 3, the surrounding being the parking space), wherein each of the multiple analysis of the surroundings is carried out using surroundings sensor data from surroundings sensors that acquire information about the surroundings of the motor vehicle (Liu, Fig. 2, 3 types of processing techniques, Section III, “we propose three different methods for vacant parking detection. The first detection method is a simple edge detection technique and an edge counting approach. The second technique is based on object counting, while the third method is based on foreground & background detection. Each of these techniques provides one decision independently. Then, we propose to integrate the results from these three methods into a final decision”), wherein a first analysis of the multiple analysis of the surroundings is carried out using a first analysis method (Liu, Fig. 2, 3 types of processing techniques, Section III, “The first detection method is a simple edge detection technique and an edge counting”) and wherein a second analysis of the multiple analysis of the surroundings is carried out using a second analysis method (Liu, Fig. 2, 3 types of processing techniques, Section III, “The second technique is based on object counting, while the third method is based on foreground & background detection.”), and wherein the second analysis method is free from a comparison of respective surroundings sensor data with reference surroundings sensor data (Liu, Fig. 2, 3 types of processing techniques, Section III, “The second technique is based on object counting”, the object counting method only needs the image input to determine if the parking space is vacant or occupied, it does not need a comparison of the respective surrounding sensor data).
Liu does not explicitly disclose wherein the first analysis method includes a comparison of respective surroundings sensor data with reference surroundings sensor data to detect a change in the surroundings of the motor vehicle, wherein it is determined that an object is located in the surroundings when a change has been detected.
However, Horihata teaches wherein the first analysis method includes a comparison of respective surroundings sensor data with reference surroundings sensor data to detect a change in the surroundings of the motor vehicle, wherein it is determined that an object is located in the surroundings when a change has been detected (Horihata, [0243], “a configuration in which the in-vehicle system 1 detects the street parking vehicle by using the front camera 11 has been disclosed.”, [0246], “As a device for detecting the street parking vehicle, LiDAR or sonar may be used. The devices are included in the surrounding monitoring sensors. The millimeter wave radar, the LiDAR, or the sonar can be called a distance measuring sensor. The map cooperation device 50 may be configured to detect the street parking vehicle by jointly using multiple types of the surrounding monitoring sensors. For example, the map cooperation device 50 may detect the street parking vehicle by sensor fusion”, both images and sensor data are used to determine the existence of the street parking vehicle, [0266], “The street parking presence-absence determination unit F51 may calculate a possibility that the street parking vehicle may actually exist, as detection reliability, based on a combination of whether the existence is detected by the front camera 11, whether the existence is detected by the millimeter wave radar 12, and whether the avoidance action is performed. For example, as illustrated in FIG. 22, a configuration may be adopted as follows. As viewpoints (sensors or behaviors) indicating the existence of the street parking vehicle increase, the possibility is calculated to have higher detection reliability. An aspect of determining the detection reliability illustrated in FIG. 22 is an example, and can be changed as appropriate.”, [0182], “When the street parking point report includes the camera image, the disappearance determination unit G32 may analyze the camera image to determine whether the street parking vehicle still exists. The disappearance determination unit G32 may statistically process an analysis result of the image data transmitted from multiple vehicles to determine whether the street parking vehicle still exists. The statistical processing here includes majority voting or averaging”, the images from each of the multiple vehicles are compared to determine whether the street parking vehicle's existence has changed).
Liu and Horihata are both considered to be analogous to the claimed invention because they are in the same field of vehicle surrounding detection. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method as taught by Liu to incorporate the teachings of Horihata wherein the first analysis method includes a comparison of respective surroundings sensor data with reference surroundings sensor data to detect a change in the surroundings of the motor vehicle, wherein it is determined that an object is located in the surroundings when a change has been detected. Such a modification is the result of combining prior art elements according to known methods to yield predictable results. The motivation for the proposed modification would have been to more accurately verify whether the street parking vehicle still exists or has disappeared (Horihata, [0127]).
Claim 17 is rejected for similar reasons as those described in claim 11. The additional elements in Claim 17, which the combination of Liu and Horihata discloses, include: a device (Liu, Section IV, “computer”) configured to analyze surroundings of a motor vehicle (Liu, Fig. 2, analyzed the surrounding of the vehicle in the input image shown in Fig. 3, the surrounding being the parking space). The proposed combination as well as the motivation for combining the Liu and Horihata references presented in the rejection of Claim 11 apply to Claim 17 and are incorporated herein by reference. Thus, the device recited in Claim 17 is met by Liu and Horihata.
Claim 18 is rejected for similar reasons as those described in claim 11. The additional elements in Claim 18, which the combination of Liu and Horihata discloses, include: a system (Liu, Fig. 2) configured to analyze surroundings of a motor vehicle (Liu, Fig. 2, analyzed the surrounding of the vehicle in the input image shown in Fig. 3, the surrounding being the parking space), comprising: a plurality of surroundings sensors (Liu, Fig. 4, the vehicle has different sensors such as a temperature sensor and an image sensor), each of the surroundings sensors being configured to acquire information about the surroundings of the motor vehicle (Liu, Fig. 4, the vehicle has different sensors such as a temperature sensor and an image sensor, Abstract, “There are mainly four categories of car parking management systems: counter-based, wired-sensor-based, wireless-sensor based, and image-based”); and a device (Liu, Section IV, “computer”). The proposed combination as well as the motivation for combining the Liu and Horihata references presented in the rejection of Claim 11 apply to Claim 18 and are incorporated herein by reference. Thus, the system recited in Claim 18 is met by Liu and Horihata.
Claim 19 is rejected for similar reasons as those described in claim 11. The additional elements in Claim 19, which the combination of Liu and Horihata discloses, include: a non-transitory machine-readable storage medium (Liu, Section IV, “computer”, a computer consists of a storage and a processor) on which is stored a computer program (Liu, Fig. 2) for analyzing surroundings of a motor vehicle (Liu, Fig. 2, analyzed the surrounding of the vehicle in the input image shown in Fig. 3, the surrounding being the parking space), the computer program, when executed by a computer (Liu, Section IV, “computer”). The proposed combination as well as the motivation for combining the Liu and Horihata references presented in the rejection of Claim 11 apply to Claim 19 and are incorporated herein by reference. Thus, the medium recited in Claim 19 is met by Liu and Horihata.
Claims 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Liu in view of Horihata in further view of Nieto et al., "Automatic Vacant Parking Places Management System Using Multicamera Vehicle Detection" (March 2019), hereinafter referred to as Nieto.
Claim 14
The combination of Liu in view of Horihata discloses the method according to claim 11 (Liu, Fig. 2, analyzed the surrounding of the vehicle in the input image shown in Fig. 3, the surrounding being the parking space).
The combination of Liu in view of Horihata does not explicitly disclose wherein each of the multiple analysis of the surroundings is carried out using surroundings sensor data from surroundings sensors that acquire information about the surroundings of the motor vehicle under different framework conditions.
However, Nieto teaches wherein each of the multiple analysis of the surroundings is carried out using surroundings sensor data from surroundings sensors (Nieto, Fig. 1, each camera is considered a different sensor, each camera image is processed individually and has its own result, Section III.A, “The proposed multicamera system is based on a parallel processing of each camera followed by the combination (or fusion) of their individual results”, in Section III.F, equations 5 and 6 show the combination of the individual results regarding the occupancy of the parking space to come up with the final output) that acquire information about the surroundings of the motor vehicle under different framework conditions (Nieto, as seen in Fig. 9, each camera has a different viewing angle and illumination, which means they are all under different framework conditions).
Liu, Horihata, and Nieto are all considered to be analogous to the claimed invention because they are in the same field of vehicle surrounding detection. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method as taught by Liu and Horihata to incorporate the teachings of Nieto wherein each of the multiple analysis of the surroundings is carried out using surroundings sensor data from surroundings sensors that acquire information about the surroundings of the motor vehicle under different framework conditions. Such a modification is the result of combining prior art elements according to known methods to yield predictable results. The motivation for the proposed modification would have been that the proposed system works correctly in challenging scenarios including almost total occlusions, illumination changes, and different weather conditions (Nieto, Abstract).
Claim 15
The combination of Liu in view of Horihata in further view of Nieto discloses the method according to claim 14 (Liu, Fig. 2, analyzed the surrounding of the vehicle in the input image shown in Fig. 3, the surrounding being the parking space), wherein the framework conditions include one or more elements of the following group of framework conditions: a respective position of the surroundings sensors, a respective viewing angle of the surroundings sensors, light conditions (Nieto, as seen in Fig. 9, each camera has a different viewing angle and illumination, which means they are all under different framework conditions).
The proposed combination as well as the motivation for combining the Liu, Horihata, and Nieto references presented in the rejection of Claim 14, apply to Claim 15 and are incorporated herein by reference. Thus, the method recited in Claim 15 is met by Liu, Horihata, and Nieto.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DENISE G ALFONSO whose telephone number is (571)272-1360. The examiner can normally be reached Monday - Friday 7:30 - 5:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amandeep Saini can be reached at (571)272-3382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DENISE G ALFONSO/Examiner, Art Unit 2662
/AMANDEEP SAINI/Supervisory Patent Examiner, Art Unit 2662