DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
This action is in reply to the application filed 23 October 2024.
Claims 1-20 are currently pending and have been examined.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Information Disclosure Statement
The information disclosure statements (IDSs) submitted on 23 January 2025 and 15 May 2025 have been considered by the examiner and initialed copies of the IDSs are hereby attached.
Drawings
The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they include the following reference character(s) not mentioned in the description: element 272 as shown in Figure 26. Corrected drawing sheets in compliance with 37 CFR 1.121(d), or amendment to the specification to add the reference character(s) in the description in compliance with 37 CFR 1.121(b) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Claim Objections
Claim 19 is objected to because of the following informalities:
Claim 19 is a method claim; however, the claim includes a system limitation, “a density restrictor that restricts a density,” rather than reciting a method step. The examiner recommends amending claim 19 to recite a method step. Appropriate correction is required.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:
“an image acquirer that acquires a camera image” as recited in claim 1
“a feature point detector that extracts feature points” as recited in claim 1
“a feature point selector that selects a feature point” as recited in claim 1
“a density restrictor that restricts a density of the feature points” as recited in claim 9 and claim 19
Structural support for these elements can be found in paragraphs [0028] and [0053] and in Figures 4 and 5.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
The claims are generally narrative and indefinite, failing to conform with current U.S. practice. They appear to be a literal translation into English from a foreign document and are replete with grammatical and idiomatic errors.
Claim 1 recites “each of cameras”. There is insufficient antecedent basis for this limitation in the claim. The examiner notes that while this appears to be a typographical error that should recite “each of the cameras”, there is no antecedent basis for “cameras”. The examiner recommends positively reciting a plurality of cameras and acquiring an image from each of the plurality of cameras. Claims 11 and 20 have a similar recitation and are rejected for the same reason.
Claim 2 recites the limitation “a feature point” in line 2. Claim 2 depends from claim 1, which previously recited “a feature point” in line 5. It is not clear whether the feature point in claim 2 is the same as or different from that recited in claim 1. The examiner notes that claim 2 introduces “a feature point” multiple times; it is not clear whether the feature points in line 2, line 3, and line 4 are the same or different feature points. The examiner notes that claims 3-10 and 12-18 have similar recitations and are rejected for the same reasons.
Claim 2 recites “difference in height is small” and “difference in height is large”. The term “small” in claim 2 is a relative term which renders the claim indefinite. The term “small” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. It is not clear what would be considered a small difference in height. The examiner notes that the term “large” is similarly a relative term and indefinite. Claim 12 has similar limitations and is rejected for the same reason.
Claim 3 recites “angle [is] small” and “angle [is] large”. The term “small” in claim 3 is a relative term which renders the claim indefinite. The term “small” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. It is not clear what would be considered a small angle. The examiner notes that the term “large” is similarly a relative term and indefinite. Claims 10 and 13 have similar limitations and are rejected for the same reason.
Claim 4 recites “wherein the feature point selector evaluates a feature point based on a distance between the position on the parking route and the feature point, and registers, on the map, a feature point at which the distance is long in a case where the position on the parking route is a position at which the priority for registration of the feature point is low or a position at which the number of feature points to be registered is small with priority over in a case where the position on the parking route is a position at which the priority for registration of the feature point is high or a position at which the number of feature points to be registered is large”. The limitation is wholly unclear. The claim as written appears to indicate that the low priority registration is prioritized over the high priority registration. For example, as written the claim indicates the feature point that is a long distance away is prioritized when the priority of the feature point is low or if there is a small number of feature points to be registered. However, this appears to contradict the specification at [0170]. The examiner notes the specification at [0170] states “Since the position with a high priority and a large number of points to be registered is a position where the assignment point is high, it may be said that, in a case where the assignment point is high, the close feature point is prioritized, and in a case where the assignment point is low, the distant feature point is prioritized. In this way, at a position where the assignment point is low, the distant feature points are preferentially registered, so that the position accuracy in the section can be secured with a small number of the feature points”. Claim 14 has similar limitations and is rejected for the same reason.
Claim 4 recites “the number of feature points [is] small” and “the number of feature points [is] large”. The term “small” in claim 4 is a relative term which renders the claim indefinite. The term “small” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. It is not clear what would be considered a small number. The examiner notes that the term “the number of feature points [is] large” is similarly a relative term and indefinite. Claims 9, 14, and 19 have similar limitations and are rejected for the same reason.
Claim 7 recites “the vehicle speed is low” and “the vehicle speed is high”. The term “speed is low” in claim 7 is a relative term which renders the claim indefinite. The term “speed is low” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. It is not clear what would be considered a low speed. The examiner notes that the term “the vehicle speed is high” is similarly a relative term and indefinite. Claim 17 has similar limitations and is rejected for the same reason.
Claim 7 recites “distance is short” and “distance is long”. The term “distance is short” in claim 7 is a relative term which renders the claim indefinite. The term “distance is short” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. It is not clear what would be considered a short distance. The examiner notes that the term “distance is long” is similarly a relative term and indefinite. Claims 10 and 17 have similar limitations and are rejected for the same reason.
Claim 8 recites “route length is short” and “route length is long”. The term “route length is short” in claim 8 is a relative term which renders the claim indefinite. The term “route length is short” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. It is not clear what would be considered a short route length. The examiner notes that the term “route length is long” is similarly a relative term and indefinite. Claim 18 has similar limitations and is rejected for the same reason.
Claim 10 recites “the same distance” in line 3. There is insufficient antecedent basis for this limitation.
Claim 10 recites “the distance”. There is insufficient antecedent basis for this limitation.
Claim 10 recites “an optical axis” in line 5. Claim 10 depends from claim 1, which previously recited an optical axis in line 14. It is not clear whether the optical axis of claim 10 is the same as or different from that recited in claim 1.
Claims 2-10 depend from claim 1 and are similarly rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, based on their dependency on claim 1.
Claims 12-19 depend from claim 11 and are similarly rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, based on their dependency on claim 11.
The claims are replete with indefiniteness issues. While the examiner has attempted to identify all of them, any newly identified 112(a)/(b) rejections will not be considered a new basis of rejection because Applicant's initial submission has significant errors which cause the claims to be difficult to examine.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claim(s) 1, 4, 5, 7-9, 11, 14, 15, and 17-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Yamaguchi (JP-2018075866-A, hereinafter “Yamaguchi”) in view of Grimm (US-20130085637-A1, hereinafter “Grimm”).
Regarding claim 1, Yamaguchi discloses a parking assistance apparatus, comprising:
an image acquirer that acquires a camera image from each of [[cameras]] for viewing in different directions respectively around a vehicle (see at least Yamaguchi, Figure 1, control unit 15 obtains images from camera 11. See also [0010] “As shown in FIG. 1, the attitude estimation device 100 according to the present embodiment includes a camera 11 (ambient detection unit) for imaging the surrounding environment of a vehicle, feature points included in three-dimensional data and three-dimensional data around the vehicle,”);
a feature point detector that extracts feature points from the camera image (see at least Yamaguchi, Figure 1, feature detection unit 152. See also [0012] “The control unit 15 includes a feature point / feature selection unit 151 that selects a feature point or a feature included in the environment map 12 based on a condition to be described later with reference to the environment map 12. Furthermore, it is equipped with a feature point / feature detection unit 152 for detecting feature points or features around the vehicle from the image captured by the camera 11”);
a feature point selector that selects a feature point to be registered on a map by evaluating the feature points [[ in learning travel in which the vehicle is manually parked and a parking route and a parking position of the vehicle are registered on the map]] (see at least Yamaguchi, Figure 1, feature selection unit 151. See also [0012] “The control unit 15 includes a feature point / feature selection unit 151 that selects a feature point or a feature included in the environment map 12 based on a condition to be described later with reference to the environment map 12. …” The examiner notes that [0028] teaches that the information on the feature points is stored to be used for parking the vehicle from the next time onward, which suggests that the selection of feature points occurs during learning travel, as they are stored and used for future driving.);
a vehicle controller that parks the vehicle based on the map in automatic parking (see at least Yamaguchi, Figure 1, control unit 15 and [0035] “Next, in step S22 of FIG. 2, the control unit 15 controls the vehicle to travel along the target route L1 (see FIG. 3), and moves the vehicle V1 to the parking target position 21.”),
wherein
the feature point selector varies a priority for registration of the feature points or the number of feature points for registration in accordance with a position on the parking route, or selects, at the position on the parking route, the feature point to be registered on the map, based on a position of the camera or a relative position of the feature point with respect to an optical axis direction of the camera (see at least Yamaguchi, [0014] [0037-0038] “The posture of the host vehicle on the environment map 12 is estimated by detecting the distance to and angle from the feature point. Generally, as the distance between the vehicle and the feature point increases, the posture of the vehicle becomes easier to estimate. For example, when there is a feature point in front of the vehicle, if the angle of the attitude of the vehicle changes by 1 degree, the position of the feature point is detected to be greatly shifted as the distance between the vehicle and the feature point is larger . On the contrary, when the distance between the vehicle and the feature point is small, even if the angle of the attitude of the vehicle changes by 1 degree, the deviation of the position of the feature point is small and it is difficult to detect. In this way, when the distance between the vehicle V 1 and the feature point is small, it is difficult to estimate the posture of the vehicle V 1. Therefore, in the present embodiment, in the vicinity of the parking target position (parking target position As the distance from the object is smaller, many feature points are stored. As a result, it becomes possible to refer to a plurality of feature points, so that the attitude of the vehicle can be estimated with high accuracy…Therefore, as the distance from the parking target position is smaller, the accuracy of the attitude of the vehicle V 1 is required”. 
The examiner notes that storing an increased number of feature points based on the distance indicates a priority for those points, and further Yamaguchi teaches that the feature points at shorter distances are required for high accuracy during parking.).
Yamaguchi does not explicitly teach that there are a plurality of cameras and acquiring a camera image from each of the cameras for viewing in different directions respectively around a vehicle, nor does Yamaguchi explicitly teach evaluating the feature points in learning travel in which the vehicle is manually parked and a parking route and a parking position of the vehicle are registered on the map.
Grimm teaches a parking assistance apparatus using reference features for assisting parking, including a plurality of cameras and acquiring a camera image from each of the cameras for viewing in different directions respectively around a vehicle (see at least Grimm, Figure 1, cameras 10, 11, 12, and 13 and [0044] “The sensor device 7, i.e. both the ultrasound sensors 8, 9, 16, 17 and the cameras 10, 11, 12, 13, record data about the surroundings of the motor vehicle 1 and transfer the recorded data to the controller 3. This can process the recorded data and then assist the driver when parking depending on a result of the data processing.”) and evaluating the feature points in learning travel in which the vehicle is manually parked and a parking route and a parking position of the vehicle are registered on the map (see at least Grimm, Figure 1, cameras 10, 11, 12, and 13 and [0047] “The driver assistance device 2 is first changed into a learning mode. This can, for example, take place as a result of an input by the driver, i.e. by operating a control device, for example. In the learning mode a surrounding area 22 of the parking space 19 is learned by the driver assistance device 2. The surrounding area 22 of the parking space 19 also includes the lateral boundaries of the parking space 19, i.e. in the present case the side walls 20, 21. In the learning mode the motor vehicle 1 is manually parked by the driver in the garage 18 once. While doing so, the sensor device 7 records reference data about the surrounding area 22 starting from a reference starting position 23, from which the motor vehicle 1 is moved in the learning mode. These reference data are stored in the driver assistance device 2, namely in the memory 4 of the controller 3.” See also [0050]).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Yamaguchi with the teachings of Grimm to utilize a plurality of cameras and to register feature points during the learning travel, with a reasonable expectation of success, because, as Grimm teaches, the additional cameras provide additional data regarding the surrounding environment, and the data stored during the learning process can be used as a reference to recognize the parking space and other objects during the operation mode to assist with autonomously parking the vehicle (see at least Grimm [0008] [0061]).
Regarding claim 4, the combination of Yamaguchi and Grimm teach the parking assistance apparatus according to claim 1, wherein the feature point selector evaluates a feature point based on a distance between the position on the parking route and the feature point, and registers, on the map, a feature point at which the distance is long in a case where the position on the parking route is a position at which the priority for registration of the feature point is low or a position at which the number of feature points to be registered is small with priority over in a case where the position on the parking route is a position at which the priority for registration of the feature point is high or a position at which the number of feature points to be registered is large (see at least Yamaguchi, [0014] [0037-0038] “The posture of the host vehicle on the environment map 12 is estimated by detecting the distance to and angle from the feature point. Generally, as the distance between the vehicle and the feature point increases, the posture of the vehicle becomes easier to estimate. For example, when there is a feature point in front of the vehicle, if the angle of the attitude of the vehicle changes by 1 degree, the position of the feature point is detected to be greatly shifted as the distance between the vehicle and the feature point is larger . On the contrary, when the distance between the vehicle and the feature point is small, even if the angle of the attitude of the vehicle changes by 1 degree, the deviation of the position of the feature point is small and it is difficult to detect. In this way, when the distance between the vehicle V 1 and the feature point is small, it is difficult to estimate the posture of the vehicle V 1. Therefore, in the present embodiment, in the vicinity of the parking target position (parking target position As the distance from the object is smaller, many feature points are stored. 
As a result, it becomes possible to refer to a plurality of feature points, so that the attitude of the vehicle can be estimated with high accuracy…Therefore, as the distance from the parking target position is smaller, the accuracy of the attitude of the vehicle V 1 is required”. The examiner notes that storing an increased number of feature points based on the distance indicates a priority for those points, and further Yamaguchi teaches that the feature points at shorter distances are required for high accuracy during parking. Further, the examiner notes the 112(b) rejection above; as best understood by the examiner, in light of the specification, at a position where the assignment point is high, the close feature points are prioritized (see instant specification [0170]).
Regarding claim 5, the combination of Yamaguchi and Grimm teach the parking assistance apparatus according to claim 1, wherein the feature point selector registers a feature point in accordance with the position on the parking route, and registers a feature point to be registered at a terminal point with priority over a feature point to be registered in a section, or increases the number of feature points to be registered at the terminal point to be more than the number of feature points to be registered in the section (see at least Yamaguchi [0014] and [0037-0038] “Therefore, in the present embodiment, in the vicinity of the parking target position (parking target position As the distance from the object is smaller, many feature points are stored. As a result, it becomes possible to refer to a plurality of feature points, so that the attitude of the vehicle can be estimated with high accuracy…” The examiner interprets the terminal point as the parking position and Yamaguchi teaches that the number of feature points increases as the vehicle approaches the parking position.).
Regarding claim 7, the combination of Yamaguchi and Grimm teach the parking assistance apparatus according to claim 1, wherein the feature point selector registers a feature point based on vehicle speed information indicating a vehicle speed of the vehicle or distance information indicating a distance between the vehicle and an obstacle, and registers a feature point at a position at which the vehicle speed is low or deceleration is present with priority over at a position at which the vehicle speed is high and deceleration is absent, or registers a feature point at a position at which the distance is short with priority over at a position at which the distance is long (see at least Yamaguchi [0014] and [0037-0038] “Therefore, in the present embodiment, in the vicinity of the parking target position (parking target position As the distance from the object is smaller, many feature points are stored. As a result, it becomes possible to refer to a plurality of feature points, so that the attitude of the vehicle can be estimated with high accuracy…” The examiner notes that the vehicle of Yamaguchi decelerates to stop in the parking space. Further, the vehicle speed is low (at a decelerated speed) when in the vicinity of the target parking position as compared with areas outside the vicinity of the parking area, and thus Yamaguchi teaches that the number of feature points increases at a low vehicle speed.).
Regarding claim 8, the combination of Yamaguchi and Grimm teach the parking assistance apparatus according to claim 1, wherein the feature point selector evaluates a route length between the position on the parking route and the parking position, and registers a feature point to be registered at a position at which the route length is short with priority over a feature point to be registered at a position at which the route length is long, or registers more feature points at the position at which the route length is short than at the position at which the route length is long (see at least Yamaguchi [0014] and [0037-0038] “Therefore, in the present embodiment, in the vicinity of the parking target position (parking target position As the distance from the object is smaller, many feature points are stored. As a result, it becomes possible to refer to a plurality of feature points, so that the attitude of the vehicle can be estimated with high accuracy…” The examiner notes that the route length is short as the vehicle approaches the target parking position, and Yamaguchi teaches increasing the feature points as the vehicle approaches the target parking position. Thus, Yamaguchi teaches increasing the feature points when the route length is short as compared to when the route length is long.).
Regarding claim 9, the combination of Yamaguchi and Grimm teaches the parking assistance apparatus according to claim 1, wherein:
the feature point selector includes a density restrictor that restricts a density of the feature points to be registered, and
the density restrictor varies the density in accordance with the position on the parking route and increases the density at a position at which the priority for registration of the feature points is high or a position at which the number of feature points to be registered is large to be more than the density at a position at which the priority is low and the number of feature points to be registered is small (see at least Yamaguchi [0014] “Therefore, in the present embodiment, in the vicinity of the parking target position (parking target position As the distance from the object is smaller, many feature points are stored. As a result, it becomes possible to refer to a plurality of feature points, so that the attitude of the vehicle can be estimated with high accuracy…” See also [0037-0038] “In the above embodiment, the neighboring region R1 is set around the point (Q1 in FIG. 3) of the parking target position 21, and when there is a feature point outside the neighboring region R1, one feature point is selected , And when there are no feature points outside the neighboring region R 1, two feature points are selected inside the neighboring region R 1…. For example, it is also possible to divide the distance from the parking target position 21 into three stages, and to gradually increase the number of feature points to be selected as approaching the parking target position 21 in each area…[0038] …At this time, as the distance from the parking target position 21 is closer, more feature points are stored. 
Thus, when parking the vehicle V 1 to the parking target position 21 using the environment map, it is possible to use a plurality of feature points as the vehicle V 1 approaches the parking target position, so that the posture of the vehicle V 1 can be estimated with high accuracy , The vehicle can be parked at a correct position with respect to the parking target position 21.” The examiner notes that Yamaguchi teaches increasing the density of registered points as the vehicle approaches the parking target position, which is considered a high priority location and which, as Yamaguchi teaches, requires high accuracy.).
Claims 11 and 20 are rejected under the same rationale, mutatis mutandis, as claim 1, above (see at least Yamaguchi, control unit 15 [0017] “This control is executed by the control unit 15 shown in FIG. 1.” and Grimm, Figure 1 controller, [0042] “The driver assistance device 2 comprises a controller 3, which can comprise a memory 4, a digital signal processor 5 and a microcontroller 6.” See also [0061].).
Claim 14 is rejected under the same rationale, mutatis mutandis, as claim 4, above.
Claim 15 is rejected under the same rationale, mutatis mutandis, as claim 5, above.
Claim 17 is rejected under the same rationale, mutatis mutandis, as claim 7, above.
Claim 18 is rejected under the same rationale, mutatis mutandis, as claim 8, above.
Claim 19 is rejected under the same rationale, mutatis mutandis, as claim 9, above.
Claims 2 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Yamaguchi and Grimm in view of Yamamoto (JP-2018111377-A, hereinafter “Yamamoto”).
Regarding claim 2, the combination of Yamaguchi and Grimm teaches the parking assistance apparatus according to claim 1, but does not explicitly disclose wherein the feature point selector evaluates a feature point based on a difference in height between the camera and the feature point, and registers, on the map, a feature point at which the difference in height is small with priority over a feature point at which the difference in height is large.
Yamamoto teaches wherein the feature point selector evaluates a feature point based on a difference in height between the camera and the feature point, and registers, on the map, a feature point at which the difference in height is small with priority over a feature point at which the difference in height is large (see at least Yamamoto [0059] “Further, in the order of priority described above, it is desirable that the object having a feature point with a small height difference from the optical axis of the left side camera 10 is set higher for objects located in the vicinity of the vehicle 1 . This is because there is a low possibility that the feature point will not be visible as the vehicle V 1 moves, as long as the feature point has a small height difference from the optical axis of the left side camera 10. For example, if it is a white line drawn on the road surface, it may be hidden by the own vehicle V 1 as the host vehicle V 1 moves. Increasing the ranking of feature points with small differences in height from the optical axis of the side camera 10 increases the possibility of tracking movement of feature points with high rank. Therefore, the detection accuracy of the opening degree change of the left side door is improved.”).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Yamaguchi and Grimm with the teaching of Yamamoto to prioritize feature points with a small height difference, with a reasonable expectation of success, because, as Yamamoto teaches, this increases the possibility of tracking important feature points (see Yamamoto [0059]).
Claim 12 is rejected under the same rationale, mutatis mutandis, as claim 2, above.
Claims 3, 6, 13, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Yamaguchi and Grimm in view of Watanabe et al. (US-20200191975-A1, hereinafter “Watanabe”).
Regarding claim 3, the combination of Yamaguchi and Grimm teaches the parking assistance apparatus according to claim 1, but does not explicitly disclose wherein the feature point selector evaluates a feature point based on an angle of the feature point with respect to the optical axis direction of the camera, and registers, on the map, a feature point at which the angle is small with priority over a feature point at which the angle is large.
Watanabe teaches wherein the feature point selector evaluates a feature point based on an angle of the feature point with respect to the optical axis direction of the camera, and registers, on the map, a feature point at which the angle is small with priority over a feature point at which the angle is large (see at least Watanabe [0144] “Meanwhile, since the surrounding image is captured by using a fish-eye lens, the distortion of the image becomes smaller as approaching the center potion of the surrounding image, and the distortion of the image becomes larger as approaching the end portion of the surrounding image. Therefore, in the case of performing processing of checking feature points of the surrounding image against feature points of the key frame, the checking accuracy is high when using only feature points near the center portion as compared with the case of using feature points away from the center portion. As a result, the accuracy of self-position estimation by the image self-position estimation unit 231 is improved.” The examiner notes that the feature point that is close to the central portion or the optical axis is given priority over the feature point with a large angle that is away from the central portion or the optical axis.).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Yamaguchi and Grimm with the teaching of Watanabe, with a reasonable expectation of success, because, as Watanabe teaches, feature points near the center portion of the image, i.e., near the optical axis, are checked with higher accuracy, which improves the accuracy of self-position estimation (Watanabe [0144]).
Regarding claim 6, the combination of Yamaguchi and Grimm teaches the parking assistance apparatus according to claim 1, including registers with priority, on the map, in any section of a turning section and a posture convergence section, a feature point in a tangent direction of the section or increases the number of feature points to be registered, the turning section being a section in which the steering angle is equal to or greater than a predetermined threshold value or a change amount of the posture angle in the section is equal to or greater than a predetermined threshold value, the posture convergence section being a section in which a difference between the posture angle of the vehicle and a posture angle of the vehicle at the parking position is equal to or less than a predetermined angle threshold value (see at least Yamaguchi Figure 3: as the vehicle approaches the parking position in a turn, the number of feature points increases. For example, see [0014] and [0037-0038] “Therefore, in the present embodiment, in the vicinity of the parking target position (parking target position As the distance from the object is smaller, many feature points are stored. As a result, it becomes possible to refer to a plurality of feature points, so that the attitude of the vehicle can be estimated with high accuracy…” The examiner interprets the terminal point as the parking position, and Yamaguchi teaches that the number of feature points increases as the vehicle approaches the parking position.).
However, the combination of Yamaguchi and Grimm does not explicitly teach wherein the feature point selector evaluates a steering angle of the vehicle or a posture angle of the vehicle.
Watanabe teaches wherein the feature point selector evaluates a steering angle of the vehicle or a posture angle of the vehicle and registers with priority, on the map, in any section of a turning section and a posture convergence section, a feature point in a tangent direction of the section or increases the number of feature points to be registered, the turning section being a section in which the steering angle is equal to or greater than a predetermined threshold value or a change amount of the posture angle in the section is equal to or greater than a predetermined threshold value, the posture convergence section being a section in which a difference between the posture angle of the vehicle and a posture angle of the vehicle at the parking position is equal to or less than a predetermined angle threshold value (see at least Watanabe [0056-0057] “The vehicle state detection unit 143 performs processing of detecting the state of the vehicle 10 on the basis of the data or signal from the respective units of the vehicle control system 100. The state of the vehicle 10 to be detected includes, for example, speed, acceleration, steering angle, presence/absence and content of abnormality, the state of the driving operation, position and inclination of the power seat, the state of the door lock, the state of other on-vehicle devices, and the like. The vehicle state detection unit 143 supplies the data indicating the results of the detection processing to the situation recognition unit 153 of the situation analysis unit 133, and the emergency event avoidance unit 171 of the operation control unit 135, for example…. 
[0057] The self-position estimation unit 132 performs processing of estimating a position, a posture, and the like of the vehicle 10 on the basis of the data or signal from the respective units of the vehicle control system 100, such as the vehicle exterior information detection unit 141 and the situation recognition unit 153 of the situation analysis unit 133. Further, the self-position estimation unit 132 generates a local map (hereinafter, referred to as the self-position estimation map) to be used for estimating a self-position as necessary. The self-position estimation map is, for example, a high precision map using a technology such as SLAM (Simultaneous Localization and Mapping).” See also Figure 4 and [0090] “The key frame contains, for example, data indicating the position and feature amount in an image coordinate system of each feature point detected in the reference image, and data indicating the position and posture of the map generation vehicle in a map coordinate system at the time when the reference image is captured (i.e., position and posture at which the reference image is captured).”).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Yamaguchi and Grimm with the teaching of Watanabe, with a reasonable expectation of success, because, as Watanabe teaches, the vehicle position and posture can be used as a reference when “self-positioning” the vehicle at a later time, which results in improved accuracy in “self-positioning” (see at least Watanabe [0011]).
Claim 13 is rejected under the same rationale, mutatis mutandis, as claim 3, above.
Claim 16 is rejected under the same rationale, mutatis mutandis, as claim 6, above.
Allowable Subject Matter
Claim 10 would be allowable if rewritten to overcome the rejection(s) under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), 2nd paragraph, set forth in this Office action and to include all of the limitations of the base claim and any intervening claims.
The combination of Yamaguchi and Grimm teaches the parking assistance apparatus according to claim 1, including wherein the feature point selector evaluates, with reference to a trajectory of each of left-right cameras, and registers with priority, on the map, a feature point at which the distance from the trajectory of the camera is short (see at least Yamaguchi [0014] and [0037-0038] “Therefore, in the present embodiment, in the vicinity of the parking target position (parking target position As the distance from the object is smaller, many feature points are stored. As a result, it becomes possible to refer to a plurality of feature points, so that the attitude of the vehicle can be estimated with high accuracy…” See also Grimm for the left and right cameras, Fig. 1, cameras 12, 13.).
However the combination does not teach wherein the feature point selector evaluates a feature point by dividing a space to obtain a region with annular rings having the same distance from the trajectory of the camera, registers with priority, on the map, a feature point at which a difference in angle with an optical axis of the camera is small in a range on one of the annular rings having the same distance from the trajectory of the camera, and registers with priority, on the map, a feature point at which the distance from the trajectory of the camera is short, for feature points at which the difference in angle with the optical axis of the camera is the same.
While Watanabe teaches that an image becomes more distorted with increasing distance from the center of the image, which corresponds to an increasing angle from the central portion of the optical axis (see at least Watanabe [0144] “Meanwhile, since the surrounding image is captured by using a fish-eye lens, the distortion of the image becomes smaller as approaching the center potion of the surrounding image, and the distortion of the image becomes larger as approaching the end portion of the surrounding image. Therefore, in the case of performing processing of checking feature points of the surrounding image against feature points of the key frame, the checking accuracy is high when using only feature points near the center portion as compared with the case of using feature points away from the center portion. As a result, the accuracy of self-position estimation by the image self-position estimation unit 231 is improved.”), Watanabe does not explicitly teach wherein “the feature point selector evaluates a feature point by dividing a space to obtain a region with annular rings having the same distance from the trajectory of the camera, registers with priority, on the map, a feature point at which a difference in angle with an optical axis of the camera is small in a range on one of the annular rings having the same distance from the trajectory of the camera, and registers with priority, on the map, a feature point at which the distance from the trajectory of the camera is short, for feature points at which the difference in angle with the optical axis of the camera is the same” as required by claim 10.
Further, the examiner cannot determine a reasonable motivation, either in the known prior art or the existing case law, to combine the known elements to render the claimed invention without the use of impermissible hindsight.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US-20250378583-A1 to Eguchi is cited for showing selecting image feature points in an image and equalizing the images due to parallax including a discussion of the angle of the optical axis [0085-0086].
US-20250018933-A1 to Watanabe is cited for showing selecting feature points during a learning portion of travel [0063] and Figure 2 and discusses the relative positioning based on the optical axis [0034].
US-20090243889-A1 to Suhr is cited for showing feature selection including determining a camera height and the height based on the optical axis [0011], [0081], [0089-0092].
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JENNIFER M. ANDA whose telephone number is (571)272-5042. The examiner can normally be reached Monday-Friday 8:30 am-5pm MST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aniss Chad can be reached on (571)270-3832. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JENNIFER M ANDA/Examiner, Art Unit 3662