DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in the instant application, Application No. 18/519,362, filed on 27 November 2023.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: “parking space detection module” and “gradient estimation module” in claim 1.
Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
As computer-implemented means-plus-function limitations, “The corresponding structure is not simply a general purpose computer by itself but the special purpose computer as programmed to perform the disclosed algorithm. Aristocrat, 521 F.3d at 1333, 86 USPQ2d at 1239. Thus, the specification must sufficiently disclose an algorithm to transform a general purpose microprocessor to the special purpose computer.” MPEP 2181.II.B. Accordingly, the above limitations are interpreted as follows:
Parking space detection module
Structure: A processor with instructions, see [0072] and [00114].
Algorithm: Receive an image of a parking space from a camera [0056]; Detect a parking space in the image [0056]; Recognize the type of parking space based on the image [0057]; Detect key points of the parking space in the image [0057]; Recognize the location of the parking space based on the key points [0057]; Wherein detecting the key points comprises detecting two start points from the entrance line of the parking space in the image and detecting two end points from the end line of the parking space, recognizing the two start points and the two end points as key points, and recognizing the location of the parking space using the detected key points [0058].
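For illustration only, the recited key-point flow may be sketched as follows. The function names and the centroid notion of "location" are the examiner's assumptions for the sketch, not the applicant's disclosed implementation.

```python
# Illustrative sketch of the recited key-point flow. The helper names
# and the centroid notion of "location" are assumptions, not the
# applicant's disclosed code.

def detect_key_points(entrance_line, end_line):
    """Return the four key points: two start points detected from the
    entrance line and two end points detected from the end line."""
    (s1, s2) = entrance_line   # two start points on the entrance line
    (t1, t2) = end_line        # two end points on the end line
    return [s1, s2, t1, t2]

def locate_parking_space(key_points):
    """Recognize the location of the space as the centroid of its four
    key points (one plausible notion of "location")."""
    xs = [p[0] for p in key_points]
    ys = [p[1] for p in key_points]
    n = len(key_points)
    return (sum(xs) / n, sum(ys) / n)

# Example: a 2.5 m x 5 m perpendicular space with its entrance at y = 0.
kps = detect_key_points(((0.0, 0.0), (2.5, 0.0)),
                        ((0.0, 5.0), (2.5, 5.0)))
print(locate_parking_space(kps))  # (1.25, 2.5)
```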
Gradient estimation module
Structure: A processor with instructions, see [0072] and [00114].
Algorithm: Obtain angle information formed by key points in the type of the parking space recognized by the parking space detection module [0067]; Estimate a rotation matrix based on preset estimation conditions based on the angle information and the type of parking space [0067]; Estimate the gradient of the parking space based on the estimated rotation matrix and a predetermined reference rotation matrix [0067]; Wherein the reference rotation matrix is a rotation matrix for a case where there is no gradient of the road surface [0068]; Wherein estimating the gradient of the parking space based on the estimated rotation matrix and a predetermined reference rotation matrix comprises estimating the gradient of the parking space based on a difference between the estimated rotation matrix and the predetermined reference rotation matrix [0068].
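For illustration only, the recited comparison of an estimated rotation matrix against a reference rotation matrix (the no-gradient case) may be sketched as follows. The pure-pitch reading of the relative rotation is the examiner's simplifying assumption for the sketch, not the applicant's disclosed implementation.

```python
import math

def rot_x(pitch):
    """3x3 rotation about the x-axis by `pitch` radians."""
    c, s = math.cos(pitch), math.sin(pitch)
    return [[1.0, 0.0, 0.0],
            [0.0, c, -s],
            [0.0, s,  c]]

def transpose(m):
    return [list(row) for row in zip(*m)]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def estimate_gradient(r_est, r_ref):
    """Estimate the gradient (pitch, radians) from the difference
    between the estimated rotation matrix and the reference matrix:
    R_diff = R_est @ R_ref^T; for a pure pitch, R_diff[2][1] = sin(pitch)."""
    r_diff = matmul(r_est, transpose(r_ref))
    return math.asin(r_diff[2][1])

# Reference: no gradient (identity). Estimated: a 3-degree upward slope.
r_ref = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
r_est = rot_x(math.radians(3.0))
print(math.degrees(estimate_gradient(r_est, r_ref)))  # ≈ 3.0
```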
If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 11-13 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Kuwahara (US 20200290600 A1) in view of Lee (US 20180025508 A1).
Regarding claim 11, Kuwahara discloses:
A method for estimating a parking space gradient, the method comprising: recognizing, by a processor ([0051] a processor executes a program which is understood as instructions), a type of a parking space ([0114]-[0115] the type of parking space is determined) based on an image captured by an image capturing device ([0042] a camera captures images) and recognizing a location of the parking space by detecting a plurality of key points corresponding to the parking space ([0108] marking lines are detected. [0077] and Fig. 7 a marking line 151 has endpoints S1 and T1 and marking line 152 has endpoints S2 and T2 which are understood as keypoints. Therefore, marking lines are understood to include keypoints. Fig. 6, the marking lines are shown in relation to the vehicle which is understood as a location);
obtaining, by the processor, angle information formed by the key points in the type of the parking space ([0113] "calculates the angle θ obtained between the computed parking baseline Z and the plurality of marking lines". As the applicant shows the key points in the corners of the parking space in Fig. 2, angles between the entrance line and marking lines are understood as angles formed by the key points);
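For illustration only, the mapped computation of an angle formed at a key point between the entrance (baseline) and a marking line may be sketched as follows. The function name and the two-ray formulation are the examiner's assumptions for the sketch, not a reproduction of Kuwahara's disclosed computation.

```python
import math

def angle_between(p, q, r):
    """Angle at key point p between ray p->q (along the entrance line)
    and ray p->r (along the marking line), in degrees. A sketch: reflex
    angles are not normalized to the interior angle."""
    v1 = (q[0] - p[0], q[1] - p[1])
    v2 = (r[0] - p[0], r[1] - p[1])
    a = math.atan2(v1[1], v1[0]) - math.atan2(v2[1], v2[0])
    return abs(math.degrees(a)) % 360.0

# Perpendicular space: start point (0, 0), the other start point at
# (2.5, 0), and the end point straight ahead at (0, 5).
print(angle_between((0.0, 0.0), (2.5, 0.0), (0.0, 5.0)))  # ≈ 90.0
```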
Kuwahara does not disclose expressly estimating a rotation matrix based on estimation conditions and estimating a gradient of the parking space using the estimated rotation matrix and a predetermined reference rotation matrix.
Lee discloses:
estimating, by the processor, a rotation matrix ([0059] a rotation matrix is determined) based on estimation conditions previously set according to the angle information and the type of the parking space ([0061] the rotation matrix is estimated based on detected feature point patterns, i.e. features of the road such as lines, see [0039]. [0035] and Fig. 2B, correcting the view is shown. In order to convert the image view from the left side of Fig. 2B to the right side of Fig. 2B, a final desired or expected shape must be known. This expected shape, the right side of Fig. 2B, is understood as being indicative of the type of space that is previously set according to angle information (see how the angles are all right angles). Notice also how Fig. 2B discloses a conversion due to type similar to the applicant's Fig. 3B);
and estimating, by the processor, a gradient of the parking space ([0072] a slope of the ground surface is estimated. The slope is understood as a gradient of the parking space) using the estimated rotation matrix and a predetermined reference rotation matrix ([0073] the slope of the ground is estimated based on the attitude angle of the camera, i.e. a predetermined rotation matrix, and a rotation matrix of the camera against the ground, i.e. the estimated rotation matrix).
Kuwahara and Lee are combinable because they are from the same field of endeavor of a view monitoring system during parking (Kuwahara, [0002] and [0004] ; Lee, [0008]).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the gradient estimation by rotation estimation as disclosed by Lee with the invention of Kuwahara.
The motivation for doing so would have been "When the attitude angle of the camera is estimated in this way, it is possible to process the acquired image to generate an around view (S970) and obtain a more accurate around view image in consideration of the ground slope" (Lee [0077]).
Therefore, it would have been obvious to combine Lee with Kuwahara to obtain the invention as specified in claim 11.
Regarding claim 12, Kuwahara in view of Lee discloses the subject matter of claim 11.
Kuwahara further discloses:
The method of claim 11, wherein the recognizing of the location of the parking space includes detecting two start points from an entrance line of the parking space ([0077] and Fig. 7, S1 and S2 are understood as two start points at the entrance line) and two end points from an end line of the parking space as the key points ([0077] and Fig. 7, T1 and T2 are understood as two end points at the end line), and recognizing the location of the parking space using the key points including the two start points and the two end points (Fig. 6, the marking lines, which comprise key points as shown above, are shown in relation to the vehicle which is understood as a location).
Regarding claim 13, Kuwahara in view of Lee discloses the subject matter of claim 12.
Kuwahara further discloses:
The method of claim 12, wherein the obtaining of the angle information includes obtaining a first angle information formed by a first parking line connecting a first start point among the two start points and a first end point among the two end points and the entrance line ([0114] and Fig. 9, calculate the angle theta between the parking baseline, understood as the entrance line, and the mark line. Theta associated with marking line 151 is understood as a first angle) and obtaining a second angle information formed by a second parking line connecting a second start point among the two start points and a second end point among the two end points and the entrance line ([0114] and Fig. 9, calculate the angle theta between the parking baseline, understood as the entrance line, and the mark line. Theta associated with marking line 152 is understood as a second angle).
Regarding claim 17, Kuwahara in view of Lee discloses the subject matter of claim 11.
Kuwahara further discloses:
The method of claim 11, wherein the recognizing of the location of the parking space includes recognizing the type of the parking space by recognizing the parking space through restoring at least a portion of a parking line that has been lost, and in a situation that the at least a portion of the parking line of the parking space is lost ([0083] and Fig. 7, the lines P and Q are extended virtually which is understood as restoring a portion of a parking line).
Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Kuwahara (US 20200290600 A1) in view of Lee (US 20180025508 A1) in further view of Murayama (EP 4202865 A1).
Regarding claim 18, Kuwahara in view of Lee discloses the subject matter of claim 11.
Kuwahara in view of Lee does not disclose expressly recognizing the location of the parking space by recognizing the virtual parking space using surrounding information including information on a parked vehicle.
Murayama discloses:
The method of claim 11, wherein the recognizing of the location of the parking space includes recognizing a type of a virtual parking space ([0039] the parking type is determined) by recognizing the virtual parking space using surrounding information including information on a parked vehicle ([0039] and Fig. 2, the type of parking is determined based on other vehicles).
Murayama is combinable with Kuwahara in view of Lee because it is in the same field of endeavor of parking assistance (Murayama, [0001]).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the parking space recognition based on parked vehicles of Murayama with the invention of Kuwahara in view of Lee.
The motivation for doing so would have been "In this way, for each of the plurality of regions RN, the type determining portion 135 determines, as the parking type PK, the parking type PK with the greatest number of vehicles for each of the plurality of regions RN, enabling the parking type PK to be determined correctly for each of the plurality of regions RN" (Murayama, [0077]).
Therefore, it would have been obvious to combine Murayama with Kuwahara in view of Lee to obtain the invention as specified in claim 18.
Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Kuwahara (US 20200290600 A1) in view of Lee (US 20180025508 A1) in further view of Murayama (EP 4202865 A1) and Ko Hyang Gu (KR 20140079974 A; hereafter, Ko).
Regarding claim 19, Kuwahara in view of Lee in further view of Murayama discloses the subject matter of claim 18.
Kuwahara in view of Lee in further view of Murayama does not disclose expressly that recognizing the location of the parking space includes recognizing the parking space by considering a location of a parking stopper and a location of the parked vehicle.
Ko discloses:
The method of claim 18, wherein the recognizing of the location of the parking space includes recognizing the virtual parking space by considering a location of a parking stopper (pg. 2 para. 9, the parking space is described as the space between two parked vehicles along a curb. A curb is understood as a parking stopper. Pg. 4 para. 9, the starting point of the curb is detected which is understood as the location of the parking stopper) and a location of the parked vehicle, in a situation that the parking stopper is detected (pg. 2 para. 9, the parking space is described as the space between two parked vehicles along a curb, therefore it is based on the location of a parked vehicle when a parking stopper is detected).
Ko is combinable with Kuwahara in view of Lee in further view of Murayama because it is from the related field of endeavor of detecting a parallel parking space (Ko, pg. 2 para. 4).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the detecting of a parking space by considering a parking stopper and a vehicle of Ko with the invention of Kuwahara in view of Lee in further view of Murayama.
The motivation for doing so would have been "It is possible to reduce the risk of collision with a parked vehicle or a curb, and to perform stable parking" (Ko, pg. 4 para. 11).
Therefore, it would have been obvious to combine Ko with Kuwahara in view of Lee in further view of Murayama to obtain the invention as specified in claim 19.
Claims 1-3, 7, 10, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Kuwahara (US 20200290600 A1) in view of Lee (US 20180025508 A1) in further view of Wang et al. (CN 114663529 A; hereafter, Wang).
Regarding claim 1, the claim is interpreted as invoking 35 U.S.C. 112(f). The limitations read into claim 1 under the 35 U.S.C. 112(f) interpretation are shown below in italic text. Kuwahara discloses:
An apparatus ([0036] and Fig. 1, item 10 is a parking assistance device which is understood as an apparatus) for estimating a parking space gradient, the apparatus comprising: a parking space detection module A processor with instructions ([0051] a processor executes a program which is understood as instructions);
Receive an image of a parking space from a camera ([0042] a camera captures images);
Detect a parking space in the image ([0108] marking lines are detected which is understood as detecting a parking space);
Recognize the type of parking space based on the image ([0114]-[0115] the type of parking space is determined);
Detect key points of the parking space in the image ([0077] and Fig. 7 a marking line 151 has endpoints S1 and T1 and marking line 152 has endpoints S2 and T2 which are understood as keypoints);
Recognize the location of the parking space based on the key points (Fig. 6, the marking lines, which comprise key points as shown above, are shown in relation to the vehicle which is understood as a location);
Wherein detecting the key points comprises detecting two start points from the entrance line of the parking space in the image ([0077] and Fig. 7, S1 and S2 are understood as two start points at the entrance line);
and detecting two end points from the end line of the parking space ([0077] and Fig. 7, T1 and T2 are understood as two end points at the end line), recognizing the two start points and the two end points as key points ([0078] the process uses the points for computation which is understood as recognizing them as keypoints);
configured to recognize a type of a parking space ([0114]-[0115] the type of parking space is determined) based on an image captured by an image capturing device ([0042] a camera captures images) and recognize a location of the parking space by detecting a plurality of key points corresponding to the parking space ([0108] marking lines are detected. [0077] and Fig. 7 a marking line 151 has endpoints S1 and T1 and marking line 152 has endpoints S2 and T2 which are understood as keypoints. Therefore, marking lines are understood to include keypoints. Fig. 6, the marking lines are shown in relation to the vehicle which is understood as a location);
and a gradient estimation module Obtain angle information formed by key points in the type of the parking space recognized by the parking space detection module ([0113] "calculates the angle θ obtained between the computed parking baseline Z and the plurality of marking lines". As the applicant shows the key points in the corners of the parking space in Fig. 2, angles between the entrance line and marking lines are understood as angles formed by the key points);
configured to obtain angle information formed by the key points in the type of the parking space ([0113] "calculates the angle θ obtained between the computed parking baseline Z and the plurality of marking lines". As the applicant shows the key points in the corners of the parking space in Fig. 2, angles between the entrance line and marking lines are understood as angles formed by the key points);
Kuwahara does not disclose expressly estimating a rotation matrix based on estimation conditions and estimating a gradient of the parking space using the estimated rotation matrix and a predetermined reference rotation matrix.
Lee discloses:
a gradient estimation module A processor with instructions (claim 5, the method is performed by an apparatus including a processor);
Estimate a rotation matrix ([0059] a rotation matrix is determined) based on preset estimation conditions based on the angle information and the type of parking space ([0061] the rotation matrix is estimated based on detected feature point patterns, i.e. features of the road such as lines, see [0039]. [0035] and Fig. 2B, correcting the view is shown. In order to convert the image view from the left side of Fig. 2B to the right side of Fig. 2B, a final desired or expected shape must be known. This expected shape, the right side of Fig. 2B, is understood as being indicative of the type of space that is previously set according to angle information (see how the angles are all right angles). Notice also how Fig. 2B discloses a conversion due to type similar to the applicant's Fig. 3B);
Estimate the gradient of the parking space ([0072] a slope of the ground surface is estimated. The slope is understood as a gradient of the parking space) based on the estimated rotation matrix and a predetermined reference rotation matrix ([0073] the slope of the ground is estimated based on the attitude angle of the camera, i.e. a predetermined rotation matrix, and a rotation matrix of the camera against the ground, i.e. the estimated rotation matrix);
estimate a rotation matrix ([0059] a rotation matrix is determined) based on estimation conditions previously set according to the angle information and the type of the parking space ([0061] the rotation matrix is estimated based on detected feature point patterns, i.e. features of the road such as lines, see [0039]. [0035] and Fig. 2B, correcting the view is shown. In order to convert the image view from the left side of Fig. 2B to the right side of Fig. 2B, a final desired or expected shape must be known. This expected shape, the right side of Fig. 2B, is understood as being indicative of the type of space that is previously set according to angle information (see how the angles are all right angles). Notice also how Fig. 2B discloses a conversion due to type similar to the applicant's Fig. 3B), and estimate a gradient of the parking space ([0072] a slope of the ground surface is estimated. The slope is understood as a gradient of the parking space) using the estimated rotation matrix and a predetermined reference rotation matrix ([0073] the slope of the ground is estimated based on the attitude angle of the camera, i.e. a predetermined rotation matrix, and a rotation matrix of the camera against the ground, i.e. the estimated rotation matrix).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the gradient estimation by rotation estimation as disclosed by Lee with the invention of Kuwahara.
The motivation for doing so would have been "When the attitude angle of the camera is estimated in this way, it is possible to process the acquired image to generate an around view (S970) and obtain a more accurate around view image in consideration of the ground slope" (Lee [0077]).
Therefore, it would have been obvious to combine Lee with Kuwahara.
Kuwahara in view of Lee does not disclose expressly that the reference rotation matrix is for a case where there is no gradient of the road surface and that estimating the gradient is based on a difference between the estimated rotation matrix and the predetermined reference rotation matrix.
Wang discloses:
a gradient estimation module Wherein the reference rotation matrix is a rotation matrix for a case where there is no gradient of the road surface (pg. 10 para. 4, a second rotation matrix is between a bird’s eye view and the world coordinate system. Referring to the world coordinate system is understood as a case when there is no gradient);
Wherein estimating the gradient of the parking space based on the estimated rotation matrix and a predetermined reference rotation matrix comprises estimating the gradient of the parking space based on a difference between the estimated rotation matrix and the predetermined reference rotation matrix (for the purpose of examination, the examiner understands "difference between the estimated rotation matrix and the reference rotation matrix" to mean a comparison between the matrices. Pg. 10 para. 5, the pitch and yaw of the outer reference, understood as the gradient of the parking space, is determined based on the estimated rotation matrix and the reference rotation matrix which is understood as a comparison);
Wang is combinable with Kuwahara in view of Lee because it is from the same field of endeavor of automatic driving (Wang, pg. 2 para. 1).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the calculation of gradient based on the difference between the reference rotation matrix and the estimated rotation matrix of Wang with Kuwahara in view of Lee.
The motivation for doing so would have been to "simply and directly determine the pitching angle and yaw angle out of the reference" (Wang, pg. 10 para. 6).
Therefore, it would have been obvious to combine Wang with Kuwahara in view of Lee to obtain the invention as specified in claim 1.
Regarding claim 2, Kuwahara in view of Lee in further view of Wang discloses the subject matter of claim 1.
Kuwahara further discloses:
The apparatus of claim 1, wherein the parking space detection module is configured to detect two start points from an entrance line of the parking space ([0077] and Fig. 7, S1 and S2 are understood as two start points at the entrance line) and two end points from an end line of the parking space as the key points ([0077] and Fig. 7, T1 and T2 are understood as two end points at the end line), and recognizing the location of the parking space using the key points including the two start points and the two end points (Fig. 6, the marking lines, which comprise key points as shown above, are shown in relation to the vehicle which is understood as a location).
Regarding claim 3, Kuwahara in view of Lee in further view of Wang discloses the subject matter of claim 2.
Kuwahara further discloses:
The apparatus of claim 2, wherein the gradient estimation module is configured to obtain a first angle information formed by a first parking line connecting a first start point among the two start points and a first end point among the two end points and the entrance line ([0114] and Fig. 9, calculate the angle theta between the parking baseline, understood as the entrance line, and the mark line. Theta associated with marking line 151 is understood as a first angle) and obtaining a second angle information formed by a second parking line connecting a second start point among the two start points and a second end point among the two end points and the entrance line ([0114] and Fig. 9, calculate the angle theta between the parking baseline, understood as the entrance line, and the mark line. Theta associated with marking line 152 is understood as a second angle).
Regarding claim 7, Kuwahara in view of Lee in further view of Wang discloses the subject matter of claim 1.
Kuwahara further discloses:
The apparatus of claim 1, wherein the parking space detection module is configured to recognize the type of the parking space by recognizing the parking space through restoring at least a portion of a parking line that has been lost, and in a situation that the at least a portion of the parking line of the parking space is lost ([0083] and Fig. 7, the lines P and Q are extended virtually which is understood as restoring a portion of a parking line).
Regarding claim 10, Kuwahara in view of Lee in further view of Wang discloses the subject matter of claim 1.
Kuwahara in view of Lee does not disclose expressly that the reference rotation matrix is for a case where there is no gradient of the road surface and that estimating the gradient is based on a difference between the estimated rotation matrix and the predetermined reference rotation matrix.
Wang discloses:
The apparatus of claim 1, wherein the reference rotation matrix is a rotation matrix for a case where there is no gradient of a road surface (pg. 10 para. 4, a second rotation matrix is between a bird’s eye view and the world coordinate system. Referring to the world coordinate system is understood as a case when there is no gradient), and wherein the gradient estimation module is configured for estimating the gradient of the parking space based on a difference between the estimated rotation matrix and the reference rotation matrix (for the purpose of examination, the examiner understands "difference between the estimated rotation matrix and the reference rotation matrix" to mean a comparison between the matrices. Pg. 10 para. 5, the pitch and yaw of the outer reference, understood as the gradient of the parking space, is determined based on the estimated rotation matrix and the reference rotation matrix which is understood as a comparison).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the calculation of gradient based on the difference between the reference rotation matrix and the estimated rotation matrix of Wang with Kuwahara in view of Lee.
The motivation for doing so would have been to "simply and directly determine the pitching angle and yaw angle out of the reference" (Wang, pg. 10 para. 6).
Therefore, it would have been obvious to combine Wang with Kuwahara in view of Lee to obtain the invention as specified in claim 10.
Regarding claim 20, Kuwahara in view of Lee discloses the subject matter of claim 11.
Kuwahara in view of Lee does not disclose expressly that the reference rotation matrix is for a case where there is no gradient of the road surface and that estimating the gradient is based on a difference between the estimated rotation matrix and the predetermined reference rotation matrix.
Wang discloses:
The method of claim 11, wherein the reference rotation matrix is a rotation matrix for a case where there is no gradient of a road surface (pg. 10 para. 4, a second rotation matrix is between a bird’s eye view and the world coordinate system. Referring to the world coordinate system is understood as a case when there is no gradient), and wherein the estimating of the gradient of the parking space includes estimating the gradient of the parking space based on a difference between the estimated rotation matrix and the reference rotation matrix (for the purpose of examination, the examiner understands "difference between the estimated rotation matrix and the reference rotation matrix" to mean a comparison between the matrices. Pg. 10 para. 5, the pitch and yaw of the outer reference, understood as the gradient of the parking space, is determined based on the estimated rotation matrix and the reference rotation matrix which is understood as a comparison).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the calculation of the gradient based on the difference between the reference rotation matrix and the estimated rotation matrix of Wang with Kuwahara in view of Lee.
The motivation for doing so would have been to "simply and directly determine the pitching angle and yaw angle out of the reference" (Wang, pg. 10 para. 6).
Therefore, it would have been obvious to combine Wang with Kuwahara in view of Lee to obtain the invention as specified in claim 20.
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Kuwahara (US 20200290600 A1) in view of Lee (US 20180025508 A1) in further view of Wang et al. (CN 114663529 A; hereafter, Wang) and of Murayama (EP 4202865 A1).
Regarding claim 8, Kuwahara in view of Lee in further view of Wang discloses the subject matter of claim 1.
Kuwahara in view of Lee does not disclose expressly recognizing a type of a virtual parking space by recognizing the virtual parking space using surrounding information including information on a parked vehicle.
Murayama discloses:
The apparatus of claim 1, wherein the parking space detection module is configured to recognize a type of a virtual parking space ([0039] the parking type is determined) by recognizing the virtual parking space using surrounding information including information on a parked vehicle ([0039] and Fig. 2, the type of parking is determined based on other vehicles).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the parking space recognition based on parked vehicles of Murayama with the invention of Kuwahara in view of Lee in further view of Wang.
The motivation for doing so would have been "In this way, for each of the plurality of regions RN, the type determining portion 135 determines, as the parking type PK, the parking type PK with the greatest number of vehicles for each of the plurality of regions RN, enabling the parking type PK to be determined correctly for each of the plurality of regions RN" (Murayama, [0077]).
Therefore, it would have been obvious to combine Murayama with Kuwahara in view of Lee in further view of Wang to obtain the invention as specified in claim 8.
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Kuwahara (US 20200290600 A1) in view of Lee (US 20180025508 A1) in further view of Wang et al. (CN 114663529 A; hereafter, Wang) and of Murayama (EP 4202865 A1) and of Ko Hyang Gu (KR 20140079974 A; hereafter, Ko).
Regarding claim 9, Kuwahara in view of Lee in further view of Wang and of Murayama discloses the subject matter of claim 8.
Kuwahara in view of Lee in further view of Wang and of Murayama does not disclose expressly recognizing the virtual parking space by considering a location of a parking stopper and a location of the parked vehicle in a situation that the parking stopper is detected.
Ko discloses:
The apparatus of claim 8, wherein the parking space detection module is configured to recognize the virtual parking space by considering a location of a parking stopper (pg. 2 para. 9, the parking space is described as the space between two parked vehicles along a curb. A curb is understood as a parking stopper. Pg. 4 para. 9, the starting point of the curb is detected which is understood as the location of the parking stopper) and a location of the parked vehicle, in a situation that the parking stopper is detected (pg. 2 para. 9, the parking space is described as the space between two parked vehicles along a curb, therefore it is based on the location of a parked vehicle when a parking stopper is detected).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the detecting of a parking space by considering a parking stopper and a parked vehicle of Ko with the invention of Kuwahara in view of Lee in further view of Wang and of Murayama.
The motivation for doing so would have been "It is possible to reduce the risk of collision with a parked vehicle or a curb, and to perform stable parking" (Ko, pg. 4 para. 11).
Therefore, it would have been obvious to combine Ko with Kuwahara in view of Lee in further view of Wang and of Murayama to obtain the invention as specified in claim 9.
Allowable Subject Matter
Claims 4-6 and 14-16 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:
Regarding claim 4, the closest prior art, Kuwahara (US 20200290600 A1), discloses that the first and second angle information converge to a right angle in a situation that the type of the parking space is orthogonal or parallel, or that the first and second angle information are an obtuse angle and an acute angle, or an acute angle and an obtuse angle. Lee (US 20180025508 A1) discloses estimating a rotation matrix. The closest prior art does not disclose or reasonably suggest estimating a rotation matrix so that a difference between the first angle information and the second angle information converges to a preset minimum value range.
The claim as a whole is found non-obvious over the prior art, including:
wherein the gradient estimation module is configured for estimating the rotation matrix so that a difference between the first angle information and the second angle information converges to a preset minimum value range
Regarding claim 5, the closest prior art, Kuwahara (US 20200290600 A1), discloses that the first angle information and the second angle information are in a preset angle range in a situation that the type of the parking space is oblique, or that the first angle information and the second angle information are an acute angle and an acute angle, or an obtuse angle and an obtuse angle. Lee (US 20180025508 A1) discloses estimating a rotation matrix. The closest prior art does not disclose or reasonably suggest estimating a rotation matrix so that a difference between the first angle information and the second angle information converges to a preset minimum value range.
The claim as a whole is found non-obvious over the prior art, including:
wherein the gradient estimation module is configured for estimating the rotation matrix so that a difference between the first angle information and the second angle information converges to a preset minimum value range
Claim 6 is dependent on claim 5 and likewise contains allowable subject matter.
Regarding claim 14, the closest prior art, Kuwahara (US 20200290600 A1), discloses that the first and second angle information converge to a right angle in a situation that the type of the parking space is orthogonal or parallel, or that the first and second angle information are an obtuse angle and an acute angle, or an acute angle and an obtuse angle. Lee (US 20180025508 A1) discloses estimating a rotation matrix. The closest prior art does not disclose or reasonably suggest estimating a rotation matrix so that a difference between the first angle information and the second angle information converges to a preset minimum value range.
The claim as a whole is found non-obvious over the prior art, including:
wherein the estimating of the rotation matrix includes estimating the rotation matrix so that a difference between the first angle information and the second angle information converges to a preset minimum value range
Regarding claim 15, the closest prior art, Kuwahara (US 20200290600 A1), discloses that the first angle information and the second angle information are in a preset angle range in a situation that the type of the parking space is oblique, or that the first angle information and the second angle information are an acute angle and an acute angle, or an obtuse angle and an obtuse angle. Lee (US 20180025508 A1) discloses estimating a rotation matrix. The closest prior art does not disclose or reasonably suggest estimating a rotation matrix so that a difference between the first angle information and the second angle information converges to a preset minimum value range.
The claim as a whole is found non-obvious over the prior art, including:
wherein the estimating of the rotation matrix includes estimating the rotation matrix so that a difference between the first angle information and the second angle information converges to a preset minimum value range
Claim 16 is dependent on claim 15 and likewise contains allowable subject matter.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US 20180365858 A1, Kim et al., discloses a system which determines a rotation matrix of a road based on straight and parallel lines on the road.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSHUA B CROCKETT whose telephone number is (571)270-7989. The examiner can normally be reached Monday-Thursday 8am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, John M Villecco can be reached at (571) 272-7319. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JOSHUA B. CROCKETT/Examiner, Art Unit 2661
/JOHN VILLECCO/Supervisory Patent Examiner, Art Unit 2661