Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This action is in response to the applicant's communication filed on 12/18/2023. By virtue of this communication, claims 1-20 filed on 12/18/2023 are currently pending in the instant application.
Information Disclosure Statement
The information disclosure statement (IDS) on form PTO-1449, filed on 12/18/2023, is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosed therein has been considered by the examiner.
Drawings
The drawings received on 12/18/2023 have been reviewed by the Examiner and are acceptable.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-3, 5, and 11-20 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang et al. (US 2021/0215489).
As per claim 1, Zhang discloses “A computer comprising a processor and a memory, the memory storing instructions executable by the processor to:”
“detect static environmental features in a camera image from a camera of a vehicle;” (Zhang, ¶[0008] discloses receiving, by an autonomous vehicle, from an imaging system mounted on the vehicle, an image frame, the image frame depicting a portion of the local area surrounding the vehicle, and receiving an initial pose of the autonomous vehicle. ¶[0088] discloses detecting edges from captured images (e.g., live camera feeds). ¶[0094] discloses FIG. 9 illustrates an example of identified edge points of a captured image. ¶[0120] discloses that the system detects edges within the processed portion of the images, and may compute line segments corresponding to the detected edges. In addition, the localization system computes an intensity gradient (which may include both magnitude and orientation) for pixels on the detected edges. In some embodiments, lane line detection (or other types of object detection) may be performed on the image to identify edges corresponding to lane lines (or other types of features).)
“generate a distance transform image of the static environmental features as detected in the camera image, in which pixel values of respective pixels in the distance transform image indicate respective pixel distances of the respective pixels from the static environmental features in the distance transform image;” (Zhang, ¶[0132] discloses the localization system identifies edges in captured images and generates an edge map. In some embodiments, the edge map corresponds to a binary image corresponding to at least a portion of a captured image, in which a value of 1 indicates the corresponding pixel of the captured image is on an identified edge, and a value of 0 indicates that the corresponding pixel is not on an identified edge. A distance transform is applied on the edge map. ¶[0133] discloses Edgels loaded from the OMap (e.g., based upon the initial pose) are projected on the generated distance transform of the binary image. For example, as illustrated in FIG. 18, the edgels 1802 are projected onto each of the distance transforms of the binary maps. The localization system optimizes the pose by determining a transformation that minimizes a value of the distance transform at the pixels corresponding to the projected edgel, where the values indicate, for each edgel, a distance of the pixel corresponding to the edgel to a nearest edge as indicated by the binary map. )
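For illustration only (not part of Zhang's disclosure or the prosecution record), the distance-transform operation described in the cited passages — converting a binary edge map into an image whose pixel values are distances to the nearest edge pixel — can be sketched as follows. All array sizes, values, and names are hypothetical:

```python
import numpy as np

# Hypothetical 5x5 binary edge map: 1 = pixel lies on a detected edge.
edge_map = np.zeros((5, 5), dtype=int)
edge_map[:, 2] = 1  # a single vertical edge, e.g. a lane line

# Distance transform: each pixel's value is the Euclidean distance
# to the nearest edge pixel (brute force, for clarity).
ys, xs = np.nonzero(edge_map)
edge_pts = np.stack([ys, xs], axis=1)  # (N, 2) edge-pixel coordinates

dt = np.zeros(edge_map.shape)
for y in range(edge_map.shape[0]):
    for x in range(edge_map.shape[1]):
        dt[y, x] = np.min(np.hypot(edge_pts[:, 0] - y, edge_pts[:, 1] - x))

print(dt[0])  # [2. 1. 0. 1. 2.] -- distances grow away from the edge column
```

In practice a linear-time distance transform (e.g., Felzenszwalb–Huttenlocher) would replace the brute-force loop; the output is the same.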
“and determine a pose of the vehicle based on a comparison of map data indicating the static environmental features with the distance transform image.” (Zhang, ¶[0087] discloses that the system extracts prominent edges from captured images, which are quantized into points called edgels. The 3D locations of the edgels (as well as additional information such as gradient information) are computed using the captured images and depth information, and saved as part of a stored map (e.g., an OMap). ¶[0088] discloses that during localization, a localization system (e.g., the localization API 250 of FIG. 2) loads edgels from the map located near an estimated location, detects edges from captured images (e.g., live camera feeds), and optimizes the pose of the vehicle by aligning the edgels with detected edges.)
It would have been obvious, before the effective filing date of the claimed invention, to one of ordinary skill in the art to combine the various embodiments of Zhang, wherein the combination would allow the system of the claim to include a distance transform image in completing the steps to generate pixel-value distances. One skilled in the art would have been motivated to modify Zhang in this manner in order to utilize the additional step of generating distances of pixel values of environmental features for determining a pose of the vehicle. Therefore, one of ordinary skill in the art would have been capable of combining the elements as claimed by known methods, and, in combination, each element merely performs the same function as it does separately. It is for at least the aforementioned reasons that the Examiner has reached a conclusion of obviousness with respect to claim 1.
Claim 17 has been analyzed and is rejected for the reasons indicated in claim 1 above.
As per claim 2, the computer of claim 1: Zhang further discloses “wherein the instructions further include instructions to: calculate a value of a cost function based on the map data indicating the static environmental features and the distance transform image; and determine the pose of the vehicle that minimizes the value of the cost function.” (Zhang, ¶[0133] discloses that the localization system optimizes the pose by determining a transformation that minimizes a value of the distance transform at the pixels corresponding to the projected edgel, where the values indicate, for each edgel, a distance of the pixel corresponding to the edgel to a nearest edge as indicated by the binary map. For example, the localization system may optimize the pose by minimizing the following cost function: Σloss(DT(P(e,T))).)
Claim 18 has been analyzed and is rejected for the reasons indicated in claim 2 above.
As per claim 3, the computer of claim 2: Zhang further discloses “wherein the instructions further include instructions to: project the map data indicating the static environmental features onto the distance transform image; and calculate the value of the cost function based on the pixel values of the pixels onto which the map data was projected.” (Zhang, ¶[0133] discloses that edgels loaded from the OMap (e.g., based upon the initial pose) are projected on the generated distance transform of the binary image. The localization system optimizes the pose by determining a transformation that minimizes a value of the distance transform at the pixels corresponding to the projected edgel, where the values indicate, for each edgel, a distance of the pixel corresponding to the edgel to a nearest edge as indicated by the binary map. For example, the localization system may optimize the pose by minimizing the following cost function:
Σloss(DT(P(e,T))). See ¶[0102].)
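For illustration only (not part of the record), the cost Σloss(DT(P(e,T))) cited above amounts to projecting map edgels under a candidate transformation, reading the distance-transform value at each projected pixel, and summing; the pose that drives the sum to zero places every projected edgel on a detected edge. A toy one-parameter sketch, with all values hypothetical:

```python
import numpy as np

# Toy 5x5 distance transform with a vertical edge at column 2:
# each row reads [2, 1, 0, 1, 2].
dt = np.tile(np.abs(np.arange(5) - 2).astype(float), (5, 1))

# Map edgels, deliberately misaligned by one column from the edge.
edgels = np.array([[0, 1], [2, 1], [4, 1]])  # (row, col) pairs

def cost(shift_x):
    """Sum of distance-transform values at edgels shifted by shift_x columns."""
    cols = np.clip(edgels[:, 1] + shift_x, 0, 4)
    return dt[edgels[:, 0], cols].sum()

# Search candidate horizontal shifts (a stand-in for the full pose T).
best = min(range(-1, 3), key=cost)
print(best, cost(best))  # 1 0.0 -- shifting by +1 lands all edgels on the edge
```

A real system optimizes over a full rigid transform (and applies a robust loss), but the structure of the objective is the same.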
Claim 19 has been analyzed and is rejected for the reasons indicated in claim 3 above.
As per claim 5, the computer of claim 1: Zhang further discloses “wherein the static environmental features include lane lines.” (Zhang, ¶[0095]-[0096] discloses that a gradient direction vector of an edgel may be computed based upon the gradient of the corresponding edge pixel of a captured image. See fig. 12. ¶[0101] discloses that a first category of edgels includes edgels corresponding to features that are permanent and stationary. As used herein, a feature may be considered “permanent” if the feature is not expected to change in shape for at least a threshold amount of time. Permanent and stationary features under this category may include lane line markings and curb falls. Due to the permanent and stationary nature of these features, edgels corresponding to such features are typically suitable for performing localization. ¶[0120] discloses that the system detects edges within the processed portion of the images, and may compute line segments corresponding to the detected edges. In addition, the localization system computes an intensity gradient (which may include both magnitude and orientation) for pixels on the detected edges. In some embodiments, lane line detection (or other types of object detection) may be performed on the image to identify edges corresponding to lane lines (or other types of features).)
As per claim 11, the computer of claim 1: Zhang further discloses “wherein the distance transform image is an image-plane distance transform image from a perspective of the camera.” (Zhang, Figures 17 and 18 show the distance transform image; see related paragraphs [0132]-[0133].)
As per claim 12, the computer of claim 11: Zhang further discloses “wherein the static environmental features include linearly vertical features.” (Zhang, ¶[0071] discloses that examples of road signs described in an HD map include stop signs, traffic lights, speed limits, one-way, do-not-enter, yield (vehicle, pedestrian, animal), and so on.)
As per claim 13, the computer of claim 1: Zhang further discloses wherein the instructions further include instructions to: “generate a binary image depicting the static environmental features; and generate the distance transform image based on the binary image from a same perspective as the binary image.” (Zhang, ¶[0132] discloses that a distance transform is applied on the edge map. FIG. 17 shows an image provided as input for computing a distance transform, according to an embodiment. FIG. 18 shows the result of the distance transform on a binary image edge map derived from captured images. ¶[0133] discloses that edgels loaded from the OMap (e.g., based upon the initial pose) are projected on the generated distance transform of the binary image. For example, as illustrated in FIG. 18, the edgels 1802 are projected onto each of the distance transforms of the binary maps.)
Claim 20 has been analyzed and is rejected for the reasons indicated in claim 13 above.
As per claim 14, the computer of claim 13: Zhang further discloses “wherein the binary image depicts only the static environmental features.” (Zhang, ¶[0132] discloses a binary image edge map (lanes or curbs).)
As per claim 15, the computer of claim 1: Zhang further discloses “wherein the pose includes two horizontal spatial dimensions and a heading.” (Zhang, ¶[0131] discloses that the localization system needs to determine x, y, and yaw components for the transformation.)
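For illustration only (not part of Zhang's disclosure), a pose consisting of two horizontal spatial dimensions and a heading corresponds to a planar rigid transform (x, y, yaw); applying it maps a point from the vehicle frame into the map frame. Function and variable names here are hypothetical:

```python
import math

def apply_pose(px, py, x, y, yaw):
    """Transform point (px, py) from the vehicle frame to the map frame
    using a planar pose (x, y, yaw)."""
    c, s = math.cos(yaw), math.sin(yaw)
    return (c * px - s * py + x, s * px + c * py + y)

# A point 1 m ahead of a vehicle heading 90 degrees left ends up 1 m
# along the map's y-axis (up to floating-point error).
print(apply_pose(1.0, 0.0, 0.0, 0.0, math.pi / 2))
```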
As per claim 16, the computer of claim 1: Zhang further discloses “wherein the instructions further include instructions to actuate a component of the vehicle based on the pose of the vehicle.” (Zhang, ¶[0057] discloses, for example, that if the vehicle is currently at point A and the plan specifies that the vehicle should next go to a nearby point B, the control module 225 determines the control signals for the controls 130 that would cause the vehicle to go from point A to point B in a safe and smooth way, for example, without taking any sharp turns or a zig-zag path from point A to point B. The path taken by the vehicle to go from point A to point B may depend on the current speed and direction of the vehicle as well as the location of point B with respect to point A. For example, if the current speed of the vehicle is high, the vehicle may take a wider turn compared to a vehicle driving slowly. ¶[0078] discloses that once the vehicle crosses the boundary 620 of the buffer at location 650c, the vehicle computing system 120 switches the current geographical region of the vehicle to geographical region 610b from 610a. The use of a buffer prevents rapid switching of the current geographical region of a vehicle as a result of the vehicle travelling along a route that closely tracks a boundary of a geographical region.)
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Zhang et al. (US 2021/0215489) in view of Adachi et al. (US 2020/0233095).
As per claim 4, the computer of claim 2: Zhang further discloses “initialize the first pose at the GNSS pose for minimizing the value of the cost function.” (Zhang, ¶[0111] discloses that during localization, the localization system of the vehicle first obtains an initial estimate of a pose of the vehicle (also referred to as an “initial pose”). In some embodiments, the initial pose may be determined using a GPS navigation system, an IMU system…. ¶[0129] discloses that the edgels may have been projected based upon the initial pose. ¶[0125] discloses that the localization system may attempt to find a transformation that minimizes an aggregate distance between the set of projected edgels on the image and their corresponding edge pixels. In some embodiments, the localization system attempts to find a transform to minimize the following energy function. ¶[0133] discloses that the localization system may optimize the pose by minimizing the following cost function: Σloss(DT(P(e,T))).)
However, Zhang does not explicitly disclose the following, which would have been obvious in view of Adachi from a similar field of endeavor: “wherein the pose is a first pose, and the instructions further include instructions to: determine a global navigation satellite system (GNSS) pose based on GNSS data.” (Adachi, ¶[0011] discloses that the enhanced GNSS position estimates are used to initialize the localization algorithms. ¶[0085] discloses that determination of accurate vehicle location is an iterative process that initializes the vehicle position to a value based on the accurate GNSS location and then iteratively improves the vehicle location value based on HD map data and sensor data. ¶[0090] discloses that the GNSS data processing module 290 receives raw GNSS data from the GNSS receiver 950. The GNSS data processing module 290 initializes 1020 the location of the vehicle based on raw GNSS data. The GNSS data processing module 290 may initialize using raw GNSS or SBAS-enhanced GNSS data, for example, using a GPS system.)
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine Adachi's technique of vehicle global navigation and map-based localization with the technique of Zhang to provide the known and expected uses and benefits of Adachi's technique over the autonomous vehicle localization technique of Zhang. The proposed combination would have constituted a mere arrangement of old elements, with each performing its known function, and the combination yielding no more than one would expect from such an arrangement.
Therefore, it would have been obvious to a person of ordinary skill in the art to incorporate Adachi into Zhang in order to provide accurate and reliable positioning (refer to Adachi, paragraph [0007]).
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Zhang et al. (US 2021/0215489) in view of Atherton et al. (US 2021/0240195).
As per claim 6, the computer of claim 1: Zhang does not explicitly disclose the following, which would have been obvious in view of Atherton from a similar field of endeavor: “wherein the distance transform image is an overhead distance transform image from an overhead perspective.” (Atherton, Figure 4, ¶[0036] discloses that the environmental features can be detected from a bird's-eye overhead perspective. ¶[0042] and ¶[0045] disclose that FIGS. 6A and 6B illustrate (overhead view) binary image 520 overlaid on top of binary map 404 to illustrate alignment of binary image 520 and binary map 404 to determine a position and/or orientation of autonomous ground vehicle 110. The Euclidean distances can be obtained, for example, by performing a distance transform to obtain a distance map, which can include a matrix of all Euclidean distances from each non-zero pixel in binary image 520 to the closest corresponding non-zero pixel in binary map 404. Figure 6B, distance map 600 (overhead bird's-eye view).)
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine Atherton's technique of determining the position and orientation of a vehicle with the technique of Zhang to provide the known and expected uses and benefits of Atherton's technique over the autonomous vehicle localization technique of Zhang. The proposed combination would have constituted a mere arrangement of old elements, with each performing its known function, and the combination yielding no more than one would expect from such an arrangement.
Therefore, it would have been obvious to a person of ordinary skill in the art to incorporate Atherton into Zhang in order to provide accurate and reliable positioning (refer to Atherton, paragraph [0001]).
Claims 7-9 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang et al. (US 2021/0215489) in view of Atherton et al. (US 2021/0240195), and further in view of Lu et al. (US 2018/0336697).
As per claim 7, the computer of claim 6: Zhang as modified by Atherton further discloses “wherein the pose is a first pose, and the instructions further include instructions to:” (Atherton, ¶[0046] discloses that an initial relative position of binary image 520 to binary map 404 can be based on odometry information from autonomous ground vehicle 110. This information can be utilized to determine the initial relative positioning of binary image 520 to binary map 404.)
However, Zhang as modified by Atherton does not explicitly disclose the following, which would have been obvious in view of Lu from a similar field of endeavor: “generate an image-plane distance transform image from a perspective of the camera;” (Lu, ¶[0046] discloses that at 502, the localization system 110 is initialized, as system initialization is described in detail below. At 504, time K, edges of the map elements may be detected in image Ik, obtained from the camera 120. At 505, at the same time K, the camera pose P′k may be predicted/guessed using the information of the last frame Pk-1 and odometry data Dk. ¶[0049] discloses that at 509, matching may be performed based on the 3D Map. As described above, Road Markings are represented by a small set of 3D points. From the odometry information, the camera pose P′k can be predicted at time K. As shown in FIG. 4, the small set of 3D points of Road Markings may be projected onto an image space. ¶[0052] discloses the distance transform computed from the edge of the image. For any point x, the Chamfer distance may be queried from the distance transformation by interpolation.)
“and determine a second pose based on the first pose and based on a comparison of the map data indicating the static environmental features with the image-plane distance transform image.” (Lu, ¶[0049] discloses that matching may be performed based on the 3D Map. As described above, Road Markings are represented by a small set of 3D points. From the odometry information, the camera pose P′k can be predicted at time K. As shown in FIG. 4, the small set of 3D points of Road Markings may be projected onto an image space. ¶[0050] discloses that at 510, Chamfer matching may be performed to evaluate how well the projected points determined at 509 match against the detected features at 506, to estimate a camera pose. ¶[0051] discloses that Chamfer matching essentially associates each projected point to a nearest edge pixel. The Chamfer distance can be efficiently computed from the Chamfer distance transform. ¶[0052] discloses the distance transform computed from the edge of the image. For any point, the Chamfer distance may be queried. ¶[0059] discloses optimization to minimize the cost function; ¶[0061] discloses that the optimized data is utilized to determine a camera pose estimate. ¶[0063].)
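For illustration only (not part of Lu's disclosure or the record), the cited interpolated query of a distance transform — reading the Chamfer distance at a sub-pixel projected point — can be sketched with a bilinear lookup. All grid values and names are hypothetical:

```python
import numpy as np

# Toy 3x3 distance transform: an edge runs along the rightmost column,
# so values decrease toward it.
dt = np.array([[2.0, 1.0, 0.0],
               [2.0, 1.0, 0.0],
               [2.0, 1.0, 0.0]])

def query_dt(dt, x, y):
    """Bilinearly interpolate dt at floating-point pixel (x, y)."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1 = min(x0 + 1, dt.shape[1] - 1)
    y1 = min(y0 + 1, dt.shape[0] - 1)
    fx, fy = x - x0, y - y0
    top = dt[y0, x0] * (1 - fx) + dt[y0, x1] * fx
    bot = dt[y1, x0] * (1 - fx) + dt[y1, x1] * fx
    return top * (1 - fy) + bot * fy

print(query_dt(dt, 0.5, 1.0))  # 1.5 -- halfway between columns 0 and 1
```

Because the interpolated value varies smoothly with the projected point, the cost function remains differentiable in the pose, which is what makes gradient-based pose optimization possible.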
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine Lu's technique of vehicle localization in an urban environment with the technique of Zhang as modified by Atherton to provide the known and expected uses and benefits of Lu's technique over the autonomous vehicle localization technique of Zhang as modified by Atherton. The proposed combination would have constituted a mere arrangement of old elements, with each performing its known function, and the combination yielding no more than one would expect from such an arrangement.
Therefore, it would have been obvious to a person of ordinary skill in the art to incorporate Lu into Zhang as modified by Atherton in order to provide accurate localization (refer to Lu, paragraph [0003]).
As per claim 8, the computer of claim 7: Zhang as modified by Atherton and Lu further discloses “wherein the instructions further include instructions to: calculate a value of a cost function based on the map data indicating the static environmental features and the image-plane distance transform image; and determine the second pose of the vehicle that minimizes the value of the cost function.” (Lu, ¶[0059] discloses that the optimization formulation may be performed: given Pk-1, Pk may be estimated by minimizing the cost function. ¶[0061] discloses that the optimized data may be utilized to determine a camera pose estimate. The camera pose estimate may be implemented onto a map.)
As per claim 9, the computer of claim 8: Zhang as modified by Atherton and Lu further discloses “wherein the instructions further include instructions to initialize the second pose at the first pose for minimizing the value of the cost function.” (Lu, ¶[0059] discloses that the optimization formulation may be performed: given Pk-1, Pk may be estimated by minimizing the cost function. ¶[0061] discloses that the optimized data may be utilized to determine a camera pose estimate. The camera pose estimate may be implemented onto a map. ¶[0063]-[0065].)
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Zhang et al. (US 2021/0215489) in view of Atherton et al. (US 2021/0240195) and Lu et al. (US 2018/0336697), and further in view of Beauvisage et al. (US 2023/0365154).
As per claim 10, the computer of claim 7: Zhang as modified by Atherton and Lu does not explicitly disclose the following, which would have been obvious in view of Beauvisage from a similar field of endeavor: “wherein the first pose includes only two horizontal spatial dimensions and a heading; and the second pose includes three spatial dimensions and three angular dimensions.” (Beauvisage, ¶[0039] discloses that the state of the vehicle in the context of this disclosure can be construed as having three physical states, namely the longitude, the latitude, and the heading of the vehicle. The longitude and the latitude are defined with respect to a geographical coordinate system such as the Cartesian coordinate system and indicate the longitudinal position and lateral position of the vehicle on the road portion. The heading of the vehicle indicates the compass direction of the vehicle with respect to the geographical north 120 and is typically understood as an angle (θ) between a vector 100 of a forward-orientation of the vehicle and a center line 110 extending from the vehicle towards the geographical north. The state of the vehicle may also be referred to as a pose of the vehicle. The pose is in some embodiments represented by a 2D Cartesian position and a yaw of the vehicle (x, y, θ). However, in some embodiments, the pose is a 6D pose where the position is defined by a 3D Cartesian position and the orientation is defined by a roll, pitch, and yaw of the vehicle.)
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine Beauvisage's technique of vehicle localization in an urban environment with the technique of Zhang as modified by Atherton and Lu to provide the known and expected uses and benefits of Beauvisage's technique over the autonomous vehicle localization technique of Zhang as modified by Atherton and Lu. The proposed combination would have constituted a mere arrangement of old elements, with each performing its known function, and the combination yielding no more than one would expect from such an arrangement.
Therefore, it would have been obvious to a person of ordinary skill in the art to incorporate Beauvisage into Zhang as modified by Atherton and Lu in order to accurately determine the state of a vehicle on the road (refer to Beauvisage, paragraph [0002]).
Contact
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHAGHAYEGH AZIMA whose telephone number is (571)272-1459. The examiner can normally be reached Monday-Friday, 9:30-6:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vincent Rudolph can be reached at (571)272-8243. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SHAGHAYEGH AZIMA/Examiner, Art Unit 2671