Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
This FINAL communication is in response to application No. 18/282,869 filed on 12/09/2025. Claims 1-2 are currently pending and have been examined. Claims 1-2 are rejected as follows.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 09/19/2023 is being considered by the examiner.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Response to Arguments
Applicant’s amendments and/or arguments with respect to the rejections of the claims under 35 USC 112(b) and 35 USC 101 as set forth in the Office action of 09/09/2025 have been considered and are PERSUASIVE. Therefore, the rejections of the claims under 35 USC 112(b) and 35 USC 101 set forth in the Office action of 09/09/2025 have been withdrawn.
Applicant’s amendment and/or arguments with respect to the rejection of claims under 35 USC 103 as set forth in the office action of 09/09/2025 have been considered and:
Applicant’s arguments that Gupta does not teach a camera facing the ground, or the recited position of the camera’s optical axis, have been considered and are PERSUASIVE. However, the search has been updated.
Applicant’s arguments that Gupta does not teach the positional information set associated with the coordinate point have been considered and are NOT PERSUASIVE. As described in the specification, the positional information set determines the position of the vehicle based on the displacement of the captured image relative to a stored image, which is explicitly taught by Gupta in paragraphs [0054] and [0055].
Furthermore, Applicant’s amendments and/or arguments with respect to the rejection of claims 1-2 under 35 USC 103 as set forth in the office action of 09/09/2025 have been considered but are moot because the new ground(s) of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2 are rejected under 35 U.S.C. 103 as being unpatentable over Gupta (US 20120050489 A1) in view of Zavodny (US 11566902 B) and Fridman (BR 112019000918 B1).
Regarding claim 1, Gupta teaches a storage device configured to store a map data set that associates a positional information set with each of multiple map image data sets obtained by capturing images of the road surface in advance; and (see at least ¶ [0027, 0048, 0055]; "The map imagery module 130 may store map images locally…Each retrieved map image includes location information describing the location associated with the map image…Once the displacement between the selected map image and the retrieved vehicle image is determined, the displacement module 360 determines the location of the vehicle 100, the refined vehicle location 370. The refined vehicle location 370 is determined by adding the determined displacement to the location represented by the selected map image's location information.") Gupta describes a storage device configured to store a map data set that associates positional information with each of multiple map image data sets obtained by capturing images of the road surface in advance.
circuitry configured to obtain an image data set from an image captured by the camera and estimate a self-location using the map data set and the image data set, (see at least ¶ [0054, 0055]; "Once the displacement between the selected map image and the retrieved vehicle image is determined, the displacement module 360 determines the location of the vehicle 100, the refined vehicle location 370.") Gupta describes a module with circuitry configured to obtain an image data set from an image captured by the camera and estimate a self-location using the map data set and the image data set.
identify one of the map image data sets that corresponds to the clipped image data set by executing a matching process between the clipped image data set and at least one of the map image data sets, (see at least ¶ [0068]; "Each map image includes location information describing a location associated with the map image. 3D features are identified 620 in the retrieved vehicle image and in the retrieved map images. The 3D features are aligned 630 between the vehicle image and each retrieved map image. A map image is selected 640 based on the alignment between the 3D features in the vehicle image and each map image. For example, the map image with the most common 3D features with the vehicle image is selected.") Gupta describes identifying a map image data set that corresponds to a clipped image data set by executing a matching process between the clipped image data set and a stored map image data set.
estimate the self-location from a relative positional relationship between the identified map image data set and the clipped image data set, (see at least ¶ [0068, 0055]; "The displacement is determined 650 from the selected map image's ground truth location based on the selected map image and the vehicle image. The displacement is applied 660 to the ground truth location to determine a refined vehicle location for the vehicle. In one embodiment, the refined vehicle location is determined…Once the displacement between the selected map image and the retrieved vehicle image is determined, the displacement module 360 determines the location of the vehicle 100, the refined vehicle location 370. The refined vehicle location 370 is determined by adding the determined displacement to the location represented by the selected map image's location information") Gupta describes estimating the self-location from a relative positional relationship between the identified map image data set and the clipped image data set.
Gupta does not explicitly teach An autonomous vehicle, comprising:
a camera disposed on a bottom of the autonomous vehicle so as to face a road surface under the autonomous vehicle in order to capture images of the road surface, wherein an optical axis of the camera faces the road surface under the autonomous vehicle; wherein the circuitry is configured to obtain a clipped image data set by clipping a predetermined range from the image data set, wherein each positional information set is associated with a coordinate point representing a position of the optical axis of the camera in a coordinate system representing a pixel position of the corresponding map image data set, and autonomously control a traveling motor driver and a steering motor driver of the autonomous vehicle to move the autonomous vehicle to a target location based on the self-location that is estimated.
However, Zavodny teaches An autonomous vehicle, comprising:
a camera disposed on a bottom of the autonomous vehicle so as to face a road surface under the autonomous vehicle in order to capture images of the road surface, wherein an optical axis of the camera faces the road surface under the autonomous vehicle; (see at least [4]; "The camera system may be oriented to take images of the surface passed over by the host vehicle. An imaging device 101 may be situated beneath the host vehicle 100 and may be positioned to the front, center, or rear section of the host vehicle. The imaging device 101 may have a known orientation and position relative to the geometry of the host vehicle.") Zavodny describes a camera disposed on the bottom of the autonomous vehicle, facing the road to capture images of the road surface under the autonomous vehicle.
wherein the circuitry is configured to obtain a clipped image data set by clipping a predetermined range from the image data set, (see at least ¶ [18]; "The camera system may be configured to identify micro-features in the road surface that exhibit lengths and widths less than 5 cm.") Zavodny describes clipping a predetermined range of 5 cm from the image data set.
wherein each positional information set is associated with a coordinate point representing a position of the optical axis of the camera in a coordinate system representing a pixel position of the corresponding map image data set, and (see at least [12]; "In a SECOND STEP, the pose (i.e. position and orientation) of each image may be determined by the image processor at the time the image is collected or later. The pose of each reference image(s) may be determined in relation to other reference image(s), other landmarks or locations, coordinates such as latitude and longitude, or elements on a map. Pose determination may be performed via a GPS method, in which GPS coordinates are measured in parallel to image collection and are associated with reference images,") Zavodny describes positional information associated with a coordinate point representing a position of the optical axis of the camera in a coordinate system, to include a pixel position of the corresponding map image data set.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Gupta to incorporate the teachings of Zavodny, which teaches a camera underneath a vehicle to take pictures of the road beneath the vehicle, circuitry to obtain the images, clipping a predetermined range of the image, and associating positional information with coordinate points of the optical axis of the camera, in order to use the photo taken from the camera, extract identifiable features from a reasonable range, and orient the vehicle location based on the coordinate information gathered.
Gupta and Zavodny, in combination, do not explicitly teach autonomously control a traveling motor driver and a steering motor driver of the autonomous vehicle to move the autonomous vehicle to a target location based on the self-location that is estimated.
However, Fridman teaches autonomously control a traveling motor driver and a steering motor driver of the autonomous vehicle to move the autonomous vehicle to a target location based on the self-location that is estimated. (see at least [0142]; "For example, when vehicle 200 navigates without human intervention, system 100 may automatically control the braking, acceleration, and/or steering of vehicle 200 (e.g., by sending control signals to one or more of the throttling system 220, braking system 230 and steering system 240). Additionally, system 100 may analyze the collected data and issue warnings and/or alerts to vehicle occupants based on the analysis of the collected data.") Fridman outlines autonomously controlling a traveling motor driver and a steering motor driver of the autonomous vehicle to move the vehicle to a target location based on collected data, which could be a self-location that is estimated.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Gupta to incorporate the teachings of Fridman in order for the vehicle to have an awareness of its location as it autonomously travels along a route.
Regarding claim 2, Gupta teaches storing, in a storage device, a map data set that associates a positional information set with each of multiple map image data sets obtained by capturing images of the road surface in advance; (see at least ¶ [0027, 0048, 0055]; "The map imagery module 130 may store map images locally…Each retrieved map image includes location information describing the location associated with the map image…Once the displacement between the selected map image and the retrieved vehicle image is determined, the displacement module 360 determines the location of the vehicle 100, the refined vehicle location 370. The refined vehicle location 370 is determined by adding the determined displacement to the location represented by the selected map image's location information.") Gupta describes a storage device configured to store a map data set that associates positional information with each of multiple map image data sets obtained by capturing images of the road surface in advance.
obtaining an image data set from an image captured by the camera; (see at least ¶ [0035]; "The visual road classification module 220 retrieves an image of the road on which the vehicle is driving, in the direction in which the vehicle is being driven, from the camera 110.") Gupta describes circuitry configured to obtain an image data set from an image captured by the camera.
identifying one of the map image data sets that corresponds to the clipped image data set by executing a matching process between the clipped image data set and at least one of the map image data sets; (see at least ¶ [0068]; "Each map image includes location information describing a location associated with the map image. 3D features are identified 620 in the retrieved vehicle image and in the retrieved map images. The 3D features are aligned 630 between the vehicle image and each retrieved map image. A map image is selected 640 based on the alignment between the 3D features in the vehicle image and each map image. For example, the map image with the most common 3D features with the vehicle image is selected.") Gupta describes identifying a map image data set that corresponds to a clipped image data set by executing a matching process between the clipped image data set and a stored map image data set.
estimating the self-location from a relative positional relationship between the identified map image data set and the clipped image data set, (see at least ¶ [0068, 0055]; "The displacement is determined 650 from the selected map image's ground truth location based on the selected map image and the vehicle image. The displacement is applied 660 to the ground truth location to determine a refined vehicle location for the vehicle. In one embodiment, the refined vehicle location is determined…Once the displacement between the selected map image and the retrieved vehicle image is determined, the displacement module 360 determines the location of the vehicle 100, the refined vehicle location 370. The refined vehicle location 370 is determined by adding the determined displacement to the location represented by the selected map image's location information") Gupta describes estimating the self-location from a relative positional relationship between the identified map image data set and the clipped image data set.
Gupta does not explicitly teach A self-location estimating method in an autonomous vehicle, the method comprising: capturing images of a road surface with a camera disposed on a bottom of the autonomous vehicle so as to face a road surface under the autonomous vehicle in order to capture images of the road surface, wherein an optical axis of the camera faces the road surface under the autonomous vehicle, obtaining a clipped image data set by clipping a predetermined range from the image data set; wherein each positional information set is associated with a coordinate point representing a position of the optical axis of the camera in a coordinate system representing a pixel position of the corresponding map image data set, and autonomously controlling a traveling motor driver and a steering motor driver of the autonomous vehicle to move the autonomous vehicle to a target location based on the self-location that is estimated.
However, Zavodny teaches A self-location estimating method in an autonomous vehicle, the method comprising:
capturing images of a road surface with a camera disposed on a bottom of the autonomous vehicle so as to face a road surface under the autonomous vehicle in order to capture images of the road surface, wherein an optical axis of the camera faces the road surface under the autonomous vehicle; (see at least [4]; "The camera system may be oriented to take images of the surface passed over by the host vehicle. An imaging device 101 may be situated beneath the host vehicle 100 and may be positioned to the front, center, or rear section of the host vehicle. The imaging device 101 may have a known orientation and position relative to the geometry of the host vehicle.") Zavodny describes a camera disposed on the bottom of the autonomous vehicle, facing the road to capture images of the road surfaces under the autonomous vehicle.
obtaining a clipped image data set by clipping a predetermined range from the image data set; (see at least ¶ [18]; "The camera system may be configured to identify micro-features in the road surface that exhibit lengths and widths less than 5 cm.") Zavodny describes clipping a predetermined range of 5 cm from the image data set.
wherein each positional information set is associated with a coordinate point representing a position of the optical axis of the camera in a coordinate system representing a pixel position of the corresponding map image data set, and (see at least [12]; "In a SECOND STEP, the pose (i.e. position and orientation) of each image may be determined by the image processor at the time the image is collected or later. The pose of each reference image(s) may be determined in relation to other reference image(s), other landmarks or locations, coordinates such as latitude and longitude, or elements on a map. Pose determination may be performed via a GPS method, in which GPS coordinates are measured in parallel to image collection and are associated with reference images,") Zavodny describes positional information associated with a coordinate point representing a position of the optical axis of the camera in a coordinate system, to include a pixel position of the corresponding map image data set.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Gupta to incorporate the teachings of Zavodny, which teaches a camera underneath a vehicle to take pictures of the road beneath the vehicle, circuitry to obtain the images, clipping a predetermined range of the image, and associating positional information with coordinate points of the optical axis of the camera, in order to use the photo taken from the camera, extract identifiable features from a reasonable range, and orient the vehicle location based on the coordinate information gathered.
Gupta and Zavodny, in combination, do not explicitly teach autonomously control a traveling motor driver and a steering motor driver of the autonomous vehicle to move the autonomous vehicle to a target location based on the self-location that is estimated.
However, Fridman teaches autonomously control a traveling motor driver and a steering motor driver of the autonomous vehicle to move the autonomous vehicle to a target location based on the self-location that is estimated. (see at least [0142]; "For example, when vehicle 200 navigates without human intervention, system 100 may automatically control the braking, acceleration, and/or steering of vehicle 200 (e.g., by sending control signals to one or more of the throttling system 220, braking system 230 and steering system 240). Additionally, system 100 may analyze the collected data and issue warnings and/or alerts to vehicle occupants based on the analysis of the collected data.") Fridman outlines autonomously controlling a traveling motor driver and a steering motor driver of the autonomous vehicle to move the vehicle to a target location based on collected data, which could be a self-location that is estimated.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Gupta to incorporate the teachings of Fridman in order for the vehicle to have an awareness of its location as it autonomously travels along a route.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HANA VICTORIA HALL whose telephone number is (571)272-5289. The examiner can normally be reached M-F 9-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Rachid Bendidi, can be reached at 571-272-4896. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HANA VICTORIA HALL/Examiner, Art Unit 3664
/RACHID BENDIDI/Supervisory Patent Examiner, Art Unit 3664