DETAILED ACTION
Status of the Application
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of the Claims
This action is in response to the applicant’s filing on October 01, 2024. Claims 1-20 are pending and examined below.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on October 01, 2024 and March 19, 2025 have been considered by the Examiner.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. § 102 and 103 (or as subject to pre-AIA 35 U.S.C. § 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. § 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. § 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-5, 7, 17, and 19-20 are rejected under 35 U.S.C. § 103 as being unpatentable over U.S. Patent Application Publication No. US 2022/0214457 A1 to LIANG et al. (hereinafter "Liang") in view of U.S. Patent Application Publication No. US 2022/0207855 A1 to LIU et al. (hereinafter "Liu").
(Note: Claim language is in bold typeface, and the Examiner’s comments and cited passages from the prior art reference(s) are in normal typeface.)
As to Claim 1,
Liang discloses an autonomous vehicle control system for an autonomous vehicle (see Figs. 1-2, ¶0037 ~ regarding an autonomous vehicle, and ¶0082 ~ regarding wherein a vehicle 102 utilizes a vehicle computing system 112 and an autonomy computing system 120 that performs an autonomous (self-driving) vehicle operations control schema and acquires map data 122, wherein map data 122 includes, but is not limited to, traffic lanes, boundary markings, road segments, and objects such as lampposts, crosswalks, traffic control data, traffic lights, etc.), comprising:
one or more processors (see Fig. 1, ¶0056, ¶0072, and ¶0205, computing system 104 comprises a processor(s)); and
memory storing instructions that, when executed by the one or more processors (see Fig. 1, ¶0056, ¶0072, and ¶0205, computing system 104 comprises a processor(s) 812 which includes memory 814 components to perform operations),
cause the autonomous vehicle control system to:
store dense map data describing a ground surface of a first portion of an environment within which the autonomous vehicle operates (see Figs. 2-3 and 5 ~ illustrating the acquisition of sensor data compiled into a detections summary, and ¶0096, wherein map data 122 stored in HD map database 216 is compiled from perception system 124, which detects the environment ("world") on the periphery of the vehicle 102. See also ¶0070 and ¶0082), the dense map data further including semantic content for the first portion of the environment (see Figs. 2-3, 5, ¶0036, ¶0040, and ¶0114, Liang teaches a dense map (HD map) comprised of map data from a ground surface, having relative semantic data (road features) as perceived from vehicle 102);
store sparse map data describing a ground surface of a second portion of the environment within which the autonomous vehicle operates,
wherein the first and second portions are different from one another (see Fig. 13 ~ process method steps 606-610, ¶0154, ¶0171-¶0173, Liang teaches a dense map (HD map) wherein a first portion of the map is characterized, and conversely teaches another portion of the map (second portion) being characterized by LIDAR-acquired data (sparse map data). Sparse map data is stored between map database 216 and map data 122 as prescribed by a machine-learned map estimation model, wherein the sparse map is fundamentally a map-modified LIDAR and bird's eye view representation of estimated "geographic prior data" and "semantic road prior data"), and
in response to determining that the autonomous vehicle is located in the second portion of the environment: receive perception data describing at least one perceived boundary for a roadway in the environment that is sensed by at least one perception sensor of the autonomous vehicle during operation of the autonomous vehicle on the roadway (see Fig. 14 ~ process method step 650, Fig. 16 ~ process method step 756, ¶0038, ¶0051, and ¶0082, autonomy sensor data 116, working as a perception sensor, receives "map-modified LIDAR data"; the sensor data is then fused with objects of interest from the environment to project a bird's eye view of the region, including objects such as, but not limited to, other vehicles, bicycles, pedestrians, etc., and bounding shapes (perimeters/boundaries) which position/orient these objects of interest);
augment the sparse map data to generate augmented sparse map data (see ¶0041 and ¶0065. In particular, see ¶0097-¶0098, which disclose a vehicle 102 utilizing a vehicle computing system 112 and an autonomy computing system 120 wherein, when HD map data is unavailable, map estimation system 218 fills in lower-resolution portions of a map in order to make the aggregate map data more robust and granular, with improved map detail and integrity),
wherein the augmented sparse map data describes a pathway for use in operating the autonomous vehicle in a perceived lane defined by the at least one perceived boundary (see Fig. 2, ¶0099-¶0103, which disclose a map fusion system 220 that further compiles the aggregate map data from the limitation directly above to outline a route (pathway) along which the vehicle 102 can safely and efficiently traverse around perceived boundaries, including but not limited to bounding shapes 236. See also ¶0041 and ¶0065); and
control the autonomous vehicle using the augmented sparse map data. (See Figs. 11, 18, and ¶0232-¶0233. In particular, see Fig. 11 ~ process method step 514, wherein vehicle 102 utilizes motion planning unit 914 to provide a pathway (route) and is controlled using the aggregate map.)
As shown above, Liang discloses an autonomous vehicle control system using an aggregate map of dense and sparse map data (see Figs. 11, 18, and ¶0232-¶0233. In particular, see Fig. 11 ~ process method step 514), but does not explicitly disclose wherein the sparse map data describing the ground surface of the second portion of the environment is generated remote from the autonomous vehicle.
On the other hand, Liu’s cloud-edge-end cooperative control method for security rescues discloses the autonomous vehicle control system wherein the sparse map data describing the ground surface of the second portion of the environment is generated remote from the autonomous vehicle. (See ¶0048-¶0049 ~ regarding acquisition of image data of target ground regions (rescue zones), ¶0056 ~ "in FIG. 1, a cloud-edge-end cooperative control method of… a sparse landmark map building step", and ¶0069 ~ regarding cloud computation.)
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to further provide Liang’s autonomous vehicle control system with cloud-edge computing, as taught by Liu, where the resultant combination would provide for generating sparse map data remote from the autonomous vehicle, thereby enabling benefits including, but not limited to, reducing the onboard computer processing burden.
As to Claim 2,
Liang/Liu substantially discloses the autonomous vehicle control system of claim 1,
wherein the one or more processors are further configured to,
in response to determining that the autonomous vehicle is located in the first portion of the environment,
control the autonomous vehicle using the dense map data and
the semantic content thereof for the first portion of the environment. (See Figs. 11, 18, and ¶0232-¶0233; Liang.)
As to Claim 3,
Liang/Liu substantially discloses the autonomous vehicle control system of claim 1,
wherein the sparse map data lacks semantic content for the second portion of the environment. (See Fig. 3 ~ geometric ground prior 262 and semantic road prior 264; Fig. 12 ~ process method step 552; ¶0163 and ¶0198-¶0201; Liang. In particular, see Fig. 12 ~ process method step 552 and Fig. 16 ~ process method steps 752-758; Liang, wherein the sparse map data is taught to lack semantic content for the second portion of the environment.)
As to Claim 4,
Liang/Liu substantially discloses the autonomous vehicle control system of claim 1,
wherein the dense map data describing the ground surface of the first portion of the environment includes one or more
outer road boundaries for one or more roadways in the first portion of the environment and
the sparse map data describing the ground surface of the second portion of the environment includes one or more outer road boundaries for one or more roadways in the second portion of the environment. (See Fig. 13 ~ process method steps 606-610, ¶0171-¶0173; Liang.)
As to Claim 5,
Liang/Liu substantially discloses the autonomous vehicle control system of claim 1,
wherein the dense map data describing the ground surface of the first portion of the environment includes localization data for pose determination within the first portion of the environment (see ¶0158; Liang, which discloses a machine-learned detector model 230 wherein dense map data (HD map data) characterizes ground surface data of the first portion of the environment for pose (orientation) determination within the first portion of the environment) and
the sparse map data describing the ground surface of the second portion of the environment includes localization data for pose determination in the second portion of the environment. (Following the former, Liang further teaches, at Fig. 15 ~ process method steps 712-716, ¶0158, and ¶0193, wherein sparse map data characterizes ground surface data of the second portion of the environment for pose (orientation) determination within the second portion of the environment.)
As to Claim 7,
Liang/Liu substantially discloses the autonomous vehicle control system of claim 1,
wherein the sparse map data is generated in response to detection of one or more modified road boundaries in the second portion of the environment, and
wherein the sparse map data describing the ground surface of the second portion of the environment includes localization data for pose determination in the second portion of the environment that is reused from dense map data describing the ground surface of the second portion of the environment. (See ¶0085; Liang ~ position determination is performed by perception system 124 as one of several inputs acquired through environmental data acquisition.)
As to Claim 17,
Liang/Liu substantially discloses the autonomous vehicle control system of claim 10,
wherein the sparse map data is received wirelessly via an over-the-air map update. (See Abstract; Liu ~ regarding map data being received over the air via cloud-edge computing.)
As to Claim 19,
Liang discloses a method of operating an autonomous vehicle (see Figs. 1-2, ¶0037 ~ regarding an autonomous vehicle, and ¶0082 ~ regarding wherein a vehicle 102 utilizes a vehicle computing system 112 and an autonomy computing system 120 that performs an autonomous (self-driving) vehicle operations control schema and acquires map data 122, wherein map data 122 includes, but is not limited to, traffic lanes, boundary markings, road segments, and objects such as lampposts, crosswalks, traffic control data, traffic lights, etc.), comprising:
storing dense map data describing a ground surface of a first portion of an environment within which the autonomous vehicle operates (see Figs. 2-3 and 5 ~ illustrating the acquisition of sensor data compiled into a detections summary, and ¶0096, wherein map data 122 stored in HD map database 216 is compiled from perception system 124, which detects the environment ("world") on the periphery of the vehicle 102. See also ¶0070 and ¶0082), the dense map data further including semantic content for the first portion of the environment (see Figs. 2-3, 5, ¶0036, ¶0040, and ¶0114, Liang teaches a dense map (HD map) comprised of map data from a ground surface, having relative semantic data (road features) as perceived from vehicle 102);
storing sparse map data describing a ground surface of a second portion of the environment within which the autonomous vehicle operates,
wherein the first and second portions are different from one another (see Fig. 13 ~ process method steps 606-610, ¶0154, ¶0171-¶0173, Liang teaches a dense map (HD map) wherein a first portion of the map is characterized, and conversely teaches another portion of the map (second portion) being characterized by LIDAR-acquired data (sparse map data). Sparse map data is stored between map database 216 and map data 122 as prescribed by a machine-learned map estimation model, wherein the sparse map is fundamentally a map-modified LIDAR and bird's eye view representation of estimated "geographic prior data" and "semantic road prior data"), and
wherein the sparse map data describing the ground surface of the second portion of the environment is generated remote from the autonomous vehicle; and
in response to determining that the autonomous vehicle is located in the second portion of the environment:
receiving perception data describing at least one perceived boundary for a roadway in the environment that is sensed by at least one perception sensor of the autonomous vehicle during operation of the autonomous vehicle on the roadway (see Fig. 14 ~ process method step 650, Fig. 16 ~ process method step 756, ¶0038, ¶0051, and ¶0082, autonomy sensor data 116, working as a perception sensor, receives "map-modified LIDAR data"; the sensor data is then fused with objects of interest from the environment to project a bird's eye view of the region, including objects such as, but not limited to, other vehicles, bicycles, pedestrians, etc., and bounding shapes (perimeters/boundaries) which position/orient these objects of interest);
augmenting the sparse map data to generate augmented sparse map data (see ¶0041 and ¶0065. In particular, see ¶0097-¶0098, which disclose a vehicle 102 utilizing a vehicle computing system 112 and an autonomy computing system 120 wherein, when HD map data is unavailable, map estimation system 218 fills in lower-resolution portions of a map in order to make the aggregate map data more robust and granular, with improved map detail and integrity), wherein the augmented sparse map data describes a pathway for use in operating the autonomous vehicle in a perceived lane defined by the at least one perceived boundary (see Fig. 2, ¶0099-¶0103, which disclose a map fusion system 220 that further compiles the aggregate map data from the limitation directly above to outline a route (pathway) along which the vehicle 102 can safely and efficiently traverse around perceived boundaries, including but not limited to bounding shapes 236. See also ¶0041 and ¶0065); and
controlling the autonomous vehicle using the augmented sparse map data. (See Figs. 11, 18, and ¶0232-¶0233. In particular, see Fig. 11 ~ process method step 514, wherein vehicle 102 utilizes motion planning unit 914 to provide a pathway (route) and is controlled using the aggregate map.)
As shown above, Liang discloses a method of operating an autonomous vehicle using an aggregate map of dense and sparse map data (see Figs. 11, 18, and ¶0232-¶0233. In particular, see Fig. 11 ~ process method step 514), but does not explicitly disclose wherein the sparse map data describing the ground surface of the second portion of the environment is generated remote from the autonomous vehicle.
Conversely, Liu’s cloud-edge-end cooperative control method for security rescues discloses the method wherein the sparse map data describing the ground surface of the second portion of the environment is generated remote from the autonomous vehicle. (See ¶0048-¶0049 ~ regarding acquisition of image data of target ground regions (rescue zones), ¶0056 ~ "in FIG. 1, a cloud-edge-end cooperative control method of… a sparse landmark map building step", and ¶0069 ~ regarding cloud computation.)
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to further provide Liang’s method with cloud-edge computing, as taught by Liu, where the resultant combination would provide for generating sparse map data remote from the autonomous vehicle, thereby enabling benefits including, but not limited to, reducing the onboard computer processing burden.
As to Claim 20,
Liang discloses a non-transitory computer readable storage medium storing computer instructions executable by one or more processors to perform a method of operating an autonomous vehicle (see ¶0008-¶0009 ~ regarding a non-transitory computer readable storage medium in an autonomous vehicle control platform), the method comprising:
storing dense map data describing a ground surface of a first portion of an environment within which the autonomous vehicle operates (see Figs. 2-3 and 5 ~ illustrating the acquisition of sensor data compiled into a detections summary, and ¶0096, wherein map data 122 stored in HD map database 216 is compiled from perception system 124, which detects the environment ("world") on the periphery of the vehicle 102. See also ¶0070 and ¶0082), the dense map data further including semantic content for the first portion of the environment (see Figs. 2-3, 5, ¶0036, ¶0040, and ¶0114, Liang teaches a dense map (HD map) comprised of map data from a ground surface, having relative semantic data (road features) as perceived from vehicle 102);
storing sparse map data describing a ground surface of a second portion of the environment within which the autonomous vehicle operates,
wherein the first and second portions are different from one another (see Fig. 13 ~ process method steps 606-610, ¶0154, ¶0171-¶0173, Liang teaches a dense map (HD map) wherein a first portion of the map is characterized, and conversely teaches another portion of the map (second portion) being characterized by LIDAR-acquired data (sparse map data). Sparse map data is stored between map database 216 and map data 122 as prescribed by a machine-learned map estimation model, wherein the sparse map is fundamentally a map-modified LIDAR and bird's eye view representation of estimated "geographic prior data" and "semantic road prior data"), and
wherein the sparse map data describing the ground surface of the second portion of the environment is generated remote from the autonomous vehicle; and
in response to determining that the autonomous vehicle is located in the second portion of the environment: receiving perception data describing at least one perceived boundary for a roadway in the environment that is sensed by at least one perception sensor of the autonomous vehicle during operation of the autonomous vehicle on the roadway (see Fig. 14 ~ process method step 650, Fig. 16 ~ process method step 756, ¶0038, ¶0051, and ¶0082, autonomy sensor data 116, working as a perception sensor, receives "map-modified LIDAR data"; the sensor data is then fused with objects of interest from the environment to project a bird's eye view of the region, including objects such as, but not limited to, other vehicles, bicycles, pedestrians, etc., and bounding shapes (perimeters/boundaries) which position/orient these objects of interest);
augmenting the sparse map data to generate augmented sparse map data (see ¶0041 and ¶0065. In particular, see ¶0097-¶0098, which disclose a vehicle 102 utilizing a vehicle computing system 112 and an autonomy computing system 120 wherein, when HD map data is unavailable, map estimation system 218 fills in lower-resolution portions of a map in order to make the aggregate map data more robust and granular, with improved map detail and integrity), wherein the augmented sparse map data describes a pathway for use in operating the autonomous vehicle in a perceived lane defined by the at least one perceived boundary (see Fig. 2, ¶0099-¶0103, which disclose a map fusion system 220 that further compiles the aggregate map data from the limitation directly above to outline a route (pathway) along which the vehicle 102 can safely and efficiently traverse around perceived boundaries, including but not limited to bounding shapes 236. See also ¶0041 and ¶0065); and
controlling the autonomous vehicle using the augmented sparse map data. (See Figs. 11, 18, and ¶0232-¶0233. In particular, see Fig. 11 ~ process method step 514, wherein vehicle 102 utilizes motion planning unit 914 to provide a pathway (route) and is controlled using the aggregate map.)
Liu is then introduced to disclose a method wherein the sparse map data describing the ground surface of the second portion of the environment is generated remote from the autonomous vehicle. (See ¶0048-¶0049; Liu ~ regarding acquisition of image data of target ground regions (rescue zones), ¶0056; Liu ~ "in FIG. 1, a cloud-edge-end cooperative control method of… a sparse landmark map building step", and ¶0069; Liu ~ regarding cloud computation.)
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to further provide Liang’s method with cloud-edge computing, as taught by Liu, where the resultant combination would provide for generating sparse map data remote from the autonomous vehicle, thereby enabling benefits including, but not limited to, reducing the onboard computer processing burden.
Claim 6 is rejected under 35 U.S.C. § 103 as being unpatentable over U.S. Patent Application Publication No. US 2022/0214457 A1 to LIANG et al. (hereinafter "Liang") in view of U.S. Patent Application Publication No. US 2022/0207855 A1 to LIU et al. (hereinafter "Liu") as applied to claim 1 above, and further in view of U.S. Patent Application Publication No. US 2022/0063660 A1 to POULET et al. (hereinafter "Poulet").
As to Claim 6,
Liang/Liu substantially discloses the autonomous vehicle control system of claim 1.
As shown above, Liang discloses an autonomous vehicle control system using an aggregate map of dense and sparse map data (see Figs. 11, 18, and ¶0232-¶0233. In particular, see Fig. 11 ~ process method step 514), but does not explicitly disclose wherein the semantic content in the dense map data includes one or more mapped boundaries,
one or more mapped lanes,
one or more speed limits,
one or more mapped signs and/or
one or more traffic signals.
Poulet, on the other hand, discloses wherein the semantic content in the dense map data includes one or more mapped boundaries, one or more mapped lanes, one or more speed limits, one or more mapped signs and/or one or more traffic signals. (Poulet discloses an autonomous driving system (Figs. 3-4 and ¶0019) teaching the acquisition of image data around the periphery of the autonomous vehicle (vehicle 1), as in ¶0084, ¶0097, ¶0105, and ¶0119, and the input of this data as semantic content in HD maps, including a traffic light, a sign, one or more boundaries represented as road boundary lines and/or bumps in the road, and speed limits, as in ¶0006 and ¶0020-¶0021.)
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to further provide Liang’s autonomous vehicle control system with semantic content in the dense map data, as taught by Poulet, with a reasonable expectation of success, as the resultant combination would provide partitioning the route into route segments and assigning respective drive modes to the route segments, thereby enabling benefits including, but not limited to, optimizing end-to-end route generation relative to constraints. (See ¶0070, ¶0098, and ¶0102-¶0103 of Poulet.)
Allowable Subject Matter
Claims 8-16 and 18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
In particular, the available prior art appears to be silent in disclosing the autonomous vehicle control system of claim 1,
wherein the one or more processors are further configured to:
store dense map data describing a ground surface of the second portion of the environment and including semantic content for the second portion of the environment;
control the autonomous vehicle using the dense map data and the semantic content thereof for the second portion of the environment;
after storing the dense map data for the second portion of the environment and
controlling the autonomous vehicle using the dense map data and the semantic content thereof for the second portion of the environment,
receive the sparse map data; and
after receiving the sparse map data, invalidate the dense map data for the second portion of the environment. Emphasis added.
The prior art does not appear to explicitly teach or disclose the above recited claim limitations.
To that end, and although further search and consideration would need to be performed based upon any amendments submitted by the Applicant, it is the Examiner’s position that incorporating the above recited claim limitations into independent claims 1 and 19-20 may advance prosecution.
Conclusion
Any inquiry concerning this communication or earlier communications from the Examiner should be directed to ASHLEY L. REDHEAD, JR., whose telephone number is (571) 272-6952. The Examiner can normally be reached on weekdays, Monday through Thursday, between 7 a.m. and 5 p.m.
If attempts to reach the Examiner by telephone are unsuccessful, the Examiner’s Supervisor, Peter Nolan, can be reached Monday through Friday, between 9 a.m. and 5 p.m., at (571) 270-7016. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ASHLEY L REDHEAD JR./Primary Examiner, Art Unit 3661