DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
This action is in response to the amendments and remarks filed on 12/16/2025. Claim(s) 1, 5-7, and 15 have been amended. Claim(s) 2-3, 9-12, 14, 16-17, and 21-25 have been cancelled. Claim(s) 26-33 have been added. Claim(s) 1, 4-8, 13, 15, 18-20, and 26-33 are pending examination. The objections to the drawings, the specification, and claims 7-8 have been withdrawn in light of the instant amendments. The rejection of claim 9 under 35 U.S.C. 112(d) has been withdrawn in light of the instant amendments. The rejection of claims 1-2, 4-10, 13-15, 18-21, and 24-25 under 35 U.S.C. 101 has been withdrawn in light of the instant amendments. This action is made final.
Response to Arguments
Applicant presents the following argument(s) regarding the previous office action:
Applicant asserts that the prior art fails to teach the claim limitations of independent claims 1 and 26. In particular, Applicant asserts that the prior art fails to teach: “detecting, using a first machine learning process, a group of objects that are captured by the vehicle sensed environment information;… determining a second vehicle location estimate for the vehicle that is based on the detected spatial relationships and according to bird's eye view based location information; wherein the determining comprises using a second machine learning process to apply a mapping between the bird's eye view based location information and the vehicle sensed environment information, wherein the second vehicle location estimate is more accurate than the initial location estimation determination of the vehicle; wherein the second machine learning process differs from the first machine learning process.”
Applicant's arguments filed 05/06/2025 have been fully considered but they are not persuasive.
Regarding Applicant's argument, the examiner respectfully disagrees. Applicant's arguments fail to comply with 37 CFR 1.111(b) because they amount to a general allegation that the claims define a patentable invention without specifically pointing out how the language of the claims patentably distinguishes them from the references. Applicant merely alleges that the claims are allowable and distinct over the prior art but fails to point out the differences. This does not persuade the examiner that the claims overcome the prior art. For Applicant's benefit, the examiner will explain how the prior art teaches the allegedly allowable limitations.
Turning first to “detecting, using a first machine learning process, a group of objects that are captured by the vehicle sensed environment information”: Ganjineh clearly teaches this in the cited portions. Paragraph [0058] teaches, “in order to, for example, automatically detect or extract objects appearing in the images, a step of semantic segmentation may be performed in order to classify the elements of the images according to one or more of a plurality of different “object classes”…a trained convolutional neural network, such as the so-called “SegNet” or “PSPNet” systems, or modifications thereof, may suitably be used for the segmentation and classification of the image data. The object classes are generally defined within the semantic segmentation algorithm, and may, for example, include object classes such as “road”, “sky”, “vehicle”, “traffic sign”, “traffic light”, “lane marking”, and so on. SegNet and PSPNet are both known algorithms that have been developed specifically for classifying images of road networks.” (Emphasis added.) There is no ambiguity here. Using a machine-learned model, the system of Ganjineh detects and classifies objects in the surroundings. Applicant has not pointed to any difference between what Ganjineh does and what is claimed. The examiner therefore maintains the interpretation of the claims and the finding that Ganjineh teaches this limitation.
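For purely illustrative purposes, the following is a minimal sketch of the kind of per-pixel classification pipeline paragraph [0058] describes: a convolutional network assigns each image pixel to one of a fixed set of road-scene classes, and the detected objects are read off as the classes present in the labeled output. The toy network, class list, and tensor shapes are hypothetical stand-ins for SegNet/PSPNet and do not reproduce Ganjineh's actual implementation.

```python
# Illustrative sketch only: a toy stand-in for the SegNet/PSPNet-style
# semantic segmentation described in Ganjineh [0058]. The network,
# class list, and shapes are hypothetical and chosen for brevity.
import torch
import torch.nn as nn

OBJECT_CLASSES = ["road", "sky", "vehicle", "traffic sign",
                  "traffic light", "lane marking"]

class ToySegmenter(nn.Module):
    """Maps an RGB image to per-pixel class logits (one channel per class)."""
    def __init__(self, num_classes: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, num_classes, kernel_size=1),
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return self.net(image)

model = ToySegmenter(len(OBJECT_CLASSES))
image = torch.rand(1, 3, 64, 96)         # placeholder camera frame
with torch.no_grad():
    labels = model(image).argmax(dim=1)  # (1, H, W): one class index per pixel

# "Detecting a group of objects" then reduces to reading off which
# classes are present in the labeled image, and where.
for idx, name in enumerate(OBJECT_CLASSES):
    mask = labels[0] == idx
    if mask.any():
        print(f"detected '{name}' covering {int(mask.sum())} pixels")
```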
The second challenged limitation recites, “determining a second vehicle location estimate for the vehicle that is based on the detected spatial relationships and according to bird's eye view based location information; wherein the determining comprises using a second machine learning process to apply a mapping between the bird's eye view based location information and the vehicle sensed environment information, wherein the second vehicle location estimate is more accurate than the initial location estimation determination of the vehicle; wherein the second machine learning process differs from the first machine learning process.” Applicant does not attempt to explain how Ganjineh fails to teach this limitation; the allegation is unfounded. Breaking the limitation into its constituent clauses, each is taught as follows.
“determining a second vehicle location estimate for the vehicle that is based on the detected spatial relationships and according to bird's eye view based location information;” This clause was previously shown to be taught by paragraphs [0165], [0181], and [0250]-[0255], and is not specifically challenged by Applicant.
“wherein the determining comprises using a second machine learning process to apply a mapping between the bird's eye view based location information and the vehicle sensed environment information,” This appears to be the crux of Applicant's argument for this limitation. Applicant again alleges that Ganjineh does not teach it, but paragraph [0068] teaches that “landmark recognition, which may constitute a further semantic segmentation, may (although need not) be performed in a generally similar manner to the first (vehicle environment) semantic segmentation. Typically, a machine learning algorithm is used. For instance, in embodiments, a supervised learning method such as a support vector machine or neural network may be used to perform the landmark recognition semantic segmentation.” (Emphasis added.) As the examiner has previously determined, the landmark recognition is analogous to the anchors of the claims. Ganjineh is clearly using a second neural network, separate from the first, to determine landmarks in the local environment. Additionally, as taught in [0120], “the local map representation may be aligned with a corresponding reference map section. Any features that are included in the local map representation, such as the landmark observations and road/lane geometries described above, may in general be used to perform this matching.” This shows that the landmarks are used to match against the map, akin to the claimed anchor points.
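For purely illustrative purposes, the following sketch shows one generic way such landmark-based matching against a bird's-eye reference map can refine a location estimate: recognized landmarks in the vehicle frame are aligned to their counterparts in the reference map by a 2-D rigid least-squares (Kabsch-style) fit, and the fitted transform carries the vehicle position into map coordinates. The coordinates, the assumed correspondences, and the solver choice are hypothetical and are not drawn from Ganjineh.

```python
# Illustrative sketch only: aligning locally observed landmarks to the
# corresponding bird's-eye reference-map landmarks with a 2-D rigid
# least-squares fit. A generic stand-in for the matching in Ganjineh
# [0120]; the coordinates and correspondences are hypothetical.
import numpy as np

def align_landmarks(local_pts: np.ndarray, ref_pts: np.ndarray):
    """Return rotation R and translation t such that ref ~ R @ local + t."""
    lc, rc = local_pts.mean(axis=0), ref_pts.mean(axis=0)
    H = (local_pts - lc).T @ (ref_pts - rc)  # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = rc - R @ lc
    return R, t

# Landmarks observed by the vehicle (vehicle frame) and the same
# landmarks in the bird's-eye reference map (map frame), already
# placed in correspondence by the recognition step.
local = np.array([[2.0, 1.0], [5.0, -1.0], [8.0, 2.5]])
ref   = np.array([[102.1, 51.0], [105.0, 48.9], [108.2, 52.4]])

R, t = align_landmarks(local, ref)
# The refined (second) location estimate is the vehicle origin carried
# into map coordinates by the fitted transform.
print("refined position in map frame:", R @ np.zeros(2) + t)
```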
“wherein the second vehicle location estimate is more accurate than the initial location estimation determination of the vehicle;” This is taught by [0255] and is not specifically challenged by the applicant.
“wherein the second machine learning process differs from the first machine learning process.” Ganjineh clearly teaches two separate machine learning processes: one that detects objects ([0058]) and one that further segments and matches landmarks ([0068]). These are two distinct neural networks.
In light of the above, the rejection over the prior art is maintained. Ganjineh clearly teaches the claims. A further detailed mapping and explanation can be found below in the section titled “Claim Rejections - 35 USC § 102.”
Claim Objections
Claims 1 and 26 are objected to because of the following informalities: Claim 1 recites “perform driving related operation”; it should recite “perform a driving related operation.” Claim 26 contains the same informality.
Appropriate correction is required.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim(s) 1, 4, 15, 18-20, 26, and 30 is/are rejected under 35 U.S.C. 102(a)(1)/(a)(2) as being anticipated by Ganjineh (US PG Pub 2020/0098135).
Regarding claim 1, Ganjineh teaches a method for using vehicle to vehicle communicated aerial map information for driving, the method comprising: obtaining, by a processor of a vehicle, aerial map segment information related to a segment of an aerial map, and comprising data indicative of an environment of an initial location estimate of the vehicle, ([0162] teaches the extraction of a local map from a map repository; this map would have overhead views as seen in Figure 1, item 16) where the aerial map is generated by a processing of an aerial image that is captured in the environment of the vehicle; ([0095], [0127], and [0169] teach processing aerial images to generate an aerial map)
generating, by the processor, a first local map for the vehicle, (Fig. 6 and [0184] teach the system generating a local map of a vehicle) by processing the aerial map segment information in accordance with vehicle sensed information regarding the environment of the vehicle; (Figs. 6, 16-23; [0159]-[0165], [0183]-[0184], and [0238]-[0239] teach the generation of a first local map area of a vehicle by combining a sensed environment with corresponding map information from an aerial viewpoint. [0161] teaches the determination of a vehicle's environment by determination of at least an image sequence recorded by a camera)
detecting, using a first machine learning process, a group of objects that are captured by the vehicle sensed environment information; ([0058] teaches, “in order to, for example, automatically detect or extract objects appearing in the images, a step of semantic segmentation may be performed in order to classify the elements of the images according to one or more of a plurality of different “object classes”…a trained convolutional neural network, such as the so-called “SegNet” or “PSPNet” systems, or modifications thereof, may suitably be used for the segmentation and classification of the image data. The object classes are generally defined within the semantic segmentation algorithm, and may, for example, include object classes such as “road”, “sky”, “vehicle”, “traffic sign”, “traffic light”, “lane marking”, and so on. SegNet and PSPNet are both known algorithms that have been developed specifically for classifying images of road networks.” Using a machine-learned model, the system of Ganjineh detects and classifies objects in the surroundings)
detecting spatial relationships between objects of the vehicle and the group of objects; ([0228] teaches the use of point-cloud data to create a ground mesh of objects relative to the main system; the point cloud would necessarily encode spatial relations between the main object's pose and the objects' locations, as described in [0094]; [0250] further details localizing of the system, which would include determining the spatial relationships between the objects of a vehicle and the groups of objects detected) and
determining a second vehicle location estimate for the vehicle that is based on the detected spatial relationships and according to bird's eye view based location information; ([0165], [0181], and [0250]-[0255] teach determining a matched location for the vehicle based on the spatial elements and matched information) wherein the determining comprises using a second machine learning process to apply a mapping between the bird's eye view based location information and the vehicle sensed environment information, ([0068] teaches that “landmark recognition, which may constitute a further semantic segmentation, may (although need not) be performed in a generally similar manner to the first (vehicle environment) semantic segmentation. Typically, a machine learning algorithm is used. For instance, in embodiments, a supervised learning method such as a support vector machine or neural network may be used to perform the landmark recognition semantic segmentation.” This maps between the bird's eye view based location information and the vehicle sensed information by determining the landmarks that serve as anchor points between the two.) wherein the second vehicle location estimate is more accurate than the initial location estimation determination; ([0250]-[0255] further teach using localizing to match the vehicle location based on reference map data and extracted spatial data and determining a pose and orientation estimation, which improves the accuracy of the estimate) wherein the second machine learning process differs from the first machine learning process; (Ganjineh clearly teaches two separate machine learning processes: one to detect objects, [0058], and one to further segment and match landmarks, [0068]. These are two distinct neural networks.)
determining, based on the vehicle sensed information, the bird’s eye view based location information and the second vehicle location estimate, to perform driving related operation within at least the environment of the vehicle; ([0045] teaches the use of HD maps to determine driving elements such as road speed, the shape of the road, lane markings, etc., [0138], [0154]-[0156] and [0178]-[0181] teach the use of the localized mapping to perform autonomous driving functionality) and
performing the driving related operation in relation to autonomous driving. ([0045] teaches the use of HD maps to determine driving elements such as road speed, the shape of the road, lane markings, etc., [0138], [0154]-[0156] and [0178]-[0181] teach the use of the localized mapping to perform autonomous driving functionality)
Claim 26 is substantially similar and is rejected for the same rationale as recited above.
Regarding claim 4, Ganjineh teaches the method according to claim 1, further comprising sharing the first local map with another computerized system located outside the vehicle. ([0124] teaches sharing the map information with a computerized system located outside of the main vehicle)
Regarding claim 15, Ganjineh teaches the method according to claim 1, wherein the first machine learning process is trained using unsupervised learning ([0058] teaches the use of SegNet for extraction of the objects; SegNet operates in an unsupervised manner) and the second machine learning process is trained using supervised learning. ([0068] teaches the training of the second machine learning process using supervised learning)
Regarding claim 18, Ganjineh teaches the method according to claim 1 wherein the objects of the group of objects comprise at least one of a traffic light and a traffic sign. ([0047] teaches the system can detect at least a traffic sign and traffic light)
Regarding claim 19, Ganjineh teaches the method according to claim 1 wherein the objects of the group of objects comprise at least one of a lane boundary, a road mark, or a lane line. ([0047] teaches the system is able to detect at least one of lane markings, road geometry, or other road information)
Regarding claim 20, Ganjineh teaches the method according to claim 1 wherein the detecting of the spatial relationships comprises detecting the horizon ([0168] teaches the system is able to determine a local horizon for the images) and performing surface estimations. ([0066] teaches the landmarks may have their boundaries determined, which would be equivalent to surface estimation of the objects)
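For purely illustrative purposes, the following sketch shows one generic way a "surface estimation" can be performed on vehicle point-cloud data: a RANSAC plane fit that recovers the ground surface while ignoring off-ground clutter. The thresholds, iteration count, and synthetic points are hypothetical; Ganjineh's ground-mesh construction in [0228] and boundary determination in [0066] may differ.

```python
# Illustrative sketch only: a generic RANSAC ground-plane fit as one way
# to perform surface estimation on point-cloud data. Thresholds and the
# synthetic points are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def fit_ground_plane(points: np.ndarray, iters: int = 200, tol: float = 0.05):
    """Return (normal, d) for the plane normal . p + d = 0 with most inliers."""
    best_inliers, best_plane = 0, None
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                 # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ sample[0]
        inliers = int((np.abs(points @ normal + d) < tol).sum())
        if inliers > best_inliers:
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane

# Synthetic cloud: a flat road near z = 0 plus some off-ground clutter.
road = np.column_stack([rng.uniform(-10, 10, 300),
                        rng.uniform(0, 30, 300),
                        rng.normal(0, 0.02, 300)])
clutter = rng.uniform([-10, 0, 0.5], [10, 30, 3.0], (50, 3))
normal, d = fit_ground_plane(np.vstack([road, clutter]))
print(normal, d)  # the recovered normal should be close to (0, 0, +/-1)
```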
Regarding claim 30, Ganjineh teaches the non-transitory computer readable medium of claim 26, that further stores instructions executable by the processor of the vehicle for relaying the first local map of the vehicle to yet another vehicle. ([0124] teaches the use of a network environment that allows for the transmission of the local map data; the system includes other vehicles that may receive and upload map data; it would be fair to say that using this server-based system allows for the relaying of local map data to additional vehicles)
Claim Rejections - 35 USC § 103
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
Claim(s) 31-33 is/are rejected under 35 U.S.C. 103 as being unpatentable over Ganjineh (US PG Pub 2020/0098135).
Regarding claim 31, Ganjineh teaches the method according to claim 1, wherein the first machine learning process is trained using unsupervised learning and the second machine learning process is trained using unsupervised learning. ([0058] and [0068] teach two distinct machine learning processes, trained in unsupervised and supervised manners, respectively. It would be obvious to train either one in either way, as this is merely a reversal or rearrangement of parts. Regarding a reversal of parts, the court in In re Gazda, 219 F.2d 449, 104 USPQ 400 (CCPA 1955), found that merely reversing the way the parts were arranged, i.e., which network is trained in which way and is applied first, is an obvious modification. The same is true for a rearrangement of parts: In re Kuhle, 526 F.2d 553, 188 USPQ 7 (CCPA 1975), found that a rearrangement of parts amounts to a design choice. Applicant has not shown why this particular arrangement would be critical. Therefore it is an obvious modification)
Regarding claim 32, Ganjineh teaches the method according to claim 1, wherein the first machine learning process is trained using supervised learning and the second machine learning process is trained using unsupervised learning. ([0058] and [0068] teach two distinct machine learning processes, trained in unsupervised and supervised manners, respectively. It would be obvious to train either one in either way, as this is merely a reversal or rearrangement of parts. Regarding a reversal of parts, the court in In re Gazda, 219 F.2d 449, 104 USPQ 400 (CCPA 1955), found that merely reversing the way the parts were arranged, i.e., which network is trained in which way and is applied first, is an obvious modification. The same is true for a rearrangement of parts: In re Kuhle, 526 F.2d 553, 188 USPQ 7 (CCPA 1975), found that a rearrangement of parts amounts to a design choice. Applicant has not shown why this particular arrangement would be critical. Therefore it is an obvious modification)
Regarding claim 33, Ganjineh teaches the method according to claim 1, wherein the first machine learning process is trained using supervised learning and the second machine learning process is trained using supervised learning. ([0058] and [0068] teach two distinct machine learning processes, trained in unsupervised and supervised manners, respectively. It would be obvious to train either one in either way, as this is merely a reversal or rearrangement of parts. Regarding a reversal of parts, the court in In re Gazda, 219 F.2d 449, 104 USPQ 400 (CCPA 1955), found that merely reversing the way the parts were arranged, i.e., which network is trained in which way and is applied first, is an obvious modification. The same is true for a rearrangement of parts: In re Kuhle, 526 F.2d 553, 188 USPQ 7 (CCPA 1975), found that a rearrangement of parts amounts to a design choice. Applicant has not shown why this particular arrangement would be critical. Therefore it is an obvious modification)
Claim(s) 5-8 and 27-29 is/are rejected under 35 U.S.C. 103 as being unpatentable over Ganjineh (US PG Pub 2020/0098135) in view of Rech (US PG Pub 2022/0316909).
Regarding claim 5, Ganjineh teaches the method according to claim 1, further comprising a second local map that pertains to information about the environment. ([0119]-[0122], [0124]-[0125], and [0162]-[0163] teach receiving a reference map of the area. This map is analogous to the second local map, as it covers the mapped area and would provide the necessary information)
Ganjineh does not teach receiving from another vehicle and using a vehicle to vehicle communication.
However, Rech teaches “receiving from another vehicle and using a vehicle to vehicle communication.” ([0027], [0079], and [0112] teach the system of vehicle to vehicle communication. In particular Rech teaches a vehicle system that can communicate map data of an object and anchor points in the environment.)
It would have been prima facie obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of Ganjineh and Rech, with a reasonable expectation of success. Both relate to mapping systems and sensor systems of a vehicle. The system of Rech allows a vehicle to validate sensed information received over a wireless V2V or V2X system, [0027]. Such a system allows for quicker data processing because it does not have to use an intermediate server to transmit the data. Ganjineh teaches in [0124] the idea of sending data to, and receiving data from, other vehicles in a road network. Rech provides that such a system can be used without the need for a complicated communication infrastructure; rather, the system can use only the vehicles themselves.
Claim 27 is substantially similar and is rejected for the same rationale as recited above.
Regarding claim 6, Ganjineh teaches the method according to claim 5, further comprising validating the first local map based on the second local map, ([0026]-[0028], [0119]-[0122], [0162]-[0163], [0183] and [0250] teach matching the first local map to the reference map; this allows for the verification of the first local map data when compared to the reference map) wherein the validating is based on one or more locations of one or more anchors that appear in the first local map and one or more locations of one or more anchors that appear in the second local map; ([0119]-[0120] teach validating the map information by matching so-called anchor or landmark points between the maps)
Claim 28 is substantially similar and is rejected for the same rationale as recited above.
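For purely illustrative purposes, the following sketch shows one generic way the anchor-based validation described above could operate: anchors shared between the first and second local maps are compared by location, and the first map is treated as validated only if every shared anchor agrees within a tolerance. The anchor identifiers, coordinates, and the 0.5 m tolerance are hypothetical and are not drawn from Ganjineh or Rech.

```python
# Illustrative sketch only: validating a first local map against a second
# by comparing the locations of shared anchors. Anchor IDs, coordinates,
# and the tolerance are hypothetical.
import numpy as np

first_map_anchors  = {"sign_17": (10.2, 4.1), "light_3": (25.0, -2.2)}
second_map_anchors = {"sign_17": (10.3, 4.0), "light_3": (25.9, -2.1)}

def validate(first: dict, second: dict, tol: float = 0.5) -> bool:
    """A map is 'validated' if every shared anchor agrees within tol meters."""
    shared = first.keys() & second.keys()
    for anchor in shared:
        err = np.linalg.norm(np.subtract(first[anchor], second[anchor]))
        if err > tol:
            print(f"anchor {anchor} disagrees by {err:.2f} m")
            return False
    return bool(shared)  # no shared anchors means nothing was validated

print(validate(first_map_anchors, second_map_anchors))
```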
Regarding claim 7, Ganjineh teaches the method according to claim 5, wherein the second local map is also indicative of other anchors within another environment of the other vehicle and of locations of the other anchors. ([0026] teaches receiving a reference map of an area; this reference map is indicative of anchor points within a local area; [0063] teaches there may be multiple types of landmark points, i.e., anchors, in an environment; [0119]-[0122] and [0124] teach the receiving of map data from other vehicles; this map data would be formatted the same way as map data generated by the main vehicle, which means it would include indications of anchor points)
Claim 29 is substantially similar and is rejected for the same rationale as recited above.
Regarding claim 8, Ganjineh teaches the method according to claim 7, comprising relaying the first local map of the vehicle to yet another vehicle. ([0124] teaches the use of a network environment that allows for the transmission of the local map data; the system includes other vehicles that may receive and upload map data; it would be fair to say that using this server-based system allows for the relaying of local map data to additional vehicles)
Claim(s) 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Ganjineh in view of Kim (US PG Pub 2020/0370894).
Regarding claim 13, Ganjineh teaches the method according to claim 1.
Ganjineh does not teach wherein the detecting of the spatial relationship comprises fusing vehicle sensed visual information with vehicle sensed radar information.
However, Kim teaches “wherein the detecting of the spatial relationship comprises fusing vehicle sensed visual information with vehicle sensed radar information.” ([0051] teaches the use of a camera and radar sensor to determine an object’s location in relation to the vehicle)
It would have been prima facie obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of Ganjineh with Kim, with a reasonable expectation of success. Both relate to the determination of relationships between a vehicle and objects in an environment. Ganjineh uses cameras to determine a 3D point cloud around the vehicle. Kim performs a similar determination but additionally uses radar. Kim teaches at [0029]-[0031] that using radar to determine distance, in addition to a camera, allows for an accurate and complete determination of a location. It would be obvious to incorporate such a system because it prevents errors that may occur with camera-based perception alone and provides more data for determining an accurate location.
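For purely illustrative purposes, the following sketch shows the kind of camera/radar fusion for which Kim [0051] is cited: the camera supplies a bearing (and a rough monocular range) to a detected object, the radar supplies a more reliable range, and the two are blended to place the object in the vehicle frame. The sensor values and weighting are hypothetical.

```python
# Illustrative sketch only: blending a camera range estimate with a radar
# range estimate and placing the object via the camera bearing. The
# measurements and the radar weight are hypothetical.
import math

def fuse_camera_radar(bearing_rad: float, radar_range_m: float,
                      camera_range_m: float, radar_weight: float = 0.8):
    """Blend the two range estimates, then locate the object in the
    vehicle frame using the camera bearing."""
    rng = radar_weight * radar_range_m + (1 - radar_weight) * camera_range_m
    return (rng * math.cos(bearing_rad), rng * math.sin(bearing_rad))

# Camera sees a sign 12 degrees to the left; monocular depth guesses
# 33 m while radar reports 31.2 m. Radar is trusted more for distance.
x, y = fuse_camera_radar(math.radians(12), 31.2, 33.0)
print(f"fused object position: ({x:.1f} m ahead, {y:.1f} m left)")
```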
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Klomp (US PG Pub 2011/0090337) teaches a method that generates an aerial image mosaic viewing a larger area than a single image from a camera can provide, using a combination of computer vision and photogrammetry. The aerial image mosaic is based on a set of images acquired from a camera. Selective matching and cross matching of consecutive and non-consecutive images, respectively, are performed, and three-dimensional motion and structure parameters are calculated and implemented on the model to check whether the model is stable. Thereafter the parameters are globally optimised and, based on these optimised parameters, the aerial image mosaic is generated. The set of images may be limited by removing old image data as new images are acquired. The method makes it possible to establish images in near real time using a system of low complexity and small size, and using only image information.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NICHOLAS STRYKER whose telephone number is (571)272-4659. The examiner can normally be reached Monday-Friday 7:30-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Christian Chace can be reached at (571) 272-4190. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/N.S./Examiner, Art Unit 3665 /CHRISTIAN CHACE/Supervisory Patent Examiner, Art Unit 3665