Prosecution Insights
Last updated: April 19, 2026
Application No. 18/352,578

MAP CREATION AND LOCALIZATION FOR AUTONOMOUS DRIVING APPLICATIONS

Non-Final Office Action — §101, §102, §103

Filed: Jul 14, 2023
Examiner: TROOST, AARON L
Art Unit: 3666
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Nvidia Corporation
OA Round: 3 (Non-Final)

Grant Probability: 75% (Favorable)
Projected OA Rounds: 3-4
Projected Time to Grant: 2y 6m
Grant Probability With Interview: 84%

Examiner Intelligence

Career Allow Rate: 75% — above average (542 granted / 727 resolved; +22.6% vs TC avg)
Interview Lift: +9.9% — moderate (~+10%) lift among resolved cases with an interview
Typical Timeline: 2y 6m average prosecution
Career History: 764 total applications across all art units; 37 currently pending
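The headline figures above can be cross-checked with simple arithmetic. The sketch below is illustrative only; it assumes the "+22.6% vs TC avg" and "+9.9%" interview-lift figures are percentage-point deltas applied to the career allowance rate computed from 542 granted out of 727 resolved cases.

```python
# Back-of-the-envelope check of the dashboard figures (assumed
# percentage-point deltas; not the analytics provider's actual method).
granted, resolved = 542, 727

allow_rate = 100 * granted / resolved   # career allowance rate, ~74.55%
tc_avg = allow_rate - 22.6              # implied Tech Center average
with_interview = allow_rate + 9.9       # implied rate after interview lift

print(round(allow_rate))       # 75 (shown as "75% Career Allow Rate")
print(round(tc_avg))           # 52 (implied TC average)
print(round(with_interview))   # 84 (shown as "84% With Interview")
```

The rounded outputs line up with the displayed 75% allow rate and 84% with-interview figure, which suggests the dashboard rounds a ~74.6% underlying rate for display.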

Statute-Specific Performance

§101: 15.6% (-24.4% vs TC avg)
§103: 44.7% (+4.7% vs TC avg)
§102: 17.9% (-22.1% vs TC avg)
§112: 18.8% (-21.2% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 727 resolved cases
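A consistency check on the statute-specific figures: subtracting each "vs TC avg" delta from its rate should recover the Tech Center average estimate (the chart's black line). The sketch below is illustrative and assumes the deltas are percentage points relative to a single per-chart baseline.

```python
# Each statute's rate minus its "vs TC avg" delta should recover the
# same Tech Center average estimate (assumed percentage-point deltas).
rates = {"101": (15.6, -24.4), "103": (44.7, +4.7),
         "102": (17.9, -22.1), "112": (18.8, -21.2)}

implied_tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in rates.items()}
print(implied_tc_avg)  # every statute implies the same 40.0% baseline
```

All four statutes imply an identical 40.0% baseline, indicating a single Tech Center average estimate was used across the chart rather than per-statute baselines.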

Office Action

§101 §102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 19 December 2025 has been entered.

Status of Claims

Claims 1-7, 10-16, 18-21, 23, and 24 of US Application No. 18/352,578 are currently pending and have been examined. Applicant amended claims 1, 5, 6, 11-13, 15, 16, 19, and 21, added claims 23 and 24, and canceled claims 17 and 22. Applicant previously canceled claims 8 and 9.

Response to Arguments/Amendments

Applicant's arguments, see REMARKS, filed 19 December 2025, regarding the rejections of claims 1-7, 10-16, and 18-22 under 35 U.S.C. 101 have been fully considered and are partially persuasive. Regarding claim 1 and the claims depending from claim 1, the previous rejections under § 101 are withdrawn. Applicant amended claim 1 to recite additional elements that apply or use the judicial exception(s) in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment. Regarding claims 11 and 19, and the claims depending from claims 11 and 19, the previous rejections under § 101 are maintained for the reasons indicated in the § 101 rejections below.

Applicant's arguments regarding the previous rejections of claims 1-4, 6, 7, 10-16, and 18-21 under 35 U.S.C. 102 have been fully considered but are not persuasive. The previous rejections are maintained. Upon further review of Li et al. (US 2020/0109954 A1, "Li"), the Examiner notes that Li discloses:

[0371] In some embodiments, having been collected via one or more sensors on-board the vehicle, the data is transmitted to one or more processors off-board the vehicle for generating a 3D map at 303. As previously noted, the one or more processors off-board the vehicle can be resident on a remote server, such as a cloud server with a cloud computing infrastructure. The transmission of the data to the remote server can be done using a high speed transmission network. For example, the transmission network herein may include a wireless network including a 4G or a 5G network. In this case, the data including image data and Lidar-related data can be formatted into a 4D data packet and transmitted via a 4G network to the remote server, such as exemplarily shown in FIG. 4.

[0372] In an example, the data from the plurality of vehicles are provided at a variable frequency. The frequency of data collection and/or communication by the vehicles may be determined with aid of one or more processors. The one or more processors may automatically determine the frequency of data collection and/or communication, and maintain and/or vary the frequency. User input may or may not be utilized. In some instances, a user may determine the frequency level, or rules used by the processors to determine the frequency level. Alternatively, the one or more processors may be pre-programmed without requiring user input. The frequency may be determined based on one or more conditions, such as those described in greater detail as follows.

In other words, data collection and communication to a remote server for generating a 3D map may be performed at a predetermined frequency. The frequency of collection is "a time threshold . . . between the obtaining the first sensor data and obtaining the second sensor data". Therefore, the previous rejections are maintained.
Claim Objections

Claims 1, 11, and 19 are objected to because of the following informalities:

Claim 1 recites "generate, based at least on the second sensor data obtained using the one or more sensors of the first machine, second data representative of the second location of the first machine one or more second locations of one or more second landmarks as represented by the second sensor data" but, for clarity, should recite – generate, based at least on the second sensor data obtained using the one or more sensors of the first machine, second data representative of the second location of the first machine and one or more second locations of one or more second landmarks as represented by the second sensor data –.

Claims 11 and 19 recite "to cause a generation of a map, wherein the map is sent to one or more second machines in order to cause the one or more second machine to navigate according to the map" but, for consistency, should recite – to cause a generation of a map, wherein the map is sent to one or more second machines in order to cause the one or more second machines to navigate according to the map –.

Appropriate correction is required.

Claim Interpretation

The Examiner interprets the claim 11 and 19 recitations "to cause a generation of a map, wherein the map is sent to one or more second machines to cause the one or more second machine[s] to navigate according to the map" as non-limiting. Claim 11 is a system claim where the only positively recited element of the system is the one or more processors. The "remote system" and the "one or more second machines" are not positively recited as elements of the system. Similarly, claim 19 is directed to one or more processors. Again, the "remote system" and the "one or more second machines" are not positively recited and their functions are not performed by the one or more processors. Therefore, the Examiner interprets the recitation "to cause a generation of a map, wherein the map is sent to one or more second machines to cause the one or more second machine[s] to navigate according to the map" as an intended use of the first and second data, where the claims do not actually require generation of the map or navigation according to the map to occur. Therefore, the remote system for map generation and the machine navigation using the map are interpreted as non-limiting recitations.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 11-16, 18-21, and 24 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

In January 2019 (updated October 2019), the USPTO released new examination guidelines setting forth a two-step inquiry for determining whether a claim is directed to non-statutory subject matter. According to the guidelines, a claim is directed to non-statutory subject matter if:

STEP 1: the claim does not fall within one of the four statutory categories of invention (process, machine, manufacture, or composition of matter); or

STEP 2: the claim recites a judicial exception, e.g., an abstract idea, without reciting additional elements that amount to significantly more than the judicial exception, as determined using the following analysis:

STEP 2A (PRONG 1): Does the claim recite an abstract idea, law of nature, or natural phenomenon?

STEP 2A (PRONG 2): Does the claim recite additional elements that integrate the judicial exception into a practical application?

STEP 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?
Using the two-step inquiry, claims 11-16, 18-21, and 24 are directed toward non-statutory subject matter, as shown below:

STEP 1: Do claims 11 and 19 fall within one of the statutory categories? Yes. Independent claims 11 and 19 are each directed toward a machine, which falls within one of the statutory categories.

STEP 2A (PRONG 1): Is the claim directed to a law of nature, a natural phenomenon, or an abstract idea? Yes, independent claims 11 and 19 are directed to an abstract idea. With regard to STEP 2A (PRONG 1), a claim that recites an abstract idea, a law of nature, or a natural phenomenon is directed to a judicial exception. The guidelines provide three groupings of subject matter that are considered abstract ideas:

Mathematical concepts – mathematical relationships, mathematical formulas or equations, mathematical calculations;

Certain methods of organizing human activity – fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations); managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions); and

Mental processes – concepts that are practicably performed in the human mind (including an observation, evaluation, judgment, opinion).

See the 2019 Revised Patent Subject Matter Eligibility Guidance. With respect to mental processes, the courts do not distinguish between mental processes that are performed entirely in the human mind and mental processes that require a human to use a physical aid (e.g., pen and paper or a slide rule) to perform the claim limitation. Nor do the courts distinguish between claims that recite mental processes performed by humans and claims that recite mental processes performed on a computer.

Independent claim 11 recites "generate, based at least on first sensor data obtained using one or more sensors of a first machine while located at a first location, first data representative of the first location of the first machine and one or more first locations of one or more first landmarks as represented by the first sensor data"; "select, based on at least one of a time threshold or a distance threshold having occurred from the obtaining of the first sensor data using the one or more sensors, second sensor data obtained using the one or more sensors of the first machine while located at a second location"; and "generate, based at least on the second sensor data obtained using the one or more sensors of the first machine, second data representative of the second location of the first machine [and] one or more second locations of one or more second landmarks as represented by the second sensor data". Independent claim 19 recites substantially similar limitations as claim 11. All of these limitations may be performed mentally. For example, a person given first and second sensor data may generate location information using the data. The person may select a time or distance threshold to be used for generating a next set of location information. Therefore, claims 11 and 19 recite an abstract idea.

STEP 2A (PRONG 2): Does the claim recite additional elements that integrate the judicial exception into a practical application? No, claims 11 and 19 do not recite additional elements that integrate the judicial exception into a practical application. With regard to STEP 2A (PRONG 2), even when a judicial exception is recited in the claim, an additional claim element(s) that integrates the judicial exception into a practical application of that exception renders the claim eligible under § 101.
The guidelines provide the following exemplary considerations that are indicative that an additional element (or combination of elements) may have integrated the judicial exception into a practical application:

an additional element reflects an improvement in the functioning of a computer, or an improvement to other technology or technical field;

an additional element that applies or uses a judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition;

an additional element implements a judicial exception with, or uses a judicial exception in conjunction with, a particular machine or manufacture that is integral to the claim;

an additional element effects a transformation or reduction of a particular article to a different state or thing; and

an additional element applies or uses the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception.

While the guidelines further state that the exemplary considerations are not an exhaustive list and that there may be other examples of integrating the exception into a practical application, the guidelines also list examples in which a judicial exception has not been integrated into a practical application:

an additional element merely recites the words "apply it" (or an equivalent) with the judicial exception, or merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea;

an additional element adds insignificant extra-solution activity to the judicial exception; and

an additional element does no more than generally link the use of a judicial exception to a particular technological environment or field of use.
In the instant application, claims 11 and 19 do not recite additional elements that integrate the judicial exception into a practical application of that exception. Claim 11 recites the additional elements "one or more processors" and "send, to a remote system, the first data and the second data to cause a generation of a map, wherein the map is sent to one or more second machines to cause the one or more second machine[s] to navigate according to the map". Claim 19 recites the additional elements "one or more processors comprising processing circuitry to: . . . send, to a remote system, first data and the second data to cause a generation of a map, wherein the map is sent to one or more second machines in order to cause the one or more second machine to navigate according to the map".

As noted above, merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, is indicative that the judicial exception has not been integrated into a practical application. The one or more processors, given their broadest reasonable interpretation, encompass a computer. Using the processors to perform the claimed determinations is merely using a computer as a tool to perform abstract ideas.

Also as noted above, adding insignificant extra-solution activity to the judicial exception is indicative that the judicial exception has not been integrated into a practical application. Insignificant extra-solution activity includes data gathering and outputting. See MPEP 2106.05(g). Using the processors to obtain sensor data is data gathering. Using the processors to send data to a remote system is outputting data. Therefore, these additional elements merely add insignificant extra-solution activity to the judicial exception.

As indicated above in the Claim Interpretation section, "to cause a generation of a map, wherein the map is sent to one or more second machines to cause the one or more second machine[s] to navigate according to the map" is interpreted as an intended use of the first and second data, where the claims do not actually require generation of the map or navigation according to the map to occur. Therefore, this recitation is non-limiting, and the remote system for map generation and the machine navigation using the map are not additional elements that apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment. Therefore, claims 11 and 19 do not recite additional elements that integrate the judicial exception into a practical application of that exception.

STEP 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? No, claims 11 and 19 do not recite additional elements that amount to significantly more than the judicial exception. With regard to STEP 2B, whether the claims recite additional elements that provide significantly more than the recited judicial exception, the guidelines specify that the pre-guideline procedure is still in effect. Specifically, examiners should continue to consider whether an additional element or combination of elements: adds a specific limitation or combination of limitations that are not well-understood, routine, conventional activity in the field, which is indicative that an inventive concept may be present; or simply appends well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception, which is indicative that an inventive concept may not be present.
Claims 11 and 19 do not recite any specific limitation or combination of limitations that are not well-understood, routine, conventional (WURC) activity in the field. Using a generic computer to perform generic computing functions is WURC activity. Generic computing functions include 1) performing repetitive calculations, 2) receiving, processing, and storing data, 3) electronically scanning or extracting data from a physical document, 4) electronic recordkeeping, 5) automating mental tasks, and 6) receiving or transmitting data over a network, e.g., using the Internet to gather data. See MPEP 2106.05(d)(II). The one or more processors are recited at a high level of generality and, given their broadest reasonable interpretation, represent a generic computer. Sending data using the processors is merely receiving or transmitting data over a network. Further, obtaining data from different types of sensors while a machine is navigating a path is known in the art. See the rejections under § 102 below. The additional elements, both individually and in combination, are well-understood, routine, conventional activity in the field.

CONCLUSION

Thus, since claims 11 and 19 (a) are directed toward an abstract idea, (b) do not recite additional elements that integrate the judicial exception into a practical application, and (c) do not recite additional elements that amount to significantly more than the judicial exception, claims 11 and 19 are directed towards non-statutory subject matter. Claims 12-16, 18, 20, 21, and 24 do not recite any new additional elements that integrate the judicial exception into a practical application or that amount to significantly more than the judicial exception.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-4, 6, 7, 10-16, 18-21, 23, and 24 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Li et al. (US 2020/0109954 A1, "Li").

Regarding claim 1, Li discloses map generation systems and teaches: causing a first machine to navigate within an environment using a map corresponding to the environment (remote server may update the previously-stored map and share these changes to other vehicles for navigation or path planning – see at least ¶ [0450]), wherein the map is generated, at least, by:

obtaining, while a second machine is navigating along a path of the environment, first sensor data using one or more sensors of the second machine and second sensor data using the one or more sensors of the second machine (sensors 1012 that may sense information relating to the environment outside the vehicle – see at least Fig. 1 and ¶ [0322]; one or more sensors carried by the vehicle may include proximity sensors, such as radar and lidar – see at least ¶ [0336]; data can be received from different types – see at least ¶ [0336]; first set of sensors may include a camera, a radar unit, and a lidar unit – see at least ¶ [0397]);

generating first data representative of one or more first locations of one or more first landmarks represented by the first sensor data and a first location of the second machine when obtaining the first sensor data (sensors may detect the presence of objects within the environment and sensor data may include position information with regard to the identified objects within the environment – see at least ¶ [0334]);

selecting, using at least one of a time threshold or a distance threshold between the obtaining the first sensor data and obtaining the second sensor data, the second sensor data (data from the plurality of vehicles are provided at a variable frequency – see at least ¶ [0371]-[0372]);

generating, based at least on the selecting the second sensor data, second data representative of one or more second locations of one or more second landmarks represented by the second sensor data and the second location of the second machine when obtaining the second sensor data (sensors may detect the presence of objects within the environment and sensor data may include position information with regard to the identified objects within the environment – see at least ¶ [0334]); and

sending, to a remote system, the first data and the second data for generating the map corresponding to the environment (data package 414 may be sent to one or more servers 415, 416, 417 – see at least Fig. 4 and ¶ [0382]; map generated based on data collected by various types of sensors on-board a vehicle – see at least ¶ [0003]; data associated with the map may be transmitted to one or more processors off-board the vehicle for map generation – see at least ¶ [0006], [0105]).
Regarding claim 2, Li further teaches: wherein at least one of: a first landmark of the one or more first landmarks is a same landmark as a second landmark of the one or more second landmarks; or a third landmark of the one or more first landmarks is a different landmark as compared to a fourth landmark of the one or more second landmarks (there may be redundant or overlapping in the data captured by the sensors about the surrounding environment – see at least ¶ [0373]).

Regarding claim 3, Li further teaches: the generating the first data is based at least on encoding the one or more first locations of the one or more first landmarks; and the generating the second data is based at least on encoding the one or more second locations of the one or more second landmarks (label information may be associated with the objects in the map – see at least ¶ [0042]; label information may uniquely identify or represent the object, e.g., position/geo-spatial coordinates – see at least ¶ [0334]).

Regarding claim 4, Li further teaches: determining, based at least on the one or more first locations of the one or more first landmarks, one or more first three-dimensional (3D) locations of the one or more first landmarks; and determining, based at least on the one or more second locations of the one or more second landmarks, one or more second 3D locations of the one or more second landmarks, wherein the first data is representative of the one or more first 3D locations of the one or more first landmarks and the second data is representative of the one or more second 3D locations of the one or more second landmarks (label information may be associated with the objects in the map – see at least ¶ [0042]; label information may uniquely identify or represent the object, e.g., position/geo-spatial coordinates – see at least ¶ [0334]).

Regarding claim 6, Li further teaches: obtaining motion data representative of a motion of the second machine, the motion including at least one of acceleration of the second machine, a velocity of the second machine, or a direction of travel associated with the second machine between the first sensor data being obtained and the second sensor data being obtained (internal sensors may be useful for collecting data of the vehicle, including velocity and acceleration – see at least ¶ [0331]); and determining the second location based on the at least one of the acceleration, the velocity, or the direction of travel (position information may include detection and/or measurement of movement of the vehicle – see at least ¶ [0331]).

Regarding claim 7, Li further teaches: wherein the map is further generated by: generating, based at least on the first data, a first layer of the map using the one or more first locations of the one or more first landmarks; generating, based at least on the second data, a second layer of the map using the one or more second locations of the one or more second landmarks (the remote server 905 may generate a 3D map based on the lidar data and camera data – see at least ¶ [0440]; i.e., data from each different sensor is a separate layer added to the overall 3D map).

Regarding claim 10, Li further teaches: wherein at least one of the one or more first landmarks or the one or more second landmarks include: a lane divider; a road boundary; a sign; a pole; a wait condition; a vertical structure; a road user; a static object; or a dynamic object (three-dimensional map may include traffic signs, traffic lights, billboards, roads, lane lines, structures – see at least ¶ [0019], [0370]; objects in the map may comprise dynamic objects and/or static objects – see at least ¶ [0410]; sign posts, moving cars, pedestrians, barricades – see at least ¶ [0410]).
Regarding claims 11 and 19, Li discloses map generation systems and teaches: one or more processing units (processors 103, 1013, 1023 – see at least Fig. 1) to:

generate, based at least on first sensor data obtained using one or more sensors of a first machine while located at a first location, first data representative of the first location of the first machine and one or more first locations of one or more first landmarks as represented by the first sensor data (sensors 1012 that may sense information relating to the environment outside the vehicle – see at least Fig. 1 and ¶ [0322]; one or more sensors carried by the vehicle may include proximity sensors, such as radar and lidar – see at least ¶ [0336]; data can be received from different types – see at least ¶ [0336]; first set of sensors may include a camera, a radar unit, and a lidar unit – see at least ¶ [0397]; sensors may detect the presence of objects within the environment and sensor data may include position information with regard to the identified objects within the environment – see at least ¶ [0334]);

select, based on at least one of a time threshold or a distance threshold having occurred from the obtaining of the first sensor data using the one or more sensors, second sensor data obtained using the one or more sensors of the first machine while located at a second location (data from the plurality of vehicles are provided at a variable frequency – see at least ¶ [0371]-[0372]);

generate, based at least on the second sensor data obtained using the one or more sensors of the first machine, second data representative of the second location of the first machine [and] one or more second locations of one or more second landmarks as represented by the second sensor data (sensors 1012 that may sense information relating to the environment outside the vehicle – see at least Fig. 1 and ¶ [0322]; one or more sensors carried by the vehicle may include proximity sensors, such as radar and lidar – see at least ¶ [0336]; data can be received from different types – see at least ¶ [0336]; first set of sensors may include a camera, a radar unit, and a lidar unit – see at least ¶ [0397]; sensors may detect the presence of objects within the environment and sensor data may include position information with regard to the identified objects within the environment – see at least ¶ [0334]); and

send, to a remote system, the first data and the second data to cause a generation of a map, wherein the map is sent to one or more second machines to cause the one or more second machine[s] to navigate according to the map (data collected by processors on-board the vehicle may be transmitted to a remote server for generating a 3D map or updating an existing map – see at least ¶ [0337], [0341]; on-board processors may detect objects within the environment and determine positional information relating to the objects before being communicated to a remote server – see at least ¶ [0341]; the collected data may be transmitted for generating or updating a 3D map – see at least Fig. 9 and ¶ [0440]-[0441]; remote server may update the previously-stored map and share these changes to other vehicles for navigation or path planning – see at least ¶ [0450]).
Regarding claim 12, Li further teaches: wherein the first data is generated based at least on encoding the first location of the first machine and the one or more first locations of the one or more first landmarks; and the second data is generated based at least on encoding the second location of the first machine and the one or more second locations of the one or more second landmarks (label information may be associated with the objects in the map – see at least ¶ [0042]; label information may uniquely identify or represent the object, e.g., position/geo-spatial coordinates – see at least ¶ [0334]; sensors may sense information relating to the vehicle itself, thereby obtaining location and orientation information of the vehicle – see at least ¶ [0322]; positional information of the object may be relative to the vehicle – see at least ¶ [0334]). Regarding claim 13, Li further teaches: wherein the one or more processors are further to: determine, based at least on the one or more first locations of the one or more first landmarks, one or more first three-dimensional (3D) locations of the one or more first landmarks (label information may be associated with the objects in the map – see at least ¶ [0042]; label information may uniquely identify or represent the object, e.g., position/geo-spatial coordinates – see at least ¶ [0334]) and determine, based at least on the one or more second locations of the one or more second landmarks, one or more second 3D locations of the one or more second landmarks (label information may be associated with the objects in the map – see at least ¶ [0042]; label information may uniquely identify or represent the object, e.g., position/geo-spatial coordinates – see at least ¶ [0334]), wherein the first data is representative of the first location of the first machine and the one or more first 3D location of the one or more first landmarks and the second data is representative of the second location of the first machine and the one or 
more second 3D locations of the one or more second landmarks (label information may be associated with the objects in the map – see at least ¶ [0042]; label information may uniquely identify or represent the object, e.g., position/geo-spatial coordinates – see at least ¶ [0334]; sensors may sense information relating to the vehicle itself, thereby obtaining location and orientation information of the vehicle – see at least ¶ [0322]; positional information of the object may be relative to the vehicle – see at least ¶ [0334]). Regarding claim 14, Li further teaches: wherein the one or more processors are further to: determine, based at least on the first type of sensor data, at least one of one or more poses associated with the one or more first landmarks or one or more geometries associated with the one or more first landmarks, wherein the first data is further representative of the at least one of the one or more poses associated with the one or more first landmarks or the one or more geometries associated with the one or more first landmarks (objects detected by external sensors may be labeled, where the label information includes attitude information relative to orthogonal translation axes – see at least ¶ [0334]). Regarding claim 15, Li further teaches: determine, based at least on a motion of the first machine, at least one of a translation or a rotation of the first machine relative to the first location (position information may include spatial location and attitude or pose relative to axes of rotation – see at least ¶ [0331]); and determine the second location of the first machine based on at least one of the rotation or the translation with respect to the first location (position information may include spatial location and attitude or pose relative to axes of rotation – see at least ¶ [0331]). 
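The claim-15 limitation mapped above, deriving a second location from a translation and a rotation relative to a first location, is in effect dead reckoning. A minimal 2D sketch of that computation (hypothetical illustration; the function and its parameters are not drawn from Li or from the claims):

```python
import math

def dead_reckon(x, y, heading, rotation, translation):
    """Compute a second pose defined relative to a first pose.

    Applies a rotation (radians) to the current heading, then a
    forward translation (meters) along the new heading direction.
    """
    new_heading = heading + rotation
    new_x = x + translation * math.cos(new_heading)
    new_y = y + translation * math.sin(new_heading)
    return new_x, new_y, new_heading

# Starting at the origin facing along +x, turn 90 degrees left
# and move 2 m forward: the machine ends up at roughly (0, 2).
pose = dead_reckon(0.0, 0.0, 0.0, math.pi / 2, 2.0)
```

Production localization stacks fuse such motion updates with the sensor observations discussed in the claim-12/13 mappings, but the relative-pose arithmetic itself is this simple.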
Regarding claim 16, Li further teaches: the map stores the one or more first locations of the one or more first landmarks and the one or more second locations of the one or more second landmarks (the remote server 905 may generate a 3D map based on the lidar data and camera data – see at least ¶ [0440]). Regarding claim 18, Li further teaches: wherein the system is comprised in at least one of: a control system for an autonomous or semi-autonomous machine; a perception system for an autonomous or semi-autonomous machine; a system for performing simulation operations; a system for performing deep learning operations; a system implemented using an edge device; a system implemented using a robot; a system incorporating one or more virtual machines (VMs); a system implemented at least partially in a data center; or a system implemented at least partially using cloud computing resources (external device may be a remote server 103 – see at least ¶ [0337]; processors off-board the vehicle for map generation may be located at a remote server, for example a cloud server with cloud computing infrastructure – see at least ¶ [0006]). Regarding claim 20, Li further teaches: wherein the one or more processors are comprised in at least one of: a control system for an autonomous or semi-autonomous machine; a perception system for an autonomous or semi-autonomous machine; a system for performing simulation operations; a system for performing deep learning operations; a system implemented using an edge device; a system implemented using a robot; a system incorporating one or more virtual machines (VMs); a system implemented at least partially in a data center; or a system implemented at least partially using cloud computing resources (external device may be a remote server 103 – see at least ¶ [0337]; processors off-board the vehicle for map generation may be located at a remote server, for example a cloud server with cloud computing infrastructure – see at least ¶ [0006]). 
Regarding claim 23, Li further teaches: determining, based at least on the first location of the second machine and motion data associated with the second machine, the second location of the second machine that is defined relative to the first location of the second machine, wherein the second data further defines the second location of the second machine relative to the first location of the second machine (internal sensors may be useful for collecting data of the vehicle, including velocity and acceleration – see at least ¶ [0331]; position information may include detection and/or measurement of movement of the vehicle – see at least ¶ [0331]). Regarding claim 24, Li further teaches: wherein the second sensor data is selected based on at least one of: a distance between the second location and the first location being equal to or greater than the distance threshold; or a time period between when the second sensor data was obtained and the first sensor data was obtained being equal to or greater than the time threshold (data from the plurality of vehicles are provided at a variable frequency – see at least ¶ [0371]-[0372]). Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Li in view of Englard et al. (US 2019/0113927 A1, “Englard”). Regarding claim 5, Li fails to teach, but Englard, which discloses controlling a vehicle using cost maps, teaches: determining, based at least on processing the first sensor data using one or more first neural networks, the one or more first locations of the one or more first objects; and determining, based at least on processing the second sensor data using one or more second neural networks, the one or more second locations of the one or more second objects (object tracking may be performed utilizing a neural network or other machine learning model – see at least ¶ [0091]). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the map generation system of Li to provide for determining locations of objects using a known technique, such as neural networks, as taught by Englard, with a reasonable expectation of success because the neural network may track object locations over time (Englard at ¶ [0091], Li at ¶ [0334]). Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to AARON L TROOST whose telephone number is (571)270-5779. The examiner can normally be reached Mon-Fri 7:30am-4pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. 
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Anne Antonucci, can be reached at 313-446-6519. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /AARON L TROOST/Primary Examiner, Art Unit 3666

Prosecution Timeline

Jul 14, 2023
Application Filed
Apr 20, 2025
Non-Final Rejection — §101, §102, §103
Jul 24, 2025
Response Filed
Nov 08, 2025
Final Rejection — §101, §102, §103
Dec 19, 2025
Request for Continued Examination
Jan 28, 2026
Response after Non-Final Action
Feb 07, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12600193
SYSTEM AND METHOD FOR PROVIDING RACE PREPARATION MODES ON BATTERY ELECTRIC VEHICLE
2y 5m to grant Granted Apr 14, 2026
Patent 12594858
ELECTRIC VEHICLE BATTERY SYSTEM CONTROL STRATEGY INCORPORATING THERMAL MANAGEMENT
2y 5m to grant Granted Apr 07, 2026
Patent 12597355
NAVIGATION CONTROL SYSTEM AND MARINE VESSEL
2y 5m to grant Granted Apr 07, 2026
Patent 12594804
MOBILE ROBOT MOTION CONTROL METHOD AND MOBILE ROBOT
2y 5m to grant Granted Apr 07, 2026
Patent 12589843
CONTROL DEVICE FOR CONTROLLING A WATERCRAFT, WATERCRAFT HAVING SUCH A CONTROL DEVICE, AND METHOD FOR CONTROLLING A WATERCRAFT
2y 5m to grant Granted Mar 31, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
75%
Grant Probability
84%
With Interview (+9.9%)
2y 6m
Median Time to Grant
High
PTA Risk
Based on 727 resolved cases by this examiner. Grant probability derived from career allow rate.
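The headline figures above are internally consistent with the career data: 542 grants out of 727 resolved cases rounds to the 75% grant probability, and adding the 9.9-point interview lift yields the 84% with-interview figure. The arithmetic, assuming the lift is additive in percentage points (an assumption, but one that matches the displayed numbers):

```python
granted, resolved = 542, 727
interview_lift = 9.9  # percentage points, per the "Interview Lift" panel

base_rate = granted / resolved * 100   # career allow rate, percent
with_interview = base_rate + interview_lift

print(round(base_rate))       # matches the 75% grant probability
print(round(with_interview))  # matches the 84% with-interview figure
```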
