Prosecution Insights
Last updated: April 19, 2026
Application No. 18/242,328

UNIFIED BOUNDARY MACHINE LEARNING MODEL FOR AUTONOMOUS VEHICLES

Non-Final OA (§102, §103)
Filed: Sep 05, 2023
Examiner: WEISFELD, MATTHIAS S
Art Unit: 3661
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Aurora Operations, Inc.
OA Round: 3 (Non-Final)

Grant Probability: 59% (Moderate)
Expected OA Rounds: 3-4
Expected Time to Grant: 3y 0m
Grant Probability with Interview: 78%

Examiner Intelligence

Career Allow Rate: 59% (103 granted / 174 resolved; +7.2% vs TC avg)
Interview Lift: +18.7% (strong) for resolved cases with interview
Avg Prosecution: 3y 0m
Currently Pending: 30
Career History: 204 total applications across all art units

Statute-Specific Performance

§101: 9.1% (-30.9% vs TC avg)
§103: 60.3% (+20.3% vs TC avg)
§102: 22.7% (-17.3% vs TC avg)
§112: 7.3% (-32.7% vs TC avg)
Tech Center averages are estimates. Based on career data from 174 resolved cases.

Office Action

§102 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed 12/10/2025 have been fully considered but they are not persuasive.

In regards to independent claim 1, Applicant argues Ma (US 20210383138) does not teach each and every feature of the claim and that the rejection reads language out of the claim and is therefore defective. Applicant argues the cited aspects of Ma do not disclose a trained machine learning model that integrates detection of boundaries with detection of sidedness attributes, or a machine learning model capable of detecting any attribute associated with sidedness, or even an attribute that is specifically for a boundary. Applicant argues the cited portions of Ma provide no support for determining an active lane requiring detecting a sidedness attribute. Instead, Applicant argues, Ma discloses the use of machine learning models to determine open and closed lanes, and boundaries may be generated by connecting multiple objects, but there is no disclosure of sidedness for these boundaries, and at best the open or closed statuses are for a lane, not a boundary. Applicant argues active lane detection as recited in Ma does not inherently require the detection of both boundaries and sidedness attributes; for example, when multiple active lanes exist side by side, boundaries between those lanes could be defined having no sidedness attributes, or sidedness indicating that both sides are drivable, such that sidedness is not inherently required or particular to the boundaries. Applicant continues that identification by the machine learning model of Ma is not the same as outputting boundaries or attributes for those boundaries, and that the plain reading of the cited passages would be that the boundaries and attributes are wholly internal to the machine learning models.
Therefore, Applicant concludes the rejection of the claim should be withdrawn.

However, no language has at any point within examination been read out of the claim; instead, each and every limitation, including the claim as a whole, has been given its broadest reasonable interpretation in light of the Applicant's disclosure as filed. Ma discloses a perception engine that includes a machine learning model identifying information of the environment, including objects and lane open and closed status, as well as lane shape, by bounding the lanes determined by the machine learning model or a combination of models. The machine learning model then outputs this information to update maps for use by further users and to navigate the vehicle. Determinations are made to generate shapes of lanes using their boundaries and to determine which lanes are open or closed, where boundaries may be either traditional lane boundaries or constructed by connecting multiple observed objects. See at least [0024], [0056], [0058], [0059], and [0061]. By explicitly determining that a particular lane is open or closed based on the boundaries, Ma necessarily determines that the sides of the boundaries defining the lanes are also open or closed. As shown in Figure 4, reproduced below, objects are detected and used to construct lane boundaries for an alternative lane shape, and the open side and closed side of each boundary is determined. This integrates detection of boundaries with detection of sidedness indicating which side of a boundary allows a vehicle to travel, by being open, and which does not, by being closed. The vehicle is then guided through these sections, activating lane segments as required. While the open and closed status is described explicitly as being for a lane, the status is also necessarily for a boundary.
As each and every lane is analyzed, both the pre-existing and the constructed boundaries of Ma are assigned to bound open and closed lanes, necessarily applying closed sides and open sides to each and every lane boundary. Disassociating such a status violates the fundamental operation of the reference, such that the shape of each lane can simply never be determined properly without also determining the sides of each boundary.

[Ma, Figure 4 (greyscale image)]

Despite the Applicant's hypothetical, as Ma analyzes every lane boundary and every lane, there does not exist a case in which Ma does not assign sidedness to any boundary, whether constructed, pre-existing, or otherwise. Ma does not ignore any boundary and deliberately decline to determine its sidedness; in fact, as the Applicant has itself suggested, Ma may determine that both sides of a boundary are open when lanes are side by side, which is a determination of the boundary's sidedness. Even were this not the case, Ma still contains explicit recitations, as explained above and previously, of determining the open and closed status of lanes based on the presence of their boundaries, which is sidedness of each lane boundary. Further, mere identification by a machine learning model was never asserted to be an output of the machine learning model; however, Ma nonetheless quite clearly outputs the results of identification by the machine learning model, including identified lane boundaries, the open and closed status of lanes defined by those boundaries, and alternative lane shapes, and does this not merely internally within the machine learning model but provides it, at least as explicitly disclosed in the same citations pointed to by the Applicant as plainly read, through at least updating map information.
Still further, it is unclear under the Applicant's interpretation how Ma determines which side of each boundary to drive on, while determining that a lane is open or closed and constructing the lane from the lane's boundaries, yet does not determine the sidedness of the boundary. When lanes are constructed by determining their boundaries, any determination that a lane is open also necessarily determines that the lane boundaries on that side of the lane have an open attribute. Any determination that a lane is closed, where the lane is constructed by its lane boundaries, necessarily determines that the lane boundaries have a sidedness attribute of that side being closed. Removing such features from Ma, as the Applicant appears to suggest, would render the reference entirely non-functional, merely determining that data exists but entirely unable to interrelate that data, let alone determine a shape of a lane, control a vehicle to progress into an open lane section, or prevent a vehicle from traveling into a closed lane section. Instead, under the Applicant's interpretation, lanes are determined as open or closed, but they simply have no clear boundaries, and despite what is explicitly disclosed in Ma, the relevant vehicles may travel freely, however dangerously, and cross boundaries at any point, for any reason, without method or order. This is simply not the case, and as such, the Applicant's interpretation of the reference cannot be adopted because it is wholly illogical and non-functional, and one of ordinary skill in the art would not have interpreted the reference in such a way. As such, this argument is unpersuasive; the rejection of record does not read out any limitation, but instead fully considers each and every limitation within the claim, and is not defective.
Applicant argues independent claims 8 and 19 recite similar features to independent claim 1 and therefore are distinguishable from the prior art of record. However, this argument is unpersuasive for the same reasons as given above.

In further regards to claim 8, Applicant argues Ma does not disclose the recited detection of boundaries with the detection of attributes for at least a subset of the plurality of perceived boundaries, as well as determining one or more active lanes using the plurality of perceived boundaries. Applicant argues the rejection appears to conflate the two concepts, arguing that a detection of a lane must be based on determining boundaries of the lane, which Applicant contends has no support within the rejection, and that conflating the distinct concepts of the claim effectively reads out the claim language and is improper. Applicant argues for at least these reasons that, when properly construed, the rejection of claim 8 should be withdrawn.

However, the rejection of record does not conflate these concepts; it instead addresses the distinct but closely related concepts. As "active" is an exceptionally broad term, an equally broad application of prior art is appropriate. As explained above, Ma discloses a perception engine that includes a machine learning model identifying information of the environment, including objects and lane open and closed status, as well as lane shape, by bounding the lanes determined by the machine learning model or a combination of models. The machine learning model then outputs this information to update maps for use by further users and to navigate the vehicle. Determinations are made to generate shapes of lanes using their boundaries and to determine which lanes are open or closed, where boundaries may be either traditional lane boundaries or constructed by connecting multiple observed objects.
This integrates detection of boundaries with detection of sidedness by determining that a lane is open and bounded by lane boundaries where at least one side of the lane boundary has an open attribute. An open lane is a lane that is capable of being driven and is therefore active. The vehicle is then guided through these sections, activating lane segments under an additional interpretation of "active" as open lane segments, by planning a path through the lane segments and traveling along the planned path. As such, at no point are these concepts conflated; instead, the concepts are given their full weight and value as related concepts. As such, this argument is unpersuasive, no language of the claim has been read out, and the rejection is not improper.

Applicant argues the rejections of the dependent claims should be withdrawn by virtue of their dependency. This argument is unpersuasive for the same reasons as given above.

In regards to claim 21, Applicant argues the recited features of the claim are not to be found within Ma and that Ma does not disclose any machine learning model capable of outputting boundaries or attributes such that a downstream consumer would actually use those outputs, or the recited augmented digital map. Therefore, Applicant concludes claim 21 is allowable. However, Ma explicitly teaches, at least in [0061] as cited by the Applicant, that the output of the machine learning model is used to update map information and is used by consumers of the updated map information and by the vehicle for navigation. This is precisely what is required by the claim, as further explained in the rejection below. Therefore, this argument is unpersuasive.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C.
102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-4, 7-11, 14-19, and 21 are rejected under 35 U.S.C. 102(a) as being anticipated by Ma et al. (US 20210383138).

In regards to claim 1, Ma teaches an autonomous vehicle control system for an autonomous vehicle, comprising: (Figs. 1, 5.) one or more processors; ([0069], [0070] processors perform operations.) and memory storing instructions that, when executed by the one or more processors, cause the autonomous vehicle control system to: ([0069], [0070] processors perform operations stored in memory to control vehicle.) receive perception data from at least one perception sensor configured to sense a roadway upon which the autonomous vehicle is disposed; ([0020], [0024], [0025] vehicle and vehicle's perception engine may receive sensor data from depth sensors, image sensors, and other perception sensors.)
generate a plurality of perceived boundaries for the roadway by processing the perception data using a trained machine learning model that integrates detection of boundaries with detection of sidedness attributes for at least a subset of the plurality of perceived boundaries, wherein a sidedness attribute indicates, for a respective perceived boundary having opposing first and second sides, a side from among the first and second sides of the respective perceived boundary that allows for vehicle travel on which the autonomous vehicle is allowed to be disposed, and wherein the trained machine learning model is configured to receive the perception data, and in response to receiving the perception data; ([0025] objects perceived by the sensors in the environment of the vehicle may be detected and classified using machine learning model, including set as a safety class. [0027], [0029] when at least one object is classified as a safety object, lane closure analysis is triggered, analyzing the current lane and any adjacent lanes to determine if the lane is closed or open, where the lane is particularly determined to be open until a point of closest safety object determination. [0056], [0058] existing lanes may be processed by recognizing their boundaries and corresponding travel direction and analyzed with their closures to determine how the vehicle should proceed based on its own travel direction, which includes eschewing traditional lane markings and connecting lines between objects, such as safety objects to determine a new boundary. 
This determines both traditional boundaries and the position the vehicle should be located in relation to them using the vehicle’s corresponding travel direction and determines constructed boundaries using objects and the position the vehicle should be located in relation to the constructed boundaries, at least by not being within the closed lane side of these boundaries and being within the open lane side, which is sidedness and allowed travel location. [0024], [0061] perception engine includes machine learning model that identifies information of the environment including objects and lane open and closed status and the lane shape is determined by the singular or combination of trained machine learning models, which determines open and closed lane statuses by bounding the lanes. Perceived lane statuses are perception data, which is used by machine learning model to generate alternative lane shapes, which must also receive other perception data as alternative lane shapes are generated within perceived environment.) output each of the plurality of perceived boundaries; ([0024], [0061] perception engine includes machine learning model that identifies information of the environment including objects and lane open and closed status and machine learning model generates alternative lane shapes from perceived lane statuses. This outputs perceived boundaries based on received perception data and analysis of received perception data.) and for each of the plurality of perceived boundaries for which an associated sidedness attribute is detected, output the associated sidedness attribute therefor; ([0024], [0061] perception engine includes machine learning model that identifies information of the environment including objects and lane open and closed status and machine learning model generates alternative lane shapes from perceived lane statuses. 
This outputs an open status for an alternative lane shape and a closed status elsewhere, where the vehicle may travel within the open status alternative lane shape. This sidedness is an attribute of the lane boundaries and lane shape denoting that one side of a lane boundary is an active or open side and the other side is an inactive or closed side.) and control the autonomous vehicle using the plurality of perceived boundaries and the sidedness attributes thereof. ([0031] one or more trajectories for the vehicle may be determined based on closed and open lane status and boundaries of the lanes, which is then used to control the vehicle.)

In regards to claim 2, Ma teaches the autonomous vehicle control system of claim 1, wherein the trained machine learning model is further configured to integrate detection of perceived boundaries associated with a plurality of semantic boundary types for the roadway. ([0056] traditional lanes may be determined bounded by traditional lane markers, for example hashed lane markers and double yellow line. [0025], [0034], [0058] boundaries may also be constructed by connecting objects, which, when safety objects, include a collection of traffic cones and construction. These are at least semantic boundary types of hashed lane markers, double lines, and virtual boundaries which may represent construction.)

In regards to claim 3, Ma teaches the autonomous vehicle control system of claim 2, wherein the plurality of semantic boundary types includes two or more of a virtual construction semantic boundary type, a physical barrier semantic boundary type, a painted lane semantic boundary type, and a road edge semantic boundary type. ([0056] traditional lanes may be determined bounded by traditional lane markers, for example hashed lane markers and double yellow line. [0025], [0034], [0058] boundaries may also be constructed by connecting objects, which, when safety objects, include a collection of traffic cones and construction.
These are at least semantic boundary types of hashed lane markers, double lines, which are both painted lane boundary types, and virtual boundaries which may represent construction.)

In regards to claim 4, Ma teaches the autonomous vehicle control system of claim 1, wherein the trained machine learning model is further configured to integrate detection of perceived pathways associated with an ego vehicle and/or other vehicles on the roadway. ([0061] machine learning model may be configured to generate alternative lane shapes, which are pathways for the vehicle.)

In regards to claim 7, Ma teaches the autonomous vehicle control system of claim 1, wherein the trained machine learning model includes at least one memory and is further configured to track perceived boundaries over a plurality of intervals by persisting one or more features over one or more intervals. ([0061] machine learning model may be trained to generate alternative lane shapes, which are pathways for the vehicle, which are used to update map information. This stores the information at least over a time period of a current use to a subsequent use within map data. [0069], [0070] as the processor performs operations using the memory, including the machine learning model, the operations of the machine learning model are at least temporarily stored within the memory as well, which includes multiple processing cycles.)

In regards to claim 8, Ma teaches an autonomous vehicle control system for an autonomous vehicle, comprising: (Figs. 1, 5.) one or more processors; ([0069], [0070] processors perform operations.) and memory storing instructions that, when executed by the one or more processors, cause the autonomous vehicle control system to: ([0069], [0070] processors perform operations stored in memory to control vehicle.)
receive perception data from at least one perception sensor positioned to sense a roadway upon which the autonomous vehicle is disposed; ([0020], [0024], [0025] vehicle and vehicle’s perception engine may receive sensor data from depth sensors, image sensors, and other perception sensors.) generate a plurality of perceived boundaries for the roadway by processing the perception data using a trained machine learning model that integrates detection of boundaries with detection of attributes for at least a subset of the plurality of perceived boundaries, wherein a first perceived boundary of the plurality of perceived boundaries that is generated using the trained machine learning model is defined by a plurality of spaced apart construction elements that are detected and linked together into a virtual boundary, and wherein the trained machine learning model is configured to receive the perception data, and in response to receiving the perception data; ([0025] objects perceived by the sensors in the environment of the vehicle may be detected and classified using machine learning model, including set as a safety class. [0027], [0029] when at least one object is classified as a safety object, lane closure analysis is triggered, analyzing the current lane and any adjacent lanes to determine if the lane is closed or open, where the lane is particularly determined to be open until a point of closest safety object determination. [0056], [0058] existing lanes may be processed by recognizing their boundaries and corresponding travel direction and analyzed with their closures to determine how the vehicle should proceed based on its own travel direction, which includes eschewing traditional lane markings and connecting lines between objects, such as safety objects to determine a new boundary. 
This determines both traditional boundaries and the position the vehicle should be located in relation to them using the vehicle’s corresponding travel direction and determines constructed boundaries using objects and the position the vehicle should be located in relation to the constructed boundaries, at least by not being within the closed lane side of these boundaries and being within the open lane side. [0025], [0034], [0058] boundaries may be constructed by connecting objects, which when safety objects, include a collection of traffic cones and construction, which form a virtual boundary. [0024], [0061] perception engine includes machine learning model that identifies information of the environment including objects and lane open and closed status and the lane shape is determined by the singular or combination of trained machine learning models, which determines open and closed lane statuses by bounding the lanes. Perceived lane statuses are perception data, which is used by machine learning model to generate alternative lane shapes, which must also receive other perception data as alternative lane shapes are generated within perceived environment.) output each of the plurality of perceived boundaries; ([0024], [0061] perception engine includes machine learning model that identifies information of the environment including objects and lane open and closed status and machine learning model generates alternative lane shapes from perceived lane statuses. This outputs perceived boundaries based on received perception data and analysis of received perception data.) and for each of the plurality of perceived boundaries for which an associated attribute is detected, output the associated attribute therefor; ([0024], [0061] perception engine includes machine learning model that identifies information of the environment including objects and lane open and closed status and machine learning model generates alternative lane shapes from perceived lane statuses. 
This outputs an open status for an alternative lane shape and a closed status elsewhere, where the vehicle may travel within the open status alternative lane shape. This sidedness is an attribute of the lane boundaries and lane shape denoting that one side of a lane boundary is an active or open side and the other side is an inactive or closed side.) determine one or more active lanes using the plurality of perceived boundaries; ([0031], [0056], [0058] existing lanes may be processed by recognizing their boundaries and corresponding travel direction and analyzed with their closures to determine how the vehicle should proceed based on its own travel direction, which includes eschewing traditional lane markings and connecting lines between objects, such as safety objects, to determine a new boundary. This determines both traditional boundaries and the position the vehicle should be located in relation to them using the vehicle's corresponding travel direction and determines constructed boundaries using objects and the position the vehicle should be located in relation to the constructed boundaries, at least by not being within the closed lane side of these boundaries and being within the open lane side. This determines closed and inactive lanes and open and active lanes using the boundaries. Lanes may be further alternatively made active when the vehicle is within or planned to be within those lanes.) and control the autonomous vehicle using the plurality of perceived boundaries and the attributes thereof. ([0031] one or more trajectories for the vehicle may be determined based on closed and open lane status and boundaries of the lanes, which is then used to control the vehicle.)

In regards to claim 9, Ma teaches the autonomous vehicle control system of claim 8. Claim 9 recites a system having substantially the same features of claim 2 above, therefore claim 9 is rejected for the same reasons as claim 2.
In regards to claim 10, Ma teaches the autonomous vehicle control system of claim 9. Claim 10 recites a system having substantially the same features of claim 3 above, therefore claim 10 is rejected for the same reasons as claim 3.

In regards to claim 11, Ma teaches the autonomous vehicle control system of claim 8. Claim 11 recites a system having substantially the same features of claim 4 above, therefore claim 11 is rejected for the same reasons as claim 4.

In regards to claim 14, Ma teaches the autonomous vehicle control system of claim 8. Claim 14 recites a system having substantially the same features of claim 7 above, therefore claim 14 is rejected for the same reasons as claim 7.

In regards to claim 15, Ma teaches the autonomous vehicle control system of claim 8, wherein the attributes include sidedness attributes identifying drivable sides of associated perceived boundaries. ([0027], [0029] when at least one object is classified as a safety object, lane closure analysis is triggered, analyzing the current lane and any adjacent lanes to determine if the lane is closed or open, where the lane is particularly determined to be open until a point of closest safety object determination. [0058] boundaries are constructed by connecting objects to determine shape of open lane and closed lane, where the vehicle is not allowed to be within the closed lane but is allowed to be within the open lane, which is sidedness.)

In regards to claim 16, Ma teaches the autonomous vehicle control system of claim 8, wherein the attributes include active attributes identifying active states of associated perceived boundaries. ([0013] type of closure may be determined such as roadway construction, road cleaning, and the like, which are active attributes associated with the states of the perceived boundaries marking a closed lane section.)
In regards to claim 17, Ma teaches the autonomous vehicle control system of claim 16, wherein the active attributes identify active construction proximate associated perceived boundaries. ([0013] type of closure may be determined such as roadway construction, road cleaning, and the like, which identifies construction associated with the perceived boundaries marking a closed lane section.)

In regards to claim 18, Ma teaches the autonomous vehicle control system of claim 16, wherein the active attributes identify potential hazards proximate associated perceived boundaries. ([0013] type of closure may be determined such as roadway construction, road cleaning for example for snow, and the like, which identifies at least weather related hazards near the boundaries marking a closed lane section.)

In regards to claim 19, Ma teaches a method of operating an autonomous vehicle with an autonomous vehicle control system, comprising: (Figs. 2A-2E.) receiving perception data from at least one perception sensor positioned to sense a roadway upon which the autonomous vehicle is disposed; ([0020], [0024], [0025] vehicle and vehicle's perception engine may receive sensor data from depth sensors, image sensors, and other perception sensors. [0033] this occurs in step 202.)
generating a plurality of perceived boundaries for the roadway by processing the perception data using a trained machine learning model that integrates detection of boundaries with detection of attributes for at least a subset of the plurality of perceived boundaries, wherein a first perceived boundary of the plurality of perceived boundaries that is generated using the trained machine learning model is defined by a plurality of spaced apart construction elements that are detected and linked together into a virtual boundary, and wherein the trained machine learning model is configured to receive the perception data, and in response to receiving the perception data; ([0025] objects perceived by the sensors in the environment of the vehicle may be detected and classified using machine learning model, including set as a safety class. [0027], [0029] when at least one object is classified as a safety object, lane closure analysis is triggered, analyzing the current lane and any adjacent lanes to determine if the lane is closed or open, where the lane is particularly determined to be open until a point of closest safety object determination. [0056], [0058] existing lanes may be processed by recognizing their boundaries and corresponding travel direction and analyzed with their closures to determine how the vehicle should proceed based on its own travel direction, which includes eschewing traditional lane markings and connecting lines between objects, such as safety objects to determine a new boundary. This determines both traditional boundaries and the position the vehicle should be located in relation to them using the vehicle’s corresponding travel direction and determines constructed boundaries using objects and the position the vehicle should be located in relation to the constructed boundaries, at least by not being within the closed lane side of these boundaries and being within the open lane side. 
[0025], [0034], [0058] boundaries may be constructed by connecting objects, which, when safety objects, include a collection of traffic cones and construction, which form a virtual boundary. [0024], [0061] perception engine includes machine learning model that identifies information of the environment including objects and lane open and closed status and the lane shape is determined by the singular or combination of trained machine learning models, which determines open and closed lane statuses by bounding the lanes. Perceived lane statuses are perception data, which is used by machine learning model to generate alternative lane shapes, which must also receive other perception data as alternative lane shapes are generated within perceived environment.) output each of the plurality of perceived boundaries; ([0024], [0061] perception engine includes machine learning model that identifies information of the environment including objects and lane open and closed status and machine learning model generates alternative lane shapes from perceived lane statuses. This outputs perceived boundaries based on received perception data and analysis of received perception data.) and for each of the plurality of perceived boundaries for which an associated attribute is detected, output the associated attribute therefor; ([0024], [0061] perception engine includes machine learning model that identifies information of the environment including objects and lane open and closed status and machine learning model generates alternative lane shapes from perceived lane statuses. This outputs an open status for an alternative lane shape and a closed status elsewhere, where the vehicle may travel within the open status alternative lane shape. This sidedness is an attribute of the lane boundaries and lane shape denoting that one side of a lane boundary is an active or open side and the other side is an inactive or closed side.)
determining one or more active lanes using the plurality of perceived boundaries; ([0056], [0058] existing lanes may be processed by recognizing their boundaries and corresponding travel direction and analyzed with their closures to determine how the vehicle should proceed based on its own travel direction, which includes eschewing traditional lane markings and connecting lines between objects, such as safety objects, to determine a new boundary. This determines both traditional boundaries and the position the vehicle should be located in relation to them using the vehicle’s corresponding travel direction, and determines constructed boundaries using objects and the position the vehicle should be located in relation to the constructed boundaries, at least by not being within the closed lane side of these boundaries and being within the open lane side. This determines closed and inactive lanes and open and active lanes using the boundaries.) and controlling the autonomous vehicle using the one or more lanes and the plurality of perceived boundaries. ([0031] one or more trajectories for the vehicle may be determined based on closed and open lane status and boundaries of the lanes, which is then used to control the vehicle.)

In regards to claim 21, Ma teaches the autonomous vehicle control system of claim 1, wherein the autonomous vehicle control system is further configured to augment a digital map with the perceived boundaries and associated attributes therefor output by the trained machine learning model to generate an augmented digital map, and to control the autonomous vehicle using the plurality of perceived boundaries and the sidedness attributes thereof by determining one or more active lanes using the augmented digital map.
([0018], [0053], [0061] the map is updated to indicate lanes are closed or open, including alternative lane shapes, which necessarily includes their boundaries, where alternative lane shapes with open and closed statuses of lane sections are generated by the machine learning model. [0031] one or more trajectories for the vehicle may be determined based on closed and open lane status and boundaries of the lanes, which is then used to control the vehicle, which causes the vehicle to travel through open lane status segments and not travel through closed lane status segments, including traveling along alternative lane shapes as required.)

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 5, 6, 12, and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Ma in view of the non-patent literature Rath et al., “Multi-Head Deep Learning Models for Multi-Label Classification” (“Rath”).

In regards to claim 5, Ma teaches the autonomous vehicle control system of claim 1. Ma also teaches that objects perceived by the sensors in the environment of the vehicle may be detected and classified using a machine learning model, including being set as a safety class ([0025]). A machine learning model may also be used to determine lane shape ([0061]).
Ma does not teach: wherein the trained machine learning model is a multi-head machine learning model including a plurality of output heads, the plurality of output heads including at least one boundary output head that outputs the plurality of perceived boundaries and the sidedness attributes thereof and at least one mainline perception output head that outputs a plurality of objects detected in a vicinity of the autonomous vehicle.

However, Rath teaches using a multi-headed deep learning model, which allows for providing multiple labels for more complex data (Pages 1-3). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to modify the vehicle system of Ma by incorporating the teachings of Rath, such that the different machine learning models may be particularly implemented as a single multi-headed machine learning model, where one head performs object identification and the other performs lane shape identification, determining particularly which lane sections are closed and open. The motivation to do so is that, as acknowledged by Rath, such a multi-headed machine learning model allows for proper labeling of complex data (Pages 1-3), which one of ordinary skill would have recognized is highly efficient.

In regards to claim 6, Ma, as modified by Rath, teaches the autonomous vehicle control system of claim 5, wherein the plurality of objects includes other vehicles, pedestrians, and/or construction elements in the roadway. ([0010], [0015], [0025] classification of objects includes detection of pedestrians, workers, signage and traffic cones, for example, which includes construction elements.)

In regards to claim 12, Ma teaches the autonomous vehicle control system of claim 8. Claim 12 recites a system having substantially the same features as claim 5 above; therefore claim 12 is rejected for the same reasons as claim 5.
In regards to claim 13, Ma, as modified by Rath, teaches the autonomous vehicle control system of claim 12. Claim 13 recites a system having substantially the same features as claim 6 above; therefore claim 13 is rejected for the same reasons as claim 6.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Hetang et al. (US 20220402520) teaches generating a synthetic scene for a vehicle by using artifacts as lane barriers. Djuric et al. (US 20200209857) teaches selection by a vehicle between different navigation modes.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MATTHIAS S WEISFELD whose telephone number is (571)272-7258. The examiner can normally be reached Monday-Thursday 7:00 AM - 4:00 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ramya Burgess, can be reached at Ramya.Burgess@USPTO.GOV. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MATTHIAS S WEISFELD/
Examiner, Art Unit 3661
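The multi-head architecture at issue in the claim 5 rejection (a shared trunk feeding one boundary/sidedness output head and one mainline perception output head, per the proposed combination of Ma and Rath) might look like the following minimal sketch. The shapes, names, and nonlinearity are illustrative assumptions, not anything disclosed in the application or the cited references.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: a 32-dim fused sensor input, a 16-dim shared
# embedding, a 4-value boundary/sidedness head, a 3-class object head.
D_IN, D_FEAT, N_BOUNDARY, N_OBJECT = 32, 16, 4, 3

W_backbone = rng.normal(size=(D_IN, D_FEAT))     # shared trunk weights
W_boundary = rng.normal(size=(D_FEAT, N_BOUNDARY))  # boundary + sidedness head
W_object = rng.normal(size=(D_FEAT, N_OBJECT))      # mainline perception head

def forward(perception_data):
    """One forward pass: a single shared representation, two outputs."""
    feat = np.tanh(perception_data @ W_backbone)  # shared features
    boundary_logits = feat @ W_boundary           # e.g. boundary params / sidedness
    object_logits = feat @ W_object               # e.g. cone / pedestrian / vehicle
    return boundary_logits, object_logits

x = rng.normal(size=(1, D_IN))  # stand-in for fused perception data
b, o = forward(x)
```

The design point the rejection leans on is simply that both heads read the same learned features, so one model call yields both the boundary/attribute output and the object output.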

Prosecution Timeline

Sep 05, 2023: Application Filed
Apr 30, 2025: Non-Final Rejection (§102, §103)
Jul 29, 2025: Examiner Interview Summary
Jul 29, 2025: Applicant Interview (Telephonic)
Aug 04, 2025: Response Filed
Sep 03, 2025: Final Rejection (§102, §103)
Nov 13, 2025: Response after Non-Final Action
Dec 10, 2025: Request for Continued Examination
Dec 20, 2025: Response after Non-Final Action
Jan 12, 2026: Non-Final Rejection (§102, §103)
Apr 13, 2026: Applicant Interview (Telephonic)
Apr 13, 2026: Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12600360: VEHICLE AND METHOD OF CONTROLLING THE SAME (granted Apr 14, 2026; 2y 5m to grant)
Patent 12600233: VEHICLE DISPLAY DEVICE (granted Apr 14, 2026; 2y 5m to grant)
Patent 12597271: SYSTEMS AND METHODS FOR USING IMAGE DATA TO ANALYZE AN IMAGE (granted Apr 07, 2026; 2y 5m to grant)
Patent 12584760: INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND MEDIUM (granted Mar 24, 2026; 2y 5m to grant)
Patent 12576865: CONTROL SYSTEM TESTING UTILIZING RULEBOOK SCENARIO GENERATION (granted Mar 17, 2026; 2y 5m to grant)
Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 59%
With Interview: 78% (+18.7%)
Median Time to Grant: 3y 0m
PTA Risk: High
Based on 174 resolved cases by this examiner. Grant probability derived from career allow rate.
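Assuming the headline figures are derived as the note states (career allow rate = grants divided by resolved cases, with the with-interview figure adding the observed interview lift), the arithmetic is straightforward:

```python
# Assumed derivation of the displayed projections (not the tool's actual formula).
granted, resolved = 103, 174
career_allow_rate = granted / resolved   # career allow rate, roughly 0.59
interview_lift = 0.187                   # observed lift with an interview
with_interview = career_allow_rate + interview_lift  # roughly 0.78

print(round(career_allow_rate * 100), round(with_interview * 100))
```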
